This tutorial is intended for developers new to SDN application development with OpenDaylight. It has come a long way from its earlier version, and now focuses on MD-SAL and Karaf. While OpenDaylight is not simply an OpenFlow controller, OpenFlow remains a popular southbound protocol, and we use it here to introduce the platform.

An introductory presentation made at ONS 2015 is available at slideshare. The video recording is available on YouTube (Part 1, Part 2).

1. Setup

To get started, download and set up the SDN Hub Tutorial VM in VirtualBox or VMware. This VM has a sample OpenDaylight project that will pull the relevant dependent libraries without requiring you to clone the full OpenDaylight source code.

The tutorial application that we will work with is located in the /home/ubuntu/SDNHub_Opendaylight_Tutorial directory. Open a Terminal tab (Ctrl-Shift-T) and go into the SDNHub_Opendaylight_Tutorial folder, where we have included the two sample applications that this tutorial focuses on: 1) a hub / L2 learning switch, and 2) a network traffic monitoring tap.

Before we start, we recommend you run the following commands to update the tutorial code, which is available at

2. Fundamentals for OpenDaylight programming

OpenDaylight uses the following software tools and paradigms, and it is important to become familiar with them. The Service Abstraction Layer (SAL) is your friend for most development aspects.

  • Java interfaces: Java interfaces are used for event listening, service specifications, and design patterns. This is the main way in which specific bundles implement callback functions for events and indicate awareness of specific state. Many of the interfaces are auto-generated using YANG tools.
  • Maven: OpenDaylight uses Maven for easier build automation. Maven uses pom.xml (the Project Object Model for a bundle) to describe the dependencies between bundles and what bundles to load on start.
  • OSGi: This framework in the backend of OpenDaylight allows dynamically loading bundles and packaged JAR files, and binding bundles together for information exchange.
  • Karaf: Karaf is a small OSGi-based runtime that provides a lightweight container for loading different modules.

2.1 Maven and project building

Some basic understanding of Maven is essential for working with OpenDaylight. Anytime you create a new project or module, or expand the functionality of existing modules, you will have to update the various pom.xml and feature.xml files appropriately.

Let’s start by building the tutorial project in the VM using the following step.

A successful run indicates that the build completed; if something fails, the build stops at the module where it failed. All of the above modules are needed for the full project to work. Here are some important notes on the build:

  • The “mvn” command uses Apache Maven to build the tutorial code. It compiles code based on the pom.xml file in that directory. The “install” goal is essential for compilation; Maven also accepts an optional “clean” goal if you wish to clean the temporary build files.
  • Maven works by resolving dependencies between packages. For instance, our example applications (the learning-switch and tapapp code we will edit) depend on the OpenDaylight controller package and on the OpenFlowPlugin package. This triggers Maven to download the pre-compiled jar files from the remote repository, resolve dependencies, and so on.
    • Have a look at the snapshot repository of OpenDaylight to get an idea of what’s out there.
    • Also look at the ~/.m2/repository directory on your local machine. This is where Maven places all compiled or downloaded jar files. If you have build issues or Karaf runtime import issues, this directory is the place to look.
  • Creating a pom.xml with dependencies being downloaded is much, much faster than compiling the tutorial project within the full source of the Opendaylight controller. This can be compared with compiling an application instead of compiling the full operating system.
  • A Maven module has a module name, and Java code has a package name; they do not need to match. The learning-switch module has the following attributes. The Maven group-id and artifact-id are used for the name of the generated jar file, while the package name is only recognized within the source code.
    • pom.xml group-id: org.sdnhub.odl.tutorial
    • pom.xml artifact-id: learning-switch
    • Source code package name: org.sdnhub.odl.tutorial.learning.switch
  • Note: to speed up subsequent compilations, you can run “mvn install -DskipTests -DskipIT -nsu”. The “nsu” is short for no-snapshot-updates; it ensures that the compilation does not download snapshot definitions from the remote repository.
  • The main modules built are listed in the root pom.xml (located at ~/SDNHub_Opendaylight_Tutorial/pom.xml). Each child module and its respective child modules have their own pom.xml. All common properties (including versions and common dependencies) are listed in common/parent/pom.xml, which serves as the base for all other module pom.xml files.
  • All these poms are individually built and finally combined by distribution/pom.xml to prepare a running directory. The sample project’s distribution/opendaylight-karaf/pom.xml is specially crafted to include our sample applications and to specify one of them to be autoloaded when the controller is started.

2.2 Karaf and feature creation

Now that we have compiled our sample project, let’s run the controller itself, preferably in a different terminal.

Here are some notes on the above process of starting the controller:

  • Running Karaf starts all the Java bundles installed as jar files in the OSGi environment. Once all bundles are started, the createInstance() method in each implementation class is called, and the controller moves to an event-driven operation state.
  • The Karaf shell is your main portal for managing all applications and Java bundles. Press “Tab” to see all the CLI commands available here.
  • “feature:list” and “bundle:list -s” are commands that help you look at the active features and bundles in the Karaf runtime environment. An ‘x’ means the feature is loaded and active. The word “Active” means the module/bundle is running, “Resolved” means it has been stopped, and “Installed” means it is blocked on some missing dependency.
  • There are also commands to install a feature (feature:install) or a specific bundle (bundle:install). By default Karaf loads all the features listed in the distribution/opendaylight-karaf/target/assembly/etc/org.apache.karaf.features.cfg file. In our example, we autoload the sdnhub-tutorial-tapapp feature.
  • Karaf and OSGi do not provide a way to specify which modules get precedence over other modules. So we use the config subsystem of OpenDaylight to enforce the ordering. More on that in the next subsection.

For Karaf to load it, every application built must have a feature description, which is defined in the feature.xml file. For instance, the autoloaded sdnhub-tutorial-tapapp feature has the following corresponding feature description in feature.xml:

The above feature description dictates that installing the sdnhub-tutorial-tapapp feature requires loading all of the following; if even one of them cannot be loaded, the feature hangs:

  1. OpenFlow plugin,
  2. MD-SAL data broker,
  3. TapApp model,
  4. TapApp implementation,
  5. TapApp configuration for the config subsystem

If you create a new application, do not forget to include its description in the feature file.

2.3 Config subsystem

OpenDaylight has a built-in feature called the config subsystem that instantiates bundles in the appropriate order with the right MD-SAL dependencies pre-loaded. This is achieved with a configuration file, typically named with a number at the beginning to denote the loading order.

For instance, the tap application has a 50-tapapp-config.xml that is added to the Karaf feature. This XML file is read at run time and the appropriate dependencies are injected.

To get your new application to load correctly, this config.xml and the config augmentation in the implementation YANG file (e.g., tap-impl.yang) need to have the right namespace, package name, and artifact id.

2.4 Mininet

Mininet is network emulation software that works with Open vSwitch to create a set of OpenFlow switches connected to virtual hosts and interconnected in varying topologies. The tutorial VM already has Mininet installed, and it can be kicked off in a new terminal with the following command:

You will now see a Mininet CLI prompt after it starts 3 hosts and 1 switch. The switch also attempts to connect to a remote controller. Since we started OpenDaylight, you should see several lines written to the Karaf console once the switch connects to it.

You can begin a ping between two of the hosts. However, the ping will fail because there is no intelligence in the switch to learn the MAC addresses of each host and forward traffic to the correct switch ports.

Let us add that intelligence by installing the sdnhub-tutorial-learning-switch feature.

The ping still fails, however, because there is no default rule in the switch to send packet-in messages to the controller. Let’s add that rule and verify that the ping now succeeds.

You will notice that the pings actually take longer than they should for a switch (typically sub-1ms for such a simple single-switch topology). This is because the controller is currently functioning in “hub” mode and flooding every packet in software.

In the next few sections, we will learn enough programming aspects to convert the hub into a switch and also build other functionality in the platform.

3. Introduction to OpenDaylight architecture

Before we jump into the code, a high-level overview of the OpenDaylight controller is in order. OpenDaylight is a modular platform with most modules reusing some common services and interfaces. Each module is developed under a multi-vendor sub-project. You can find the list of projects here.

The idea behind building applications on the OpenDaylight platform is to leverage functionality in other platform bundles, each of which exports important services through Java interfaces. Many of these services are built in a provider-consumer model over an adaptation layer called MD-SAL.

With the MD-SAL data store at its core, programming in OpenDaylight involves adopting the Model-View-Control approach for SDN application development:

  1. YANG Model for data, RPC and notifications
  2. REST API view autogenerated and accessible through RESTconf
  3. Java Implementation coded to handle data changes, notifications and RPC call backs

3.1 Model-driven SAL (MD-SAL)

Model-driven Service Adaptation Layer (MD-SAL) is the kernel of the platform, where the different layers and modules are interconnected through well-defined APIs. Here are some important notes about MD-SAL:

  • Each API is generated from models defined in the YANG language during build time and loaded into the controller when the model bundle is loaded onto the Karaf platform.
  • At the core of the platform is a logically centralized data store that keeps relevant state in two buckets – 1) config data store, 2) operational data store.
  • All event calls and data go from a “provider” to a “consumer” through this central data store using MD-SAL mapping logic.

3.2 YANG model

OpenDaylight heavily uses YANG to model any data, notification or remote procedure call (RPC) that goes between different modules. For those unfamiliar with YANG, we recommend going over the YANG tutorial by Tail-F systems. In a nutshell, YANG is a language for describing the basic structure of some application data that is stored in a tree hierarchy within containers.

As an example, here is an excerpt of the YANG model for storing nodes (i.e., switches) and node-connectors (i.e., interfaces or ports) defined within opendaylight-inventory.yang:

Once this model is provided to the MD-SAL platform, by including it in the feature file and loading it in Karaf, MD-SAL creates two data stores for this module: 1) a config data store, and 2) an operational data store. The config data store is persisted by default across different runs of the controller (it is stored in the snapshots and journals directories in the Karaf run location).

3.3 Instance identifiers

An application or external end-user can post data to this data store, either through an MD-SAL transaction or RESTconf. The individual objects are stored in a parent-child hierarchy and accessed through YANG instance identifiers. For the above YANG model, let’s say there is a node-connector “openflow:1:1” stored in the data store. One can access details of that node-connector by creating an instance identifier as follows:

Or, it could be accessed through RESTconf by going to the URL http://localhost:8181/restconf/config/opendaylight-inventory:nodes/node/openflow:1/node-connector/openflow:1:1, where the keyword “opendaylight-inventory” corresponds to the namespace of that module, and “nodes” corresponds to the container at the root level of the tree.

Useful tip: In case you are provided with an instance identifier, it is possible to extract the instance identifier or key of the parent objects by performing firstKeyOf() or firstIdentifierOf() as follows:
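The real firstKeyOf() and firstIdentifierOf() live on MD-SAL’s generated InstanceIdentifier type, so the actual snippet depends on the ODL class path. The underlying idea, though, is just a typed path of (class, key) steps; the stand-in classes below are illustrative, not the actual MD-SAL API:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative stand-in for an instance identifier: a typed path of (type, key) steps.
class PathSketch {
    static class Node {}           // stand-in for the YANG-generated Node class
    static class NodeConnector {}  // stand-in for the YANG-generated NodeConnector class

    private static class Step {
        final Class<?> type;
        final String key;
        Step(Class<?> type, String key) { this.type = type; this.key = key; }
    }

    private final List<Step> steps = new ArrayList<>();

    PathSketch child(Class<?> type, String key) {
        steps.add(new Step(type, key));
        return this;
    }

    // Analogue of firstKeyOf(): return the key of the first path step
    // whose type matches the requested class, or null if absent.
    String firstKeyOf(Class<?> type) {
        for (Step s : steps) {
            if (s.type.equals(type)) return s.key;
        }
        return null;
    }
}
```

Given a path for nodes/node/openflow:1/node-connector/openflow:1:1, a firstKeyOf(Node.class)-style call recovers the parent switch id “openflow:1” without you walking the tree yourself.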

3.4 Data store Transactions

Once an instance identifier is created, a read or write transaction can be performed at that location in the data store using the DataBroker service. Two interfaces (viz., WriteTransaction and ReadOnlyTransaction) help you with this.

Let’s say we want to write a NodeConnector object to the data store; we would use the following code:

Most transactions return a Java Future object. It is important to check on the status of this Future object to see if the transaction succeeded and if there are any specific outputs generated.
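Checking that future is plain java.util.concurrent usage, independent of the MD-SAL types. A minimal sketch, using a CompletableFuture as a stand-in for the future returned by a transaction’s submit():

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;

class FutureCheckSketch {
    // Returns true if the (stand-in) transaction future completed normally.
    static boolean transactionSucceeded(CompletableFuture<Void> submitFuture) {
        try {
            submitFuture.get();   // blocks until the transaction completes
            return true;          // a normal return means the write was committed
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        } catch (ExecutionException e) {
            // e.getCause() describes why the transaction failed
            return false;
        }
    }
}
```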

Let’s say we want to read a NodeConnector object from the data store; we would use the following code:

Each transaction consumes a certain amount of CPU resources, and even more when cluster consistency is being enforced. This can limit the number of transactions per second. In OpenDaylight MD-SAL, one can use batch transactions and transaction chains to get better read and write performance; these are outside the scope of this tutorial.

3.5 Advanced YANG operations

Besides the basic data models shown above, it is possible to define the following constructs in YANG models to model additional services between modules:

  • Augmentations: Unless specified otherwise, YANG models can be expanded, in a new namespace, to add extra data within an existing data model. Note that the augmentations must be declared at build time.
    • For instance, OpenFlow rules are augmented over the Node object shown above to create the flow table augmentation. Here is what the augmented tree looks like.


  • Notifications: These are also called YANG notifications; they are used to publish one or more notifications with modeled data to registered listeners. Defining a notification in YANG generates Java interfaces that potential listeners can “implement” to receive the appropriate callback.
    • In the tap application in our sample project, we illustrate listening to node and node-connector updates and removals. Following is code written to listen for notifications defined in opendaylight-inventory.yang. OpendaylightInventoryListener is an autogenerated interface that defines the callback function onNodeUpdated().

  • Remote Procedure Call (RPC): MD-SAL allows one module to perform a procedure call with input/output on another module, without worrying about who the actual provider of that procedure is. MD-SAL handles the appropriate routing to map the caller and the callee.
    • In the tap application in our sample project, we illustrate performing an RPC call to clear all OpenFlow rules on the underlying switch. Unlike a notification, an RPC call has a unique recipient. Following is code written to perform an RPC call defined in sal-flow.yang, part of the OpenFlowPlugin project. The first step is to obtain a reference to the service, and the second step is to perform the YANG-defined function call with the appropriate input.
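Stripped of the ODL types, both mechanisms above are plain Java patterns: a notification is an observer registry that fans a callback out to every registered listener, while an RPC is a lookup of a single service implementation by its interface type. The names below are illustrative stand-ins, not the real MD-SAL classes:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class MdSalPatternsSketch {
    // Notification side: analogue of an autogenerated listener interface
    interface InventoryListener {
        void onNodeUpdated(String nodeId);
    }

    // Analogue of NotificationProviderService: publish() reaches every listener
    static class NotificationService {
        private final List<InventoryListener> listeners = new ArrayList<>();
        void registerNotificationListener(InventoryListener l) { listeners.add(l); }
        void publish(String nodeId) {
            for (InventoryListener l : listeners) l.onNodeUpdated(nodeId);
        }
    }

    // RPC side: analogue of a YANG-generated service interface with one RPC
    interface SalFlowService {
        String removeAllFlows(String nodeId);
    }

    // Analogue of RpcProviderRegistry: exactly one provider per service type
    static class RpcRegistry {
        private final Map<Class<?>, Object> providers = new HashMap<>();
        <T> void addRpcImplementation(Class<T> type, T impl) { providers.put(type, impl); }
        <T> T getRpcService(Class<T> type) { return type.cast(providers.get(type)); }
    }
}
```

The contrast is visible in the two shapes: publish() loops over every registered listener, while getRpcService() hands back the one provider registered for that interface.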

3.6 Northbound and Southbound plugins

Plugins are typically code abstractions used to integrate OpenDaylight with external systems. Following are some of the common plugins that app developers need to be familiar with.

  • Northbound plugins: The two most commonly used northbound interfaces to MD-SAL are RESTconf (enabled by installing the odl-restconf feature) and NETCONF (enabled by installing the odl-netconf-mdsal feature). Both of these can be used in combination with an intent layer.
  • Southbound plugins: OpenFlow plugin (enabled by installing odl-openflowplugin-southbound feature) and NETCONF connector (enabled by installing odl-netconf-connector-all feature) are two commonly used southbound plugins.
    • The rest of the tutorial builds applications using OpenFlow plugin, wherein OpenFlow FLOW_MOD, PACKET_IN, PACKET_OUT and other discovery operations are supported as YANG modeled data changes, notifications or RPC calls.
    • For an example of an application or example using NETCONF connector, look at the post on experimenting with NETCONF using the netconf-exercise application.

4. Basic steps to writing an OpenDaylight Application

For beginners, we recommend using Eclipse for viewing and editing the source code, and command-line for compiling and running the controller. Open Eclipse using the desktop shortcut.

We have already set up the Eclipse environment for you. But if you wish to do it from scratch, you should follow these instructions to set up Maven-Eclipse integration. After that you can go to File->Import and choose ‘Maven > Existing Maven Projects’. On the next screen, choose /home/ubuntu/SDNHub_Opendaylight_Tutorial as the Root Directory. Select the ‘learning-switch’ and ‘tapapp’ implementation pom.xml files that it finds, click on ‘Add project(s) to working set’ below, and click ‘Finish’.

Once Eclipse is set up, you should see the ‘learning-switch’ and ‘tapapp’ implementation folders in your Project Explorer pane. Double-click on one of the two, expand src/main/java -> org.sdnhub.odl.tutorial.[learning-switch or tapapp].impl, and walk through the code.

Step 1: Define data model

The first step in developing an application with the OpenDaylight framework is to visualize the state maintained by your application and to model it in the YANG language.

If you inspect tapapp/model/src/main/yang/tap.yang, you will see an example YANG module, where the container for archiving the tap configurations is defined. The container has a list within it to track each tap configuration. Each tap configuration contains information about the source, sink, type of traffic to capture, src/dst IP address, and src/dst MAC address.

Once you add the model, you can build the project and run Karaf to immediately verify whether your model is sufficient for your application. For instance, when we run Karaf, even without any implementation or event handlers defined, you will be able to store data in the data store as follows:

You can open a browser and inspect the data store at http://localhost:8181/restconf/config/tap:tap-spec. This is a big advantage of MD-SAL and developing applications with OpenDaylight: you can directly access the data store using REST as soon as a model is defined and included in the Karaf runtime. Here is a Postman collection you can use for tap creation and deletion. That collection also contains some sample OpenFlowPlugin REST calls for flow programming.

Step 2: Activation

In OpenDaylight, we add activation code within the createInstance() method of the YANG-generated implementation module class. For instance, if you look at the auto-generated class, you will see the createInstance() method with code written to create the necessary helper classes. Since most OpenDaylight applications will primarily interact with MD-SAL, it is important to extract references to the following necessary services:

  1. DataBroker: As mentioned earlier, this object is necessary for read and write transactions with the data store
  2. NotificationProviderService: This is the service with which any listener registers to receive YANG-defined notifications
  3. RpcProviderRegistry: This service connects an RPC provider with the consumer by providing the consumer with an instance of the RPC interface.

Step 3: Event handlers and other call backs

Once you have a model defined and the class activated, the application should perform one or more of the following, typically in the constructor of the helper class:

  1. register for data change notifications with the MD-SAL data broker to receive onDataChanged() callback
  2. register for YANG notifications with the MD-SAL notification service to receive custom callbacks
  3. register as a provider for certain RPC calls if this application is a provider
  4. obtain access to the relevant RPC provider interfaces to make calls

Then, the helper class implements all the necessary logic within the callback functions for the event handlers and the RPC provider functions. The application can also publish notifications of its own to other listeners using the notificationProviderService.publish() call.

In the sample, you can see the skeleton for the callback handler included along with some debug logs. Whenever you add a tap configuration to the data store, you can see what is sent to the onDataChanged() callback.

5. Sample application

5.1 Tap application

The traffic monitoring tap application is a simple proactive flow programmer that deals with sources, sinks, and traffic types. We will follow the steps described earlier to build this application. The model of the tap configuration data was described earlier in Step 1. Based on that model, we do the following to make the tap work:

  1. Extract header details from the tap object during the onDataChanged() event handling
  2. For each source-port, perform the following steps to create a flow
    1. Create match object using appropriate builders
    2. Create action list with a list of actions specifying output to sink-port
    3. Create flow object with match and action list. Write this flow object to the Flow table of the node

Note that the traffic type is a special enum field defined in the YANG model to allow the user to specify important traffic types worth monitoring, like ARP, ICMP, DNS, DHCP, TCP, and UDP. To convert the enum to actual values for dl_type, nw_proto, and tp_port, one can use the following switch statement:
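A sketch of that mapping is below. The enum and field names are hypothetical stand-ins for the YANG-generated types, but the protocol numbers are the standard ones: EtherType 0x0806 for ARP, IP protocol 1/6/17 for ICMP/TCP/UDP, and well-known UDP ports 53 (DNS) and 67 (DHCP):

```java
// Mapping the YANG traffic-type enum to concrete match-field values (illustrative names).
class TrafficTypeSketch {
    enum TrafficType { ARP, ICMP, DNS, DHCP, TCP, UDP }

    static final int ETH_IPV4 = 0x0800, ETH_ARP = 0x0806;
    static final int PROTO_ICMP = 1, PROTO_TCP = 6, PROTO_UDP = 17;

    int dlType, nwProto, tpPort;   // -1 means "do not match on this field"

    TrafficTypeSketch(TrafficType type) {
        dlType = ETH_IPV4; nwProto = -1; tpPort = -1;
        switch (type) {
            case ARP:  dlType = ETH_ARP; break;      // ARP is its own EtherType
            case ICMP: nwProto = PROTO_ICMP; break;
            case DNS:  nwProto = PROTO_UDP; tpPort = 53; break;
            case DHCP: nwProto = PROTO_UDP; tpPort = 67; break;
            case TCP:  nwProto = PROTO_TCP; break;
            case UDP:  nwProto = PROTO_UDP; break;
        }
    }
}
```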

In the skeleton code provided in the tapapp/impl directory, you will see the activation code already included. You can complete the rest based on the above logic.

5.2 Learning Switch

For the purposes of this tutorial, you should attempt to convert the hub into a MAC learning switch that programs flows. We encourage you to add the necessary code within the provided skeleton; the main logic for the hub and learning switch is available here. At a high level, here are the steps to perform:

  1. Ignore LLDP packets
  2. If behaving as “hub”, perform a PACKET_OUT with FLOOD action
  3. Else if behaving as “learning switch”,
    1. Extract MAC addresses
    2. Update MAC table with source MAC address
    3. Lookup in MAC table for the target node connector of dst_mac
      1. If found, perform FLOW_MOD for that dst_mac through the target node connector, and perform PACKET_OUT of this packet to target node connector
      2. If not found, perform a PACKET_OUT with FLOOD action
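Stripped of the OpenFlow plumbing, the MAC-table logic of steps 3.1 to 3.3 reduces to a hashmap keyed by MAC address. A minimal sketch with MACs and ports as plain strings (the real handler must also build the FLOW_MOD and PACKET_OUT messages):

```java
import java.util.HashMap;
import java.util.Map;

// Core MAC-learning logic of steps 3.1-3.3, with MACs and ports as plain strings.
class MacTableSketch {
    static final String FLOOD = "FLOOD";
    private final Map<String, String> macTable = new HashMap<>();

    // Called on every PACKET_IN: learn the source, then decide the output port.
    String packetIn(String srcMac, String dstMac, String inPort) {
        macTable.put(srcMac, inPort);              // step 3.2: learn src MAC -> ingress port
        String outPort = macTable.get(dstMac);     // step 3.3: look up dst MAC
        return outPort != null ? outPort : FLOOD;  // flood when the destination is unknown
    }
}
```

Note how the first packet from h1 to h2 floods (h2’s MAC is still unknown), while the reply can be forwarded directly because h1’s MAC was learned from the first packet.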

5.3 Your homework

  1. Complete the learning switch to perform flow programming such that the Open vSwitch switch “s1” has a single rule matching the destination MAC address, the latency of the ping between h1 and h2 is < 1 ms, and you see output as follows in the mininet window.
  2. Convert the MAC table from a local hashmap to augmented data in the MD-SAL data store, perhaps within the opendaylight-inventory root as follows:
  3. Complete the tap application’s onDataChanged() so that OpenFlow rules are programmed when tap configurations are received. For instance, once the data is inserted using curl (as in the example above), the rule will be pushed to the switch underneath as follows:

5.4 Solutions

In the impl directory of the two applications, you will see *.solution files that implement the two applications. Copy them over the .java files and recompile to see it in action.

5.5 Extra credit

  1. Extend the learning switch to work with multiple switches. The solution code only works with a single switch. In mininet, you can spawn a larger tree topology using the following:
  2. Support matching tap traffic in both directions, such that the rule programmed will look as follows:
  3. Define RPC calls in YANG called “create-tap” and “delete-tap”, and make the TutorialTapProvider class a provider for those RPC calls.
  4. Save the IP and TCP header information, derived from the traffic type, as an augmentation to the tap configuration (Note: tap.yang already defines this at the bottom)
  5. Test persistence of tap configuration across reboot.

6. Other useful Features

The OpenDaylight distribution has some pre-written applications that will help you develop and debug your application. Previously, we learned how useful the odl-restconf-noauth feature can be for inspecting the data store. Besides that, we encourage you to also play with the following features:

  • NETCONF client: Install the odl-netconf-connector-all feature to enable the OpenDaylight controller to configure and manage underlying devices using the NETCONF protocol; typically, devices (like routers) with a traditional control plane provide this NETCONF option for injecting configuration and performing RPC operations on the device. Here is a tutorial post for experimenting with NETCONF support and using it to orchestrate non-OpenFlow devices.
  • l2switch: Has code for ARP handlers, host tracking, and a more complex L2 switch. You can model your applications after this. Alternatively, you can install the odl-l2switch-all feature to test your switch before building other applications.
  • dlux: Dlux is the AngularJS-based web UI of OpenDaylight. It has several built-in modules to visualize the data cached in the data store. Install the odl-dlux-all feature and visit http://localhost:8181/dlux
  • Link discovery: Although we have used OpenFlow plugin in both the above applications, we did not use all its features. The plugin also tracks topology and adds it to the network-topology.yang model. You can view all LLDP discovered links from http://localhost:8181/restconf/operational/network-topology:network-topology. (Quiz: How to get notifications on new link additions? Answer: Register a data change listener for the link instance identifier)
  • apidocs: OpenDaylight provides an interface to get a list of all the defined data models, notifications, and RPC calls at runtime. To get the list of all APIs exported over RESTconf, you can use the API explorer: install the odl-mdsal-apidocs feature and see the list of APIs by accessing http://localhost:8080/apidoc/explorer/. We encourage you to play with this data access and possibly create new flows through this REST interface. Once you create a new flow, you can see it in the Open vSwitch table using the command:

7. Debugging

Debugging is an important part of app development. With Eclipse (and other IDEs), you can connect to the Karaf instance and perform step-wise debugging of your code. To debug your code, you need to start OpenDaylight with the command-line parameter “debug”, which allows an IDE (e.g., Eclipse) to connect to Karaf (listening on port 5005) for debugging purposes.

Once you kick off Karaf in debug mode, you can connect to that JVM from Eclipse by clicking on Debug -> Configurations and adding a new Remote Java Application as shown in the figure below. Once you have that entered and connected to the JVM, you can add breakpoints in your code to break at specific lines. If execution breaks at a line, you should see the Debug tab, where you can decide how to proceed and debug.

2 comments on “OpenDaylight Application Developer’s tutorial”

  1. on December 6, 2015
    jabbson says:

    For those struggling with “unable to login” and “Insufficient roles/credentials for operation” – you can fix this error by doing “mvn clean” before “mvn install -nsu”, OR simply by doing “mvn clean install -nsu”

  2. on December 16, 2015
    Martin Wilck says:

    I was struggling with another problem that others have reported for Lithium, too – ODL never opened port 6633. “mvn clean” solved that problem, too.
