
Opendaylight Toolkit: Roll-your-own Opendaylight App


This is a quick introduction to the Opendaylight Toolkit. We will not go over Opendaylight basics here – please refer to our intro tutorial for an overview and intro to Opendaylight.

1. Why Opendaylight Toolkit?

Opendaylight is highly modular and highly customizable. However, the existing release version of the Opendaylight controller may not suit everyone’s needs. The Hydrogen release controller loads nearly 250 bundles on startup, which not only requires more resources than necessary, but also requires running modules that might interfere with what your application intends to do.

To fix this issue, a couple of Opendaylight developers – Madhu Venugopal of RedHat, Andrew Kim of Cisco, and some others – came up with the idea of a “lean and mean” Opendaylight distribution that could be adapted to the specific needs of app developers. The result is the Opendaylight Toolkit. The Toolkit lets you get started with simple app skeletons that you can adapt to your needs, using the concept of Maven archetypes. Archetypes are “templates” of applications that can be used to generate your own application with associated Northbound and Web interfaces.

In this post, we will give a quick overview of setting up a simple archetype with a northbound API and web UI, and an L2 learning switch app that shows the learned MAC table of the switch.

2. Getting started with Opendaylight Toolkit

To run toolkit, you will need Java 1.7, Maven, etc. Please make sure you have your environment set up, or instead use the SDN Hub VM.

Step 1

To start off, clone the git repository from here. This will create a folder called toolkit with the following contents:

ubuntu@ubuntu:~/toolkit$ ls
common  main  pom.xml  README.md  web

Step 2

The main folder contains a folder called ‘archetypes’, which contains a variety of templates (some still incomplete). Let us get started using a ‘simple-app’ archetype. First, we need to let Maven know about this specific archetype so that we can generate apps out of this archetype.

ubuntu@ubuntu:~/toolkit$ cd main/archetypes/archetype-app-simple
ubuntu@ubuntu:~/toolkit/main/archetypes/archetype-app-simple$ mvn clean install

...

[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 9.889s
[INFO] Finished at: Sun May 04 10:32:59 PDT 2014
[INFO] Final Memory: 11M/28M
[INFO] ------------------------------------------------------------------------

Step 3

Now let’s generate an app from the archetype we just installed to the local catalog of archetypes. Note that Maven now knows of the archetype and asks us to choose the ‘app-simple’ archetype to use.

ubuntu@ubuntu:~/toolkit/main/archetypes/archetype-app-simple$ cd ../../../ # back to toolkit/ folder
ubuntu@ubuntu:~/toolkit$ mvn archetype:generate -DarchetypeCatalog=local
...

Choose archetype:
1: local -> org.opendaylight.toolkit:toolkit-app-simple-archetype (opendaylight-toolkit-app-simple-archetype)
Choose a number or apply filter (format: [groupId:]artifactId, case sensitive contains): : 1
Define value for property 'groupId': : org.sdnhub
Define value for property 'artifactId': : simpleapp
Define value for property 'version':  1.0-SNAPSHOT: :
Define value for property 'package':  org.sdnhub: :
Define value for property 'REST-Resource-Name': : simpleapp
Confirm properties configuration:
groupId: org.sdnhub
artifactId: simpleapp
version: 1.0-SNAPSHOT
package: org.sdnhub
REST-Resource-Name: simpleapp
Y: : Y

...

[INFO] project created from Archetype in dir: /home/ubuntu/toolkit/simpleapp
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] common ............................................ SKIPPED
[INFO] web ............................................... SKIPPED
[INFO] main .............................................. SKIPPED
[INFO] opendaylight-toolkit .............................. SUCCESS [4:14.385s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 4:16.125s
[INFO] Finished at: Sun May 04 10:39:59 PDT 2014
[INFO] Final Memory: 17M/42M
[INFO] ------------------------------------------------------------------------

Step 4

ubuntu@ubuntu:~/toolkit$ ls
common  main  pom.xml  README.md  simpleapp  web

As we can see, a simpleapp folder has been created in the top-level directory.

Step 5

Now we have one extra step to get going. There are some web resources (CSS, JS) that we need to download. This requires you to set up node.js and npm (node package manager) and get a web package manager called ‘bower’.

If you have bower installed:

ubuntu@ubuntu:~/toolkit$ cd web/src/main/resources/
ubuntu@ubuntu:~/toolkit/web/src/main/resources/$ for i in css js; do cd $i && bower install && cd -; done

If you do not have bower installed:

ubuntu@ubuntu:~/toolkit/web/src/main/resources$ cd ..
ubuntu@ubuntu:~/toolkit/web/src/main/$ wget https://www.dropbox.com/s/i652ysfb9rg6qx7/odl-toolkit-web-resources.zip
ubuntu@ubuntu:~/toolkit/web/src/main/$ unzip odl-toolkit-web-resources.zip

After either step, please ensure that both resources/css and resources/js folders have an ‘ext/’ folder.

Step 6

Now we’re ready to compile the controller and simpleapp. From the toolkit/ folder, run:

ubuntu@ubuntu:~/toolkit/$ mvn clean install

...

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] common ............................................ SUCCESS [11.835s]
[INFO] web ............................................... SUCCESS [10.150s]
[INFO] main .............................................. SUCCESS [20.155s]
[INFO] simpleapp ......................................... SUCCESS [0.803s]
[INFO] opendaylight-toolkit .............................. SUCCESS [0.161s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 44.374s
[INFO] Finished at: Sun May 04 11:00:46 PDT 2014
[INFO] Final Memory: 33M/86M
[INFO] ------------------------------------------------------------------------


This will build and install the controller and a minimal set of packages to main/target/.

Step 7 – Run the controller

To run the controller,

ubuntu@ubuntu:~/toolkit/$ cd main/target/main-osgipackage/opendaylight
ubuntu@ubuntu:~/toolkit/main/target/main-osgipackage/opendaylight$ ./run.sh

Step 8 – Check out the web interface

After 30 seconds, proceed to http://localhost:8080. You will see the web interface (login admin:admin) and be greeted with a page with instructions on how to install apps. As the page says, apps will be displayed on the bar at the top. The app we generated has not been installed yet. To install it, let’s build it specifically while keeping the controller running.

Step 9 – Build simpleapp

ubuntu@ubuntu:~/toolkit/$ cd simpleapp; mvn clean install

Step 10 – Verify simpleapp has been installed

After the app is built, go back to the web console and refresh the page. You will see that simpleapp now shows up on the omnibar.

3. Tweaking Simpleapp

Simpleapp has a few components: simpleapp.northbound specifies a number of HTTP-based API endpoints to list, add, update, and remove data from the SimpleData backend. The frontend is rendered using Backbone.js, a JavaScript MV* framework that talks to the northbound and renders the data in the browser. Currently, simpleapp does not have any functionality to talk to network devices or switches – you can add your own southbound interfaces, or check out the next section for a pre-built learningswitch.
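
As a concrete illustration of what such a northbound looks like, here is a minimal JAX-RS sketch in the same spirit. The class name, paths and the in-memory map below are hypothetical stand-ins rather than the exact code the archetype generates, which is backed by the SimpleData store instead.

// Hypothetical sketch of a northbound REST resource similar in spirit to the
// one generated by the simple-app archetype; names are illustrative only.
package org.sdnhub.simpleapp.northbound;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import javax.ws.rs.DELETE;
import javax.ws.rs.GET;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Path("/simpleapp")
public class SimpleAppNorthbound {

    // Stand-in for the SimpleData backend that the archetype wires in
    private static final Map<String, String> DATA = new ConcurrentHashMap<String, String>();

    // GET /simpleapp/data/{key} -- read one entry
    @GET
    @Path("/data/{key}")
    @Produces(MediaType.APPLICATION_JSON)
    public Response get(@PathParam("key") String key) {
        String value = DATA.get(key);
        if (value == null) {
            return Response.status(Response.Status.NOT_FOUND).build();
        }
        return Response.ok("{\"" + key + "\": \"" + value + "\"}").build();
    }

    // PUT /simpleapp/data/{key}/{value} -- add or update an entry
    @PUT
    @Path("/data/{key}/{value}")
    public Response put(@PathParam("key") String key, @PathParam("value") String value) {
        DATA.put(key, value);
        return Response.ok().build();
    }

    // DELETE /simpleapp/data/{key} -- remove an entry
    @DELETE
    @Path("/data/{key}")
    public Response delete(@PathParam("key") String key) {
        DATA.remove(key);
        return Response.ok().build();
    }
}

The Backbone.js models in the web bundle simply point their URLs at endpoints like these and render the returned JSON in the page.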

4. L2 Learning Switch inside Opendaylight Toolkit

To make it easier to interact with OpenFlow switches within ODL Toolkit, we have created a sample app. Note that this is not an archetype yet, so you will need to manually rename classes, files and directories. To install this app within toolkit:

Step 1: Check out toolkit-learningswitch from Github

ubuntu@ubuntu:~/toolkit$ git clone https://github.com/oakenshield/toolkit-learningswitch.git
ubuntu@ubuntu:~/toolkit$ cd toolkit-learningswitch; mvn clean install

Step 2: Ensure that the switch bundle has been installed to the running controller

Go to the tab with the controller running, and ensure that the learningswitch bundle has been loaded

osgi> ss learn
"Framework is launched."
id    State       Bundle
61    ACTIVE      org.sdnhub.learningswitch_1.0.0.SNAPSHOT

If you don’t see this or if there are exceptions, kill and restart the controller (this bug is being worked on).

Step 3: Ensure that the switch works using Mininet

In another tab, create a Mininet topology and hook it up to the controller. Assuming you are running the controller and the switch on the SDNHub VM:

ubuntu@ubuntu:~$ sudo mn --arp --topo single,3 --switch ovsk --controller remote

...

mininet> h1 ping h2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_req=3 ttl=64 time=0.266 ms
^C

Step 4: See the MAC -> port table on the web console.

Go to the web console and ensure that a third tab has appeared on the omnibar. When you go to this tab, you will see a barebones UI which shows the MAC table for the learningswitch. As before, the frontend is rendered by Backbone.js and the MAC and flow tables are exported by AppNorthbound.java.

5. Finding your way around the code

In the Learningswitch code,

  • Rules are programmed inside src/main/java/org/sdnhub/learningswitch/internal/Learningswitch.java (a simplified sketch of this logic appears after this list)
  • The frontend is rendered from src/main/resources/. Specifically, the basic page template is in src/main/resources/WEB-INF/jsp/main.jsp; the Backbone views are in src/main/resources/js/views/, Backbone models are in models/, and page templates in templates/ are rendered using Underscore.js.
  • There’s no need to use Backbone.js. You can avoid Backbone entirely by going to js/app.js and writing your own custom JS or using another front-end framework.
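
As a rough sketch of what the logic in Learningswitch.java boils down to, the class below keeps a per-switch MAC-to-port table. It is framework-agnostic and illustrative only: the real bundle receives packet-ins through the controller’s packet service, programs flows, and exposes this table through AppNorthbound.java, all of which is omitted here, and the class and method names are hypothetical.

// Simplified, hypothetical sketch of the MAC-learning table kept by the app
package org.sdnhub.learningswitch.internal;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class MacLearningTable {

    // switch id -> (source MAC -> ingress port)
    private final Map<String, Map<String, Integer>> macTables =
            new ConcurrentHashMap<String, Map<String, Integer>>();

    // Called on every packet-in: remember which port the source MAC was seen on
    public void learn(String switchId, String srcMac, int inPort) {
        Map<String, Integer> table = macTables.get(switchId);
        if (table == null) {
            table = new ConcurrentHashMap<String, Integer>();
            macTables.put(switchId, table);
        }
        table.put(srcMac, inPort);
    }

    // Pick the output port for a destination MAC: a learned port, or -1 to flood
    public int lookup(String switchId, String dstMac) {
        Map<String, Integer> table = macTables.get(switchId);
        Integer port = (table == null) ? null : table.get(dstMac);
        return (port == null) ? -1 : port;
    }

    // The per-switch table is what the northbound exposes to the web UI
    public Map<String, Integer> getTable(String switchId) {
        Map<String, Integer> table = macTables.get(switchId);
        return (table == null) ? new ConcurrentHashMap<String, Integer>() : table;
    }
}

On each packet-in, such an app would call learn() with the source MAC and ingress port, then lookup() with the destination MAC to decide whether to forward out a known port or flood.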

 


Opendaylight Development with Eclipse and Toolkit


This tutorial requires Opendaylight Toolkit – please ensure you have Toolkit set up and your controller running. Before proceeding, please ensure you have Eclipse 3.9. Though these steps work in Eclipse 3.8, there seem to be occasional errors with directory paths. Additionally, you will need the m2e plugin to let Eclipse read Maven projects.

Importing projects into Eclipse

In Eclipse, go to File -> Import -> Maven -> Existing Maven Projects. Point it to the top level of your toolkit folder, e.g., /home/ubuntu/toolkit.

Uncheck the top-level pom.xml, but select everything else underneath. Click Finish, and wait a few minutes for Eclipse to finish importing and indexing everything.


If you’ve already built the controller, the dependencies will have been already pulled to ~/.m2/repository, so your projects should not show any errors.

Build the top-level controller

To build the top-level controller, open the main project (double-click), choose the pull-down from the run button, and click on opendaylight-assembleit-skiput. The “ut” stands for skip updates and tests.


If the run button is not enabled/present, you may need to open some Java file.

Build a submodule

Building the top-level controller does not build apps that you created yourself. To do this, select your app folder, and choose the opendaylight-assembleit-fast target from the Run or Debug menus.


Running the Controller

Once the main controller and any apps are built, we can run the controller within Eclipse using the opendaylight-controller target:


 

Docker Networking


Recently we fiddled with several of the network settings in native Docker and put together a survey presentation. It is fascinating to see that overlay networking is being revisited once again in the context of containers/Docker.

For those not familiar with Docker, it is sufficient to understand that Docker is an easy way to roll out applications within Linux containers and manage their configurations in a portable fashion. While LXC technology does not provide the resource-level isolation that virtual machines (VMs) get from a hypervisor, it provides isolation that may be sufficient for the majority of applications, especially in enterprise infrastructure.

Today, Docker picks a Linux bridge (docker0) as the default network for each spawned container and allocates a static IP address from a default range. The container traffic is then NAT’ed through the host network. This is similar to what you would have if you spawned VMs with KVM or VirtualBox on your laptop. As container usage gets more complicated and spans multiple hosts, multiple application tiers and multiple datacenters, we need good underlying plumbing and well-understood abstractions.

It is certainly possible to use the same networking principles as were used with VMs, or to adopt clean-slate thinking for container communication. In the VM world, each individual entity is addressable by a MAC/IP address combination. In the container world, however, it is possible that containers are not individually addressable, and instead share the host network or use other means of communicating (e.g., Unix domain sockets).

While the technology and mode of plumbing are up for debate, some of the abstractions will still work in the container space. The foremost of these is network virtualization, the act of decoupling the application’s virtual network from the underlying physical network. This abstraction brings modularity in deployment, portability of groups of containers, access control across application boundaries, and plumbing for virtual network services. We need that in the container world too.

Experimenting with ONOS clustering


ONOS, Open Network Operating System, is a newly released open-source SDN controller that is focused on service provider use-cases. Similar to OpenDaylight, the platform is written in Java and uses Karaf/OSGi for functionality management. Recently we experimented with this controller platform and put together a basic ONOS tutorial that explores the platform and its current features.

For experimenting with ONOS, we provide two options for obtaining a pre-compiled version of ONOS 1.1.0:

  1. Our tutorial VM that starts the ONOS feature onos-core-trivial when karaf is run.
  2. A Docker repository sdnhub/onos where containers start the ONOS feature onos-core when karaf is run. (Alternatively, here is the Dockerfile)

Clustering

In this post, we experiment more with the clustering feature where multiple instances of the controller can be clustered to share state and manage a network of switches. ONOS uses Hazelcast for clustering multiple instances. To experiment with clustering, we perform the following commands to spin up three containers running the ONOS controller in daemon mode:

$ sudo docker pull sdnhub/onos
$ for i in 1 2 3; do 
  sudo docker run -i -d --name node$i -t sdnhub/onos;
done
$ sudo docker ps
CONTAINER ID        IMAGE                COMMAND                PORTS                NAMES
7a6e01a3c174        sdnhub/onos:latest   "./bin/onos-service"   6633/tcp, 5701/tcp   node3               
92cdc5713b6a        sdnhub/onos:latest   "./bin/onos-service"   5701/tcp, 6633/tcp   node2               
7731779512b4        sdnhub/onos:latest   "./bin/onos-service"   6633/tcp, 5701/tcp   node1

Hazelcast uses IP multicast to find the other member nodes. In the above system, you can check who the members are by connecting to one of the Docker containers:

$ sudo docker attach node1
onos> feature:list -i | grep onos
Name                 | Installed | Repository          | Description                                       
-----------------------------------------------------------------------------------------------------
onos-thirdparty-base | x         | onos-1.1.0-SNAPSHOT | ONOS 3rd party dependencies                       
onos-thirdparty-web  | x         | onos-1.1.0-SNAPSHOT | ONOS 3rd party dependencies                       
onos-api             | x         | onos-1.1.0-SNAPSHOT | ONOS services and model API                       
onos-core            | x         | onos-1.1.0-SNAPSHOT | ONOS core components                              
onos-rest            | x         | onos-1.1.0-SNAPSHOT | ONOS REST API components                          
onos-gui             | x         | onos-1.1.0-SNAPSHOT | ONOS GUI console components                       
onos-cli             | x         | onos-1.1.0-SNAPSHOT | ONOS admin command console components             
onos-openflow        | x         | onos-1.1.0-SNAPSHOT | ONOS OpenFlow API, Controller & Providers         
onos-app-fwd         | x         | onos-1.1.0-SNAPSHOT | ONOS sample forwarding application                
onos-app-proxyarp    | x         | onos-1.1.0-SNAPSHOT | ONOS sample proxyarp application                  
onos> 
onos> summary
node=172.17.0.2, version=1.1.0.ubuntu~2015/02/08@14:04
nodes=3, devices=0, links=0, hosts=0, SCC(s)=0, paths=0, flows=0, intents=0
onos> masters 
172.17.0.2: 0 devices
172.17.0.3: 0 devices
172.17.0.4: 0 devices
onos> nodes
id=172.17.0.2, address=172.17.0.2:9876, state=ACTIVE *
id=172.17.0.3, address=172.17.0.3:9876, state=ACTIVE 
id=172.17.0.4, address=172.17.0.4:9876, state=ACTIVE 

We can see three controllers clustered when we list the summary and nodes. The star next to 172.17.0.2 designates that node as the leader for the state.

Now we start a Mininet-emulated network with 2 switches, and point them to the controller running in container node2. Since the sample forwarding application is installed, you will see that a ping between the two hosts succeeds.

$ sudo mn --topo linear --mac --switch ovsk,protocols=OpenFlow13 --controller remote,172.17.0.3
mininet> h1 ping h2 -c 1
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.549 ms

--- 10.0.0.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.549/0.549/0.549/0.000 ms

We can visit the web UI of any of the controllers (e.g., http://172.17.0.2:8181/onos/ui/index.html) and get the same information in a graphical format.

OpenFlow Roles and Fault tolerance

So far, all three controllers are in active mode, with only the controller in node2 being the OpenFlow controller used by the devices. Let’s change that and verify that the system can function with multiple controllers, one being the OpenFlow master and the others being slaves.

  • Add all controllers to all switches/devices, while Mininet is running:
$ sudo ovs-vsctl set-controller s1 tcp:172.17.0.4:6633 tcp:172.17.0.3:6633 tcp:172.17.0.2:6633
$ sudo ovs-vsctl set-controller s2 tcp:172.17.0.4:6633 tcp:172.17.0.3:6633 tcp:172.17.0.2:6633
$ sudo ovs-vsctl show
873c293e-912d-4067-82ad-d1116d2ad39f
    Bridge "s1"
        Controller "tcp:172.17.0.3:6633"
            is_connected: true
        Controller "tcp:172.17.0.4:6633"
            is_connected: true
        Controller "tcp:172.17.0.2:6633"
            is_connected: true
        fail_mode: secure
        Port "s1-eth2"
            Interface "s1-eth2"
        Port "s1"
            Interface "s1"
                type: internal
        Port "s1-eth1"
            Interface "s1-eth1"
    Bridge "s2"
        Controller "tcp:172.17.0.3:6633"
        Controller "tcp:172.17.0.2:6633"
        Controller "tcp:172.17.0.4:6633"
        fail_mode: secure
        Port "s2-eth2"
            Interface "s2-eth2"
        Port "s2-eth1"
            Interface "s2-eth1"
        Port "s2"
            Interface "s2"
                type: internal
    ovs_version: "2.3.90"
  • Verify ONOS state from the Karaf CLI, and in particular check the updated list of OpenFlow roles:
$ sudo docker attach node1
onos>  masters
172.17.0.2: 0 devices
172.17.0.3: 1 devices
  of:0000000000000001
172.17.0.4: 1 devices
  of:0000000000000002
onos>  roles
of:0000000000000001: master=172.17.0.3, standbys=[ 172.17.0.4 172.17.0.2 ]
of:0000000000000002: master=172.17.0.4, standbys=[ 172.17.0.3 172.17.0.2 ]
  • Now let’s bring down node2, and see what happens.
$ sudo docker stop node2
$ sudo docker attach node1
onos>  nodes
id=172.17.0.2, address=172.17.0.2:9876, state=ACTIVE *
id=172.17.0.3, address=172.17.0.3:9876, state=INACTIVE 
id=172.17.0.4, address=172.17.0.4:9876, state=ACTIVE 
onos>  masters
172.17.0.2: 0 devices
172.17.0.3: 0 devices
172.17.0.4: 2 devices
  of:0000000000000001
  of:0000000000000002
onos>  roles
of:0000000000000001: master=172.17.0.4, standbys=[ 172.17.0.2 ]
of:0000000000000002: master=172.17.0.4, standbys=[ 172.17.0.3 172.17.0.2 ]

We notice that the controller in node3 has become the master OpenFlow controller for both switches. We also notice that the ping (h1 ping h2) still succeeds in the Mininet network! We further removed node1 and verified that the system still functions fine with 1 controller and 1 state keeper. This is the power of clustering.

Experimenting with NETCONF connector in OpenDaylight


On the southbound, OpenDaylight has support for both NETCONF and OpenFlow for managing underlying devices. This allows your applications to manage both traditional devices with built-in control planes and OpenFlow-enabled devices with decoupled control planes. End-to-end control and orchestration is thus easily achieved using a single platform.

1. Setup

In this post we get an understanding of the NETCONF support through a simple experiment with netopeer’s NETCONF server packaged as a Docker image. Here are the steps to get hands-on experience with NETCONF:

Step 1: Install Docker and netopeer image.

$ wget -qO- https://get.docker.com/gpg | sudo apt-key add -
$ wget -qO- https://get.docker.com/ | sudo sh
$ sudo docker pull sdnhub/netopeer

Step 2: Spin up simulated router instances. We now spin up two simulated router instances (router1 and router2) that can be configured over NETCONF. (This assumes you have the SDNHub_Opendaylight_Tutorial folder cloned.)

$ git clone https://github.com/sdnhub/SDNHub_Opendaylight_Tutorial/
$ cd SDNHub_Opendaylight_Tutorial/netconf-exercise
$ ./spawn_router.sh router1
...
Spawned container with IP 172.17.0.63
$ ./spawn_router.sh router2
...
Spawned container with IP 172.17.0.64

These two simulated routers use the following custom YANG model to model some basic configurations of typical routers.

module router {
    yang-version 1;
    namespace "urn:sdnhub:odl:tutorial:router";
    prefix router;

    description "Router configuration";

    revision "2015-07-28" {
        description "Initial version.";
    }

    list interfaces {
        key id;
        leaf id {
            type string;
        }
        leaf ip-address {
            type string;
        }
    }
    container router {
        list ospf {
            key process-id;
            leaf process-id {
                type uint32;
            }
            list networks {
                key subnet-ip;
                leaf subnet-ip {
                    type string;
                }
                leaf area-id {
                    type uint32;
                }
            }
        }
        list bgp {
            key as-number;
            leaf as-number {
                type uint32;
            }
            leaf router-id {
                type string;
            }
            list neighbors {
                key as-number;
                leaf as-number {
                    type uint32;
                }
                leaf peer-ip {
                    type string;
                }
            }
        }
    }
}

Step 3: Start OpenDaylight and register the two NETCONF devices. In this example, we use the pre-compiled OpenDaylight in the tutorial VM. Ensure that you install the odl-netconf-connector-all feature:

$ cd ~/SDNHub_Opendaylight_Tutorial
$ mvn clean install -nsu
$ cd distribution/opendaylight-karaf/target/assembly
$ ./bin/karaf 
opendaylight-user@root> feature:install odl-netconf-connector-all

After waiting about 30 seconds, register the two NETCONF-enabled routers with OpenDaylight using the shell scripts provided. Ensure that the IP addresses below are updated to what was printed by the spawn_router.sh script above.

$ ./register_netconf_device.sh router1 172.17.0.63; sleep 5
$ ./register_netconf_device.sh router2 172.17.0.64; sleep 5

2. Using RESTconf to manage the devices

Now that you have registered the two NETCONF speakers, you will be able to get and edit configurations, and perform RPC calls from the controller:

  • You can see all the YANG models supported by that device at this location.
  • For instance, for each router we spawned, you see the following capability in that list:
    <available-capability>
    (urn:sdnhub:odl:tutorial:router?revision=2015-07-28)router
    </available-capability>
  • This works only if your device has support for ietf-netconf-monitoring, wherein devices export their capabilities or modules to the NETCONF client.
  • If you are attempting to manage a device that does not support ietf-netconf-monitoring, there are other ways to add the YANG models directly at the controller.

We can start posting configurations from the OpenDaylight controller to the NETCONF servers in the routers using the router YANG model shown above. For instance, we can inject configurations to router1 using RESTCONF as follows:
$ curl -u admin:admin -v -X PUT -H "Content-Type: application/xml" -d '
      <interfaces xmlns="urn:sdnhub:odl:tutorial:router">
        <id>eth0</id>
        <ip-address>10.10.1.1/24</ip-address>
      </interfaces>
' http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/router1/yang-ext:mount/router:interfaces/eth0

Visit this page after making the above curl call to verify that the configuration was added.

Alternatively, you can use netopeer-cli (which is a NETCONF client) to verify the same data as follows:

$ sudo docker attach router1
# netopeer-cli
 netconf> connect localhost (Passwd: root)
 netconf> get-config running
   Result:
   <interfaces xmlns="urn:sdnhub:odl:tutorial:router">
     <id>eth0</id>
     <ip-address>10.10.1.1/24</ip-address>
   </interfaces>
 netconf>

As you can see, our RESTconf operation pushed configuration from the OpenDaylight controller to the data store of the device’s NETCONF server. The traditional control plane will then pass those configurations to the routing protocols within.

You can further experiment with this environment and push other configurations (like BGP and OSPF) for these two routers. Here is an example of how you can configure OSPF and BGP parameters (assuming all the interface IPs are assigned prior to this):

$ curl -u admin:admin -v -X PUT -H "Content-Type: application/xml" -d '
  <router xmlns="urn:sdnhub:odl:tutorial:router">
    <ospf>
      <process-id>1</process-id>
      <networks>
        <subnet-ip>100.100.100.0/24</subnet-ip>
        <area-id>10</area-id>
      </networks>
    </ospf>
    <bgp>
      <as-number>1000</as-number>
      <router-id>10.10.1.1</router-id>
      <neighbors>
        <as-number>2000</as-number>
        <peer-ip>10.10.1.2</peer-ip>
      </neighbors>
    </bgp>
  </router>
' http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/router1/yang-ext:mount/router:router

Here is a Postman collection file with a few REST calls in case you want to add or get configurations.

3. Developing a NETCONF application in Java

Within the tutorial project, we have a Java application that uses the OpenDaylight NETCONF connector to program configurations to the underlying device. This illustrates the type of automation possible within the JVM. It is also a way to logically centralize state pertaining to all types of devices, both OpenFlow-based devices and legacy ones, allowing for backward compatibility.

To enable this application, you will have to install the appropriate feature in the Karaf console, prior to registering the NETCONF devices using the “register_netconf_device.sh” script.

opendaylight-user@root> feature:install sdnhub-tutorial-netconf-exercise
opendaylight-user@root> log:set DEBUG org.sdnhub

Now run “register_netconf_device.sh” in another terminal for router1. That in turn will print the following output in the console, confirming that the configuration has been written to the data store.

opendaylight-user@root> log:tail
2015-09-05 13:38:12,560 | INFO  | sing-executor-15 | NetconfDevice                    | 244 - org.opendaylight.controller.sal-netconf-connector - 1.2.0.Lithium | RemoteDevice{router1}: Netconf connector initialized successfully
2015-09-05 13:38:14,114 | DEBUG | pool-52-thread-1 | MyRouterOrchestrator             | 274 - org.sdnhub.odl.tutorial.netconf-exercise.impl - 1.0.0.SNAPSHOT | Configuring interface eth0 for router1
2015-09-05 13:38:14,417 | DEBUG | pool-52-thread-1 | GenericTransactionUtils          | 274 - org.sdnhub.odl.tutorial.netconf-exercise.impl - 1.0.0.SNAPSHOT | Transaction success for add of object Interfaces [_id=eth0, _ipAddress=10.10.1.1/24, _key=InterfacesKey [_id=eth0], augmentation=[]]
2015-09-05 13:38:14,417 | DEBUG | pool-52-thread-1 | MyRouterOrchestrator             | 274 - org.sdnhub.odl.tutorial.netconf-exercise.impl - 1.0.0.SNAPSHOT | Configuring interface eth1 for router1
2015-09-05 13:38:14,695 | DEBUG | pool-52-thread-1 | GenericTransactionUtils          | 274 - org.sdnhub.odl.tutorial.netconf-exercise.impl - 1.0.0.SNAPSHOT | Transaction success for add of object Interfaces [_id=eth1, _ipAddress=100.100.100.1/24, _key=InterfacesKey [_id=eth1], augmentation=[]]
...

The implementation for the netconf-exercise (i.e., netconf-exercise-impl) has a helper class called MyRouterOrchestrator that inserts interface configurations when it discovers a device called router1 or router2. Visit this page for router1 after registering the device to verify that the appropriate interface configurations were added.

The basic constructs necessary for working with a NETCONF device are as follows:

public class MyRouterOrchestrator implements BindingAwareProvider, AutoCloseable, DataChangeListener {
    private MountPointService mountService;
    ....

    @Override
    public void onSessionInitiated(ProviderContext session) {
        // Get the mount service provider
        this.mountService = session.getSALService(MountPointService.class);
    }

    //When new devices are discovered through a data change notification 
    @Override
    public void onDataChanged(AsyncDataChangeEvent<InstanceIdentifier<?>, DataObject> change) {
        ....

        //Step 1: Get access to mount point specific to this netconf node.
        InstanceIdentifier<Node> netconfNodeIID = InstanceIdentifier.builder(NetworkTopology.class)
            .child(Topology.class, new TopologyKey(new TopologyId(TopologyNetconf.QNAME.getLocalName())))
            .child(Node.class, new NodeKey(new NodeId("router1")))
            .build();
        final Optional<MountPoint> netconfNodeOptional = mountService.getMountPoint(netconfNodeIID);
        
        //Step 2: Write config data to the mount point
        if (netconfNodeOptional.isPresent()) {
            
            //Step 2.1: access data broker within this mount point
            MountPoint netconfNode = netconfNodeOptional.get();
            DataBroker netconfNodeDataBroker = netconfNode.getService(DataBroker.class).get();

            //Step 2.2: construct data and the instance identifier for the config data
            InterfacesBuilder builder = new InterfacesBuilder()
                .setId("eth0")
                .setKey(new InterfacesKey("eth0"))
                .setIpAddress("10.10.10.1/24");
            InstanceIdentifier<Interfaces> interfacesIID = InstanceIdentifier
                 .builder(Interfaces.class, builder.getKey()).build();
            
            //Step 2.3: write to the config data store using SDN Hub special transaction utils.
            //The netconf connector sends the new config data to the device as an "edit-config" operation.
            GenericTransactionUtils.writeData(netconfNodeDataBroker, LogicalDatastoreType.CONFIGURATION, interfacesIID, builder.build(), true);
        }
    }
}

The above skeleton code skips several steps to just highlight the basic constructs for mounting device configuration and editing it. Any changes made to the MD-SAL data store are passed along to the device. Similarly, when the controller boots up, any existing configuration is copied over from the device.
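
As a counterpart to the write above, reading the device’s configuration back through the same mount point is just another data-broker operation. The fragment below is a rough sketch meant to sit alongside the skeleton: it reuses the netconfNodeDataBroker and interfacesIID variables from above, and assumes the same Lithium-era MD-SAL binding API (ReadOnlyTransaction, LogicalDatastoreType, Guava’s Optional and ReadFailedException) plus an slf4j logger field named LOG.

// Sketch only: read back the interface entry written earlier through the
// same NETCONF mount point (same assumptions and elisions as the skeleton above)
ReadOnlyTransaction readTx = netconfNodeDataBroker.newReadOnlyTransaction();
try {
    Optional<Interfaces> result =
            readTx.read(LogicalDatastoreType.CONFIGURATION, interfacesIID).checkedGet();
    if (result.isPresent()) {
        // The netconf connector fetched this from the device's config datastore
        LOG.debug("Device reports interface {} with IP {}",
                result.get().getId(), result.get().getIpAddress());
    }
} catch (ReadFailedException e) {
    LOG.error("Failed to read configuration from the device", e);
}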

4. Summary

  • This post illustrates how easy it is to configure legacy routers over NETCONF, as long as you have the YANG model for that device. This can be extremely powerful as a global orchestrator that nicely integrates OpenFlow and traditional control planes.
  • Having said that, to manage different types of devices, one needs to build YANG-specific plugins or adapters to appropriately convert from high-level intent to device-specific configurations.
  • This device-specific adaptation can be avoided if device configurations share a common model, like OpenConfig, that captures the common denominator of configurations for a protocol.