FIWARE.OpenSpecification.I2ND.Netfloc - FIWARE Forge Wiki


Name: FIWARE.OpenSpecification.I2ND.Netfloc
Chapter: I2ND
Catalogue-Link to Implementation: [ Netfloc]
Owner: ZHAW



This document contains a self-contained open specification of a FIWARE Generic Enabler. Please also consult the FIWARE Product Vision, the website at http://www.fiware.org and similar pages in order to understand the complete context of the FIWARE project.


Copyright © 2015 by ZHAW

Legal Notice

Please check the following Legal Notice to understand the rights to use these specifications.


Current cloud networks are not optimal in terms of resource usage, reliability, deployment and maintenance, because the underlying technologies and protocols were not designed with modern cloud architectures in mind. The Netfloc GE will enable the development of cloud-native networking extensions, apps and technologies that manage, monitor and analyze networks. More specifically, Netfloc enables cloud networking developers to design applications with value-added services (e.g. QoS, resilience, load balancing) in cloud datacenter networks.

The present implementation of the OpenStack Neutron service uses OpenDaylight to manage the network via the Modular Layer 2 (ML2) northbound plugin. The OVSDB plugin in OpenDaylight, in turn, implements the Open vSwitch Database (OVSDB) protocol as a southbound interface for configuring Open vSwitch (OVS). GRE tunnels encapsulate isolated layer 2 network traffic in IP packets that are routed between compute and networking nodes using the hosts' network connectivity and routing tables. The ML2 plugin is a framework allowing OpenStack Networking to simultaneously utilize the variety of layer 2 networking technologies found in complex real-world data centers. It currently works with the existing openvswitch, linuxbridge, and hyperv L2 agents, and is intended to replace and deprecate the monolithic plugins associated with those L2 agents. The ML2 framework is also intended to greatly simplify adding support for new L2 networking technologies, requiring much less initial and ongoing effort than adding a new monolithic core plugin.

The OpenStack Neutron plugin uses both GRE and V(X)LAN overlay networks. V(X)LAN networks provide tenant separation by allocating a V(X)LAN tag to each tenant. GRE segmentation provides tenant segregation via tunnels that encapsulate each tenant's traffic. For example, if a tenant's VMs are running on compute hosts Host1 and Host2, OVS and Neutron will create a bridge on each of the hosts that directs the traffic from the VMs into the tunnels.

One of the evident issues in this kind of setup is the traffic overhead created by protocol encapsulation, which can impact overall performance in large multi-tenant networks. For illustration, a packet traveling between two VMs on different hosts traverses four distinct types of virtual networking devices (TAP devices, veth pairs, Linux bridges, and OVS bridges), crossing nine interfaces in total (TAP vnet0, Linux bridge qbrNNN, veth pair (qvbNNN, qvoNNN), Open vSwitch bridge br-int, veth pair (int-br-eth1, phy-br-eth1), and physical network interface card eth1). This clearly degrades the performance of the entire datacenter network where resource-consuming applications and network functions run. Moreover, the different tunneling mechanisms used in OpenStack to provide isolation and multi-tenancy support prevent network application developers from deploying their applications transparently.
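The encapsulation overhead mentioned above can be made concrete with a back-of-the-envelope calculation. The header sizes below are the standard ones for VXLAN (outer Ethernet + IP + UDP + VXLAN headers) and basic GRE (outer IP + GRE header); the 1500-byte MTU is an assumption for illustration.

```python
# Rough sketch: per-packet encapsulation overhead of the tunneling
# protocols discussed above, assuming a 1500-byte MTU on the underlay.

MTU = 1500

# VXLAN adds an outer Ethernet (14), IP (20), UDP (8) and VXLAN (8) header.
VXLAN_OVERHEAD = 14 + 20 + 8 + 8   # 50 bytes

# Basic GRE adds an outer IP header (20) plus the 4-byte GRE header.
GRE_OVERHEAD = 20 + 4              # 24 bytes

def payload_fraction(overhead, mtu=MTU):
    """Fraction of each underlay packet left for tenant payload."""
    return (mtu - overhead) / mtu

print(f"VXLAN payload fraction: {payload_fraction(VXLAN_OVERHEAD):.3f}")  # 0.967
print(f"GRE payload fraction:   {payload_fraction(GRE_OVERHEAD):.3f}")    # 0.984
```

Every tunneled packet thus loses a few percent of the MTU to headers, before counting the per-hop processing cost of the nine interfaces listed above.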

The Netfloc GE will be provided as a toolkit and a set of API function calls from the Netfloc GE to the SDN component. The idea of the Netfloc GE is to enable the unbundling of hardware and software and to optimize the GRE/VLAN functionality used to isolate traffic between tenants by providing a single tunneling mechanism. This puts the SDN controller in exclusive charge of the existing tenants, avoiding the conventional OpenStack Neutron tunneling.

Target Usage

The Netfloc GE will build on state-of-the-art technology to provide optimal management of network flows, with the possibility to test, monitor and perform statistical analysis on a set of common networking functions included in the OpenFlow protocol. This functionality will be exposed through a northbound REST-based API offered to FIWARE GEs and third-party developers. The direct consumers of the Netfloc GEi are native network application developers accustomed to working with complex network setups over bare-metal infrastructures. The idea is to shift their perspective from network configuration to network programmability in a straightforward manner by providing a high-level framework on top of the ODL controller for smart virtual datacenter management in OpenStack deployments. Furthermore, potential consumers of the Netfloc GEi will be targeted among content delivery network companies (for example Akamai), IPTV providers and streaming service providers.

Basic Concepts


The Netfloc component will be tightly coupled to the following technologies:

  • An SDN-enabled, managed network which implements the OpenFlow protocol
  • The OpenDaylight SDN controller
  • An OpenStack setup

Netfloc Components

Netfloc will provide a set of libraries and tools organized in several modules. Overall, the Netfloc functionality will be divided into three logical parts: Network Management, Network Control, and Analysis Tools, as depicted in the figure Netfloc components.

Network Management

  • Flow management – this component will undertake a detailed analysis of the functionality desired by the network application developers and map it to a specific set of function calls within the ODL controller.
  • Topology graph – component in charge of storing an up-to-date representation of the virtual network topology for each of the instantiated network functions. The topology graph will give context to the Analysis tools.
  • Database – managing history logs related to network topology and flows.
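The topology graph component above can be pictured as a small in-memory graph of switches, bridges and tunnel endpoints. The sketch below is purely illustrative (the class and the node names are invented, not Netfloc's actual Java API), showing the kind of adjacency structure such a component could maintain.

```python
# Illustrative sketch only: a minimal topology graph of the sort the
# "Topology graph" component could keep. All names are hypothetical.
from collections import defaultdict

class TopologyGraph:
    """Undirected graph of network elements (bridges, tunnels, hosts)."""

    def __init__(self):
        self.links = defaultdict(set)

    def add_link(self, a, b):
        # Links are bidirectional, so record both directions.
        self.links[a].add(b)
        self.links[b].add(a)

    def neighbours(self, node):
        return sorted(self.links[node])

# Two integration bridges attached to a shared GRE tunnel endpoint,
# mirroring the Host1/Host2 example from the overview above.
topo = TopologyGraph()
topo.add_link("br-int@Host1", "gre-tunnel")
topo.add_link("br-int@Host2", "gre-tunnel")
print(topo.neighbours("gre-tunnel"))
```

A structure like this is what would give the Analysis Tools their context: statistics and monitoring can be attached per node or per link.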

Network Control

  • Packet handling – Netfloc control module that will handle packets based on rules specified within the virtual network function, and will interface directly with the ODL native packet handling component.

Network Analysis

  • Statistics – this component will be in charge of providing a tool to analyze the results and provide a graphical representation of the network flow traffic.
  • Testing – this module will provide a debug and test environment (e.g. using Mininet) to facilitate the development process and validate applications.
  • Monitoring - monitoring and handling of the instantiated network functions.

Generic Architecture

Netfloc Components

Main Interactions

The following interfaces will be considered within the Netfloc GEi implementation:

Netfloc - OpenStack Interface This interface will use a RESTful approach to offer direct interaction between the OpenStack controller component and the network management libraries within Netfloc. The OpenStack controller provides information about topology changes and the network interfaces of host nodes in the managed network. Currently there is a one-to-one API mapping between Neutron and ODL on the northbound side. Netfloc libraries will be deployed as an ODL bundle acting as middleware between Neutron and OpenDaylight. In this case, particular Neutron API calls can be intercepted by Netfloc, which will then make specific changes before calling the Neutron APIs on the OpenDaylight side. This may require top-down adaptations triggering the OVSDB-specific libraries, depending on the functionality to be supported by the Netfloc APIs. Initially the idea is to be mainly Neutron-compliant; in the longer term, however, the Netfloc APIs should be able to support networking APIs from other cloud platforms (e.g. CloudStack). A potential contribution to the ODL community can be considered too.
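To make the RESTful, Neutron-style interaction concrete, the sketch below builds (without sending) a network-creation request in the shape of Neutron's v2.0 API, which is what a one-to-one mapping towards ODL would carry. The base URL and tenant values are invented for illustration; only the `/v2.0/networks` resource path and JSON body shape follow the Neutron API.

```python
# Hypothetical sketch: building a Neutron-style REST request of the kind
# the Netfloc - OpenStack interface would relay. Endpoint host and tenant
# identifiers are made up; the resource path follows Neutron's v2.0 API.
import json
import urllib.request

def build_create_network_request(base_url, name, tenant_id):
    """Build (but do not send) a POST request creating a tenant network."""
    body = json.dumps({
        "network": {
            "name": name,
            "tenant_id": tenant_id,
            "admin_state_up": True,
        }
    }).encode("utf-8")
    return urllib.request.Request(
        url=f"{base_url}/v2.0/networks",   # Neutron-style resource path
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_create_network_request("http://netfloc.example:8181", "net0", "t1")
print(req.method, req.full_url)
```

In the middleware role described above, Netfloc would sit between the sender of such a request and the OpenDaylight side, inspecting or rewriting it before forwarding.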

Netfloc – FIWARE GEs

  • Netfloc GE - Cloud Hosting GE interface: Through this interface the Cloud Hosting chapter can create private networks and manage the datacenter infrastructure.
  • Netfloc GE - ROS GE interface: ROS GE can use the Netfloc GE interface in order to manage the networking between VMs running robot clones and robots in private datacenter networks.

Netfloc GE – third party developers Netfloc will provide an abstraction layer between the ODL northbound API (nAPI) and the Cloud-SDN application developers.

Basic Design Principles

Netfloc will take into account various general requirements that affect design and implementation issues. Apart from the generic guidelines, which are vital for the successful design and implementation of any SDK-like component, Netfloc will initially include some specific requirements, among them: support for the OpenDaylight Helium version of the SDN controller, a Java-based implementation, open source availability, OpenFlow support, and support for the Neutron ML2 plugin nAPIs. The implementation phase will take a top-down, prototype-based approach in the process of identifying the requirements and gathering a common set of functionalities to be exposed. The process will include the design of applications such as: isolation support of VM instances based on novel non-VXLAN and non-GRE standards, or resilient applications for direct control of the physical switch topology. The empirical results gathered from the validation and the comparative analysis of those applications with respect to the current solutions present in SDN will determine the objects and functionality that need to be defined as code, and will trigger the software design phase of the key libraries and components in the Netfloc GE.

The figure depicts the basic design approach of the Netfloc GE. App0, App1, etc. represent the potential applications of interest to validate the problem statement and verify the GE functionality. After performance analysis of the output from those applications (resilience, isolation, service function chaining, etc.), a common set of libraries will be designed to facilitate the creation of optimized network applications. Finally, a network programmer will be able to take over the network and fully configure the physical switch, or even engage additional physical switches in the topology on demand in order to enable the deployment of resilient applications.

Netfloc prototype applications and interfaces


The strong community behind the OpenDaylight (ODL) controller has brought it to a mature level, prompting the SDN perspective to be approached from a different point of view, i.e. within datacenter infrastructure deployments. OpenStack has shown itself to be a pioneering technology in directly supporting this trend by designing the ML2 plugin to permit direct communication between Neutron and ODL. This has explicitly involved SDN technology in the management of complex cloud datacenter networks. This approach has generated issues arising from the direct interaction between physical and virtual network resources. Moreover, additional overhead has been introduced in mapping the network traffic between the physical hosts and the virtual tenant networks. Addressing those issues and identifying new potential problems at this level is a hot topic, and dealing with it is a must in the current SDN world. The Netfloc GE will focus on providing libraries and exposing the corresponding APIs for flow control and topology management, with a strong focus on more transparent and optimized multi-tenancy traffic. This will help network application developers address the challenges of the current Cloud-SDN trend by providing a component for managing their cloud-based network resources in a uniform manner.

Detailed Specifications

Re-utilised Technologies/Specifications

Netfloc APIs are based on the RESTful Design Principles. The related technologies and specifications are:

  • RESTful web services;
  • HTTP/1.1;
  • JSON data serialization format.

The Netfloc implementation is based on OpenStack Neutron and OpenDaylight design principles and exploits the capabilities of OpenFlow.

  • OpenStack Neutron Modular Layer 2 (ml2) plugin is a framework allowing OpenStack Networking to simultaneously utilize the variety of layer 2 networking technologies found in complex real-world data centers. It currently works with the existing openvswitch, linuxbridge, and hyperv L2 agents, and is intended to replace and deprecate the monolithic plugins associated with those L2 agents. The ml2 framework is also intended to greatly simplify adding support for new L2 networking technologies, requiring much less initial and ongoing effort than would be required to add a new monolithic core plugin. A modular agent may be developed as a follow-on effort.
  • OpenDaylight (ODL) is a highly available, modular, extensible, scalable and multi-protocol controller infrastructure built for SDN deployments on modern heterogeneous multi-vendor networks. It provides a model-driven service abstraction platform that allows users to write apps that easily work across a wide variety of hardware and southbound protocols. OpenDaylight hosts one of the largest and fastest-growing communities for network programmability and NFV support, and has gone beyond being just an SDN controller. It supports a variety of networking projects, standards and protocols. The Netfloc GE is aligned with the following projects: ovsdb and openflowplugin.
  • OpenFlow is an open interface for remotely controlling the forwarding tables in network switches, routers, and access points. Based on this low-level interface, researchers and other users can design, build and test custom networks and algorithms with innovative high-level properties. For example, OpenFlow enables the development and testing of algorithms for energy-efficient networks, optimized resource management, new wide-area networks, etc.
  • Specifications and other informative documents such as a White Paper can be found here
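The flow-table control that OpenFlow provides, as described in the list above, can be illustrated with a flow entry written as plain data. The field names below are simplified for illustration and are not the exact ODL openflowplugin schema; the match-then-apply-actions structure mirrors how OpenFlow flow entries are organized.

```python
# Illustrative only: an OpenFlow-style flow entry as plain data, in the
# spirit of a controller's REST representation (field names simplified).
flow = {
    "table_id": 0,
    "priority": 100,
    "match": {"in_port": 1, "eth_type": 0x0800},          # IPv4 on port 1
    "instructions": [{"apply_actions": [{"output": 2}]}],  # forward to port 2
}

def matches(flow, packet):
    """True if every match field in the flow equals the packet's field."""
    return all(packet.get(k) == v for k, v in flow["match"].items())

pkt = {"in_port": 1, "eth_type": 0x0800, "ipv4_dst": "10.0.0.2"}
print(matches(flow, pkt))
```

A switch evaluates incoming packets against such entries in priority order and applies the instructions of the highest-priority match; this is the mechanism a controller programs remotely via OpenFlow.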

Terms and definitions

This section comprises a summary of terms and definitions introduced during the previous sections. It intends to establish a vocabulary that will help to carry out discussions internally and with third parties (e.g., Use Case projects in the EU FP7 Future Internet PPP). For a summary of terms and definitions managed at the overall FIWARE level, please refer to FIWARE Global Terms and Definitions.
