
FIWARE.OpenSpecification.I2ND.NetIC R3

Name: FIWARE.OpenSpecification.I2ND.NetIC
Chapter: I2ND
Catalogue-Link to Implementation: OFNIC
Owner: ALCATEL-LUCENT DEUTSCHLAND AG, ALCATEL-LUCENT ITALIA S.P.A., NOKIA SIEMENS NETWORKS TELEKOMMUNIKACIOS KERESKEDELMI ES SZOLGALTATO and UNIVERSITA' DEGLI STUDI DI ROMA "LA SAPIENZA"

Preface

This document contains a self-contained open specification of a FIWARE Generic Enabler. Please consult as well the FIWARE Product Vision, the website at http://www.fiware.org and similar pages in order to understand the complete context of the FIWARE platform.

Copyright

Legal Notice

Please check the following Legal Notice to understand the rights to use these specifications.

Overview

Network Information & Control (NetIC) is intended to provide abstract access to heterogeneous open networking devices. It exposes network status information and it enables a certain level of programmability within the network (depending on the type of network and the applicable control interface). This programmability may also enable network virtualization, i.e., the abstraction of the physical network resources as well as their control by a virtual network provider.

Potential users of NetIC interfaces include network service providers or other components of FI-WARE, such as cloud hosting. Network operators, virtual network operators and service providers may access the open networks (within the constraints defined by their contracts with the open network infrastructure owners), both to retrieve information and statistics (e.g., about network utilization) and to set control policies and optimally exploit the network capabilities.

General Note

To avoid overly verbose text, this description typically uses the term "NetIC GE" or simply "NetIC" to refer to "an implementation of the NetIC GE Open Specifications". Note that the notion of a GE is abstract and can actually refer to one of the following:

  • "GE Open Specifications" which contain all information required in order to build components which can work as implementations of GEs.
  • a "GE Implementation" which refers to components in a given product that implement a given GE Open Specification and therefore may claim that they are "compliant with the GE Open Specifications".

You may refer to the set of terms and definitions provided here.

Target Usage

The Network Information and Control (NetIC) Generic Enabler provides FI-WARE chapters, as well as usage-area applications and services, with the means to optimally exploit the network capabilities via a dedicated interface and API. NetIC exposes related network state information to the user of the interface and offers a defined level of control and management of the network.

The beneficiaries of the interface include content providers, cloud hosting providers, context providers/brokers, and (virtual) network providers/operators, all of whom may need to understand and manipulate the network between them and their clients. They might want to set up flows/virtual networks to their clients and they may want to control such flows/virtual networks in order to respect pre-defined Service Level Agreements (SLAs), for example in terms of provided Quality of Service (QoS). There are several use cases for the NetIC Generic Enabler, for example the following:

  • A cloud hosting provider has a couple of data center locations. In order to distribute the allocation of virtual machines (VMs) and applications across the various locations, the cloud hosting provider should know the characteristics of the paths between the locations (e.g., delay, available capacity). To get this information, the cloud hosting provider can request from the network provider (regularly or per scheduled event) the characteristics of the paths between its data centers. The requested information will be provided via the NetIC interface. In addition, when dealing with the migration of virtual machines and applications across data centers, the cloud hosting provider may request a temporary virtual private connection to be set up with a certain quality of service guaranteed during the time of migration.
  • To deliver a service to a client, a service provider may need a certain minimum link quality, e.g., for a high-definition live video streaming service. If the client is willing to pay for this, the service provider will request via NetIC from the network provider the setup of a virtual connection with certain quality characteristics between the server and the client. NetIC will do so if capacity is available. Note that to improve the quality of experience and to ensure that capacity is available when needed, a NetIC implementation may support Service Level Agreements (SLA) in providing connectivity. Those agreements identify the client of the NetIC instantiation (e.g. the video streaming service provider) and guarantee that capacity will be available (with parameters described in the specific SLA) for that client on demand (the connectivity request must refer to the SLA). The network provider may offer the capacities being ‘unused but reserved by SLA’ to other clients as a best effort connectivity.
  • A network service provider wants to implement new business models based on the "pay-as-you-go" paradigm, setting up a specialized service for a group of clients. The specialized service is built by orchestrating the network resources dynamically. A virtual network (optical or packet based) is required that connects servers, network elements and the involved clients, potentially running customized protocols. The service provider can request via NetIC from the network provider a virtual network between the involved endpoints, possibly also with some specified constraints (quality characteristics, isolation against other virtual networks, energy efficiency metrics) defined.
  • A service provider wants to set up a specialized service for a group of clients. For this they need a virtual network connecting some servers and the involved clients, potentially running customized protocols. The service provider can request via NetIC from the network provider a virtual network to be setup between the involved endpoints, possibly also with some specified quality characteristics and isolation against other virtual networks.
  • A cellular service provider wants to run its business on top of a virtual network which is able to “breathe” (to be re-configured as demand changes) since loads during idle and busy hours differ significantly. Benefits include reduced expenses (CAPEX is turned into pay-per-use OPEX), reduction of energy consumption and management flexibility. Today mobile traffic is typically mapped into static MPLS tunnels, and the infrastructure providing these tunnels is owned by the cellular service provider, too.

A fundamental challenge for the implementation of NetIC is that the network functionality is typically distributed over different elements, potentially implemented internally in different ways (in multi-vendor environments). Also, the interfaces have to take into account the constraints of different providers (in multi-network service scenarios) as well as legal and regulatory requirements. These problems have been solved in the past by different standardized control plane solutions. This readily available functionality could be re-used by NetIC in order to provide a smooth evolution path rather than introduce a disruptive revolution. NetIC instances may be deployed by the different involved parties (e.g. virtual network providers/creators, and virtual network operators/users running a business on top). As a consequence, several instances of NetIC with different scopes may have to work together to deal with a request from, e.g., a service provider or an application. Each might cover a different part of the network, for instance in the horizontal direction (i.e., type of access; there might be TV cable networks, (V)DSL networks or even local radio networks) or in the vertical direction (i.e., ownership or virtual network structures; there might be several local networks which are integrated by a country-wide service provider).

As already outlined in the I2ND architecture overview, the communication between a service provider and a NetIC GE instantiation runs via, or is supervised by, an S3C GE instantiation (operated by a network provider or even a network infrastructure provider). For security- or accounting-relevant actions, the S3C GE instantiation will perform additional cross-checks before forwarding the action to NetIC; it will also take care of accounting and billing issues in the context of applicable Service Level Agreements (SLAs).

It should be noted that the capabilities a specific NetIC implementation can offer depend on the capabilities of the underlying network.

Basic Concepts

Northbound Interface

The northbound interface is intended to expose network status information and to enable a certain level of programmability within the network (depending on the type of network and the applicable control interface). This programmability may also enable network virtualization, i.e., the abstraction of the physical network resources as well as their control by a virtual network provider. Depending on the purpose (e.g. only information provision or control of the network) the interface will have different characteristics.

It should be noted that the exposition of specific capabilities via the northbound interface depends on the capabilities and the technology of the underlying network. The following sections describe in more detail which capabilities can be exploited with the different function blocks. More details can be found also in section 'Main Interactions' below.

NetIC Architecture

The block diagram below shows the main functional modules of NetIC. It should be noted that the presence of a given module in a specific NetIC instantiation depends on the network being controlled by this instantiation.

NetIC GE Functional Block Diagram

The following sections give a brief overview on the functional modules and their interfaces.

NetIC API

The NetIC API is the conceptual north-bound interface of the NetIC GE. It exposes the internal module interfaces to the outside world such that applications, or other GEs, can access the modules that are present in the specific NetIC instantiation.

Network Element Virtualizer

The Network Element Virtualizer allows a generic network element to be integrated into the NetIC GE. By means of the NetIC API, the physical resources of a given network element are exposed as fully manageable network resources, such as links and paths, to be used by other applications and Generic Enablers. The actual management of the instantiated network element relies on its own management interface, which may be text-based (as, for example, in the case of TL1, XML and CLI) or binary (as for the SNMP protocol).

NetIC API Handler

The NetIC API handler provides a RESTful web service interface to NetIC for communicating with the management system interface of a given network element. Its tasks include:

  • managing communication throughout the NetIC interface
  • implementing the basic primitives of the NetIC interface (synchronization, provisioning, monitoring, restore)
  • forwarding management requests received at NetIC interface to the network element Command Processor
  • receiving messages and notifications from the network element Event Processor to be sent via the NetIC interface

Network Element Management System

Command Processor

The Command Processor implements management requests on logical network resources at the NetIC interface. To this end, it has to

  • establish and maintain a management session with the actual instance of a network element, including all aspects related to secure and logged access to the network element
  • translate a management request into a device-specific management session, which in general may include more than one device-specific command; a single management operation at the NetIC interface may consist of several SNMP and/or TL1 commands at the network element interface
  • take care of partially implemented management requests, in particular by applying roll-back policies in case the actual instance of the network element is not able to complete the requested management session (a sketch of this roll-back behaviour follows the list)
  • provide feedback at the NetIC interface, compiling reports for fulfilled management requests or denial details for refused ones
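
To illustrate this mapping and the roll-back policy, the following minimal Python sketch expands one NetIC-level request into several device-specific commands and undoes the already-applied ones on failure. All names (session, commands_for) are hypothetical stand-ins, not part of the specification.

    # Hypothetical sketch of the roll-back policy described above: a single
    # NetIC management request expands into several device-specific commands
    # (TL1 or SNMP), and a failure part-way through undoes the applied ones.

    def execute_request(session, request, commands_for):
        """commands_for(request) yields (apply_cmd, undo_cmd) pairs."""
        applied = []
        for apply_cmd, undo_cmd in commands_for(request):
            try:
                session.send(apply_cmd)            # one device-specific command
                applied.append(undo_cmd)
            except Exception as error:
                for undo in reversed(applied):     # roll back in reverse order
                    session.send(undo)
                return {"status": "denied", "reason": str(error)}
        return {"status": "fulfilled", "commands": len(applied)}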

Event Processor

The Event Processor is responsible for notifying, at the NetIC interface, events related to the network resources exposed by the Network Element Virtualizer. These events are synthesized by listening to the given network element instance, i.e. by

  • establishing and maintaining a management session with the actual instance of the network element, including all aspects related to secure and logged access to the actual network element
  • listening to TL1 events and SNMP traps from the actual network element
  • mapping a device-specific event to a network resource event
  • propagating a network resource event through the NetIC interface

Topology Information Module

The Topology Information Module provides abstract information about

  • nodes in a network
  • address ranges associated with nodes
  • communication costs between nodes

The Topology Information Module is provided as a set of function calls (a C library) which provide easy-to-use access to network information at the level of detail useful to applications (e.g., for application-level load balancing). Offering a library facilitates the integration of NetIC functionality in applications (e.g., FI-WARE Generic Enablers or applications, Use Case Project software). The initial implementation of the library is developed for Linux/Unix-based applications.

The present instantiation of the Topology Information Module assumes the presence of an ALTO server being able to provide information about the network of interest.

API Handler

The API Handler is the frontend of the Topology Information Module towards applications. It validates the incoming information requests, assigns the outgoing information elements and initiates the appropriate actions in the Data Processor.

Data Processor

The Data Processor analyzes the incoming requests and aligns them with the capabilities of the attached ALTO server. Then it requests the appropriate information elements from the ALTO server and aggregates the received information in an appropriate way to provide the required information.

ALTO Protocol Handler

The ALTO Protocol Handler terminates the ALTO protocol used for the communication with the ALTO server of the network of interest. The development of this functional sub block benefits from an ALTO client design provided by the EU FP7 project MEDIEVAL.
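
As a rough illustration of the ALTO Protocol Handler's first step, the sketch below retrieves an ALTO server's information resource directory over HTTP and lists the announced services. The directory URL is a placeholder; the media types are those defined for the ALTO protocol in RFC 7285.

    # Minimal sketch of an ALTO client step: retrieve the information
    # resource directory and list announced services (RFC 7285 media types).
    import json
    import urllib.request

    ALTO_DIRECTORY_URL = "http://alto.example.net/directory"  # placeholder

    def fetch_service_directory(url):
        request = urllib.request.Request(
            url, headers={"Accept": "application/alto-directory+json"})
        with urllib.request.urlopen(request) as response:
            directory = json.load(response)
        # Each entry maps a resource id to its URI and ALTO media type,
        # e.g. application/alto-networkmap+json or application/alto-costmap+json.
        return {rid: (res["uri"], res["media-type"])
                for rid, res in directory.get("resources", {}).items()}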

Virtual Network Provider (VNP)

NetIC handler

This functionality acts as a proxy for requests/responses exchanged between VNO(s) (Virtual Network Operators) and the VNP (Virtual Network Provider). The NetIC handler processes requests (and related responses) arriving via the open NetIC interface and forwards them to the appropriate internal function block:

  • on-demand connectivity requests are forwarded to the Virtual Network Controller,
  • requests related to scheduled connectivity requests are forwarded to the Scheduler, and
  • queries about network status and monitoring requests are forwarded to the Performance Management.

This functionality also processes notifications (and related responses) sent to VNO(s) via the open NetIC interface. These include:

  • notifications from the Topology function block about network errors (i.e., those affecting active connections from which the VNP could not recover), and
  • notifications regarding performance management.

Management interface handler

This functionality provides an interface (via a GUI or any proprietary management interface) to setup and manage the environment of the VNP.

For the controlled network it allows the physical topology to be defined, including:

  • the list of nodes the VNP controls (these are OpenFlow switches),
  • the list of external nodes to which the VNP provides connectivity (peers),
  • the list of links the VNP controls (these include both the internal and external links of the controlled network),
  • the parameters of the controlled nodes (capabilities, e.g. supported technology, physical links), and
  • the parameters of the external and internal links (e.g. bandwidth, granularity).

For the served virtual network operator(s) it allows the definition of:

  • the identity of the served Virtual Network Operator (ID, access point), and
  • the SLAs (e.g. provided Maximum Bit Rate (MBR), Guaranteed Bit Rate (GBR)).

SLAs

This is a storage place for Service Level Agreement (SLA) data. Usually, the stored information is the result of related negotiations with, e.g., Virtual Network Operators (VNOs) being served by the Virtual Network Provider (VNP).

Topology

This functionality is responsible for:

  • creating the virtual network representation(s); the virtual network representation is created based on the physical network representation and the SLA on a per-VNO basis (the virtual network representation can be different for each VNO),
  • updating the virtual network representation(s) (either by regularly checking for changes in the physical topology and SLAs or after receiving a notification from the management interface handler),
  • informing the Virtual Network Controller about virtual network representation changes, and
  • answering Virtual Network Controller queries on network topology and network representation.

Real (physical) network representation

This is a passive system that stores all parameters of the controlled network. The stored data is (over)written by the management interface handler functionality and read by the topology functionality.

Virtual network representation

This is a passive system that stores virtual network representations shown to each VNO. It is managed by the topology functionality.

Performance management

This functionality manages network information related tasks. It is responsible for:

  • storing counters, triggers, (Key) Performance Indicators, event subscriptions received from VNO(s),
  • notifying VNO(s) about requested events, and
  • monitoring network state.

Counters

This is a storage place for counters, performance indicators and event subscriptions requested by each VNO.

Virtual Network Controller (VNC)

The virtual network controller acts as a centralized controller for the underlying physical network. The virtual network controller

  • receives on-demand connectivity setup/modification/removal requests from VNO(s) through the NetIC handler functionality and from the scheduler functionality,
  • changes network control rules according to the connectivity requests (note that "connectivity request" can mean the setup, modification, and removal of a connection),
  • sends the updated control rules to the underlying physical network through the physical infrastructure handler functionality,
  • updates the active connections and network state parameters if necessary after successful control rule change,
  • updates network state parameters according to the notifications received from physical network through the physical infrastructure handler functionality, and
  • attempts, upon an error notification from the physical network, to keep the affected connections alive by changing control rules and connectivity paths (a sketch of this request flow follows the list).
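
A highly simplified Python sketch of the on-demand request flow is given below; all object interfaces (topology, infra_handler) are hypothetical stand-ins for the function blocks described above.

    # Simplified sketch of the VNC handling an on-demand connectivity
    # request: compute a path over the virtual topology, push the control
    # rules via the physical infrastructure handler, record the connection.

    def handle_connectivity_request(request, topology, infra_handler, active):
        path = topology.find_path(request["src"], request["dst"],
                                  min_bandwidth=request.get("bandwidth", 0))
        if path is None:
            return {"status": "rejected", "reason": "no feasible path"}
        rules = [{"node": node, "match": request["flow"], "out_port": port}
                 for node, port in path]           # updated control rules
        infra_handler.install(rules)               # e.g. OpenFlow flow mods
        active[request["id"]] = {"path": path, "rules": rules}
        return {"status": "established", "connection_id": request["id"]}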

Network state parameters

This is a storage place for values / parameters provided by the physical network to the VNP (Virtual Network Provider). It is updated by the VNC (Virtual Network Controller) functionality and monitored by the performance management functionality.

Scheduler

The scheduler functionality receives scheduled connectivity requests from VNOs and maintains the "scheduled connections" storage according to these requests. Whenever it is necessary to set up, change or remove a connection according to the stored "scheduled connections", it instructs the VNC accordingly (sending the same request as in the case of on-demand connectivity requests).
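
The scheduler's behaviour can be pictured as a periodic check over the stored scheduled connections. The following minimal sketch uses illustrative field and method names ("start", "end", "active", vnc.handle_connectivity_request) that are not part of the specification.

    import time

    # Sketch of the scheduler loop: when a scheduled window opens or closes,
    # the scheduler sends the VNC the same setup/removal requests that are
    # used for on-demand connectivity.

    def scheduler_tick(scheduled_connections, vnc, now=None):
        now = time.time() if now is None else now
        for conn in scheduled_connections:
            if not conn["active"] and conn["start"] <= now < conn["end"]:
                vnc.handle_connectivity_request(conn["request"])   # setup
                conn["active"] = True
            elif conn["active"] and now >= conn["end"]:
                vnc.remove_connection(conn["request"]["id"])       # removal
                conn["active"] = False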

Scheduled connections

This is a storage place for scheduled connections (endpoints, connection parameters, timing parameters).

Active connections

This is a storage place for active connections (endpoints, connection parameters, connection ID).

Physical infrastructure handler

This functionality acts as a proxy for requests/responses exchanged between VNC and the controlled physical network.

The possible instances for an open network are the different versions of OpenFlow (1.0, 1.1 and 1.2; see https://www.opennetworking.org/sdn-resources/onf-specifications/openflow).

OpenFlow Network Module

The OpenFlow Network Module is an OpenFlow controller able to fulfill requests coming from the users of the NetIC API. In particular it is able to accomplish the following types of requests (using the mechanisms provided by the network frontend protocol):

  • Synchronize - used to retrieve available network resources.
  • Create - used to create a virtual resource based on physical or virtual network resources.
  • Destroy - allows the deletion of the virtual resources already created.
  • Monitor - provides information about current status and utilization of a given network resource.

API handler

The API handler maps NetIC API commands onto the OpenFlow Network Module. To this aim, it exposes a RESTful web service on the NetIC API side. NetIC commands received on the HTTP interface are sent to the specific sub-modules that implement the required NetIC functionalities (synchronize, monitor, create, destroy).

Core

This module provides the OpenFlow Network Module components with an API to exchange OpenFlow messages with the network. It also implements an event dispatching functionality to notify components of events raised by the network or by other components. Events are a powerful means of communication between OpenFlow Network Module components, as a certain amount of information can be included in each event.

Network Controller

This module fulfils requests for the creation and deletion of virtual path resources. In particular, it is able to install virtual paths between any two nodes of the network with certain bandwidth requirements. To provide routing functionality, this sub-module needs to be aware of the network topology as well as the load of the links. The topology information is accessed directly from the Topology Cache. The communication with the link load sub-module is performed by posting events through the core sub-module.

Topology Extractor

This module exposes the synchronization functionality. The topology discovery sub-module is in charge of discovering network links and nodes; it periodically broadcasts Link Layer Discovery Protocol (LLDP) messages throughout the network. Whenever a new link is detected, the core sub-module generates an event that is dispatched to topology discovery. Network entities without forwarding capabilities (e.g., border hosts) may announce their presence in the network by periodically sending LLDP messages. On the basis of the discovered network nodes and links, the topology is inferred and stored in a Topology Cache, which is essentially a dynamically updated table.
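
The link inference step can be summarised by the following sketch, which abstracts away the packet parsing and controller plumbing; the payload fields mirror the LLDP chassis-id and port-id TLVs.

    # Sketch of link inference: the controller emits LLDP frames on every
    # switch port; when a frame reappears at another switch, the pair of
    # (switch, port) endpoints identifies a link for the Topology Cache.

    topology_cache = {}   # (src_switch, src_port) -> (dst_switch, dst_port)

    def on_lldp_received(dst_switch, dst_port, lldp_payload):
        src_switch = lldp_payload["chassis_id"]   # emitting switch
        src_port = lldp_payload["port_id"]        # emitting port
        topology_cache[(src_switch, src_port)] = (dst_switch, dst_port)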

Statistics Extractor

This module exposes the monitoring functionalities. In particular, it is able to estimate the level of link utilization of the OpenFlow switches in the network. The link load sub-module relies on two protocols to acquire network link statistics: OpenFlow and the Simple Network Management Protocol (SNMP). Link load queries all the network nodes that have an SNMP agent deployed to retrieve the SNMP variables (e.g., sent/transmitted bytes). These statistics are preferred over OpenFlow statistics because they contain more information and are more accurate. The nodes without an SNMP agent are queried by means of the functionalities provided by the core sub-module, which retrieves the statistics counters of the OpenFlow switches. Information about all available nodes and the links interconnecting them is retrieved from the Topology Cache. The collected statistics are stored in a link load cache; they can be accessed by the API handler when monitoring requests are received from the NetIC API layer. The link load cache is also accessed by the routing sub-module in order to guarantee the bandwidth of the requested paths.
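
The utilization estimate itself is straightforward once the counters are available. A minimal sketch, assuming a caller-supplied read_octets function that returns the interface's transmitted-octets counter (e.g. from SNMP ifOutOctets or an OpenFlow port-stats reply):

    import time

    # Sketch: sample a transmitted-octets counter twice and relate the
    # resulting byte rate to the link capacity.

    def sample_utilization(read_octets, capacity_bps, interval_s=10):
        first = read_octets()
        time.sleep(interval_s)
        second = read_octets()
        bits_per_second = (second - first) * 8 / interval_s
        return bits_per_second / capacity_bps   # fraction of link capacity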

SNMP

The SNMP sub-module provides basic functionalities for sending and receiving SNMP messages. These messages are exchanged between the OpenFlow Network Module and the network entities by means of the same communication channel that is used for OpenFlow messages, i.e., the Network Frontend.

Access Control

The Access Control sub-module provides an access control list scheme in order to differentiate the privileges of the various users of the NetIC API. The fundamental concepts are Users, Roles, Resources, and Capabilities. The NetIC API commands are mapped to a list of Resources, and each Resource needs a set of Capabilities in order to be accessed. There is a predefined (default) set of immutable Roles, each with a default set of Capabilities. Additional mutable Roles with custom sets of Capabilities may be created, edited, or deleted by the 'Administrator Role'. Each User of the NetIC API belongs to one or more Roles. In order to access a certain Resource, a User must belong to at least one Role covering all the required Capabilities of that Resource; the Capabilities of the different Roles a User belongs to cannot be combined to access a Resource.
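
The rule that Capabilities of different Roles cannot be combined can be made concrete with a short sketch (role and capability names are illustrative):

    # Sketch of the access rule above: a single Role must cover all the
    # Capabilities a Resource requires; Capabilities from different Roles
    # held by the same User are deliberately not combined.

    def may_access(user_roles, required_capabilities, role_capabilities):
        """role_capabilities maps a Role name to its set of Capabilities."""
        return any(required_capabilities <= role_capabilities[role]
                   for role in user_roles)

    roles = {"monitor": {"read"}, "operator": {"provision"}}
    # 'monitor' alone covers {"read"}:
    print(may_access({"monitor", "operator"}, {"read"}, roles))              # True
    # no single role covers both capabilities, even though their union does:
    print(may_access({"monitor", "operator"}, {"read", "provision"}, roles)) # False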

The Access Control sub-module relies on a persistent storage system, which might be a relational (MySQL or PostgreSQL) or a non-relational (MongoDB) database.

Network Frontend

The Network Frontend is the network-specific and technology-specific representation of the network interfaces accessed by the functional modules.

Main Interactions

Users have two different ways to access the functions of NetIC, depending on the modules present in the selected NetIC instantiation.

  • A message-based interface (request/response, REST-based) is provided by an information and control entity instance responsible for a particular network. This interface flavor will be used for gathering detailed information about network internals (like node/port status, link load, etc.) and for controlling network elements (e.g., activating or deactivating nodes/ports, setting up routing information). This interface is currently envisaged for NetIC implementations implementing the Virtual Network Provider, the Network Element Virtualizer or the OpenFlow Network Module.
  • A library-based interface offers access to coarse-grained network information, i.e. a subset of NetIC. Offering a library facilitates the integration of NetIC functionality in applications (e.g., FI-WARE Generic Enablers or applications, Use Case Project software). The library will focus on providing easy-to-use access to network information at the level of detail useful to applications (e.g., for application-level load optimization). This simplification of use also implies that not all functions of a NetIC instance can be accessed. The library is initially implemented for Linux/Unix-based applications. This interface is currently envisaged for NetIC implementations implementing the Topology Information Module.

Depending on the authorization level, a customer is allowed to perform different actions. A proper authorization level might be acquired from the network operator by negotiating the NetIC GE usage terms via the S3C GE. If required, the REST-based message interface will use HTTP-based security mechanisms to ensure secure communication and authentication between the NetIC client and the NetIC GE instantiation.

Message-based Interface

This interface is currently envisaged for NetIC implementations implementing the Virtual Network Provider, the Network Element Virtualizer or the OpenFlow Network Module. The Message-based Interface is well-suited for accessing a controller of a well-defined network. It can expose status information from the underlying network and it enables a certain level of programmability of the underlying network. The interface is usually not directly used by applications serving end user needs but by applications acting as virtual network operators. As a consequence, only a quite limited number of users will interface to a given network at the same time.

The interface permits the utilization of the network resources as services. The physical network resources are abstracted into Uniform Resource Identifiers, according to a defined hierarchy. Moreover, the operations exposed by the interface permit the manipulation of such resources with the common commands defined by the RESTful paradigm based on HTTP, such as GET, POST, DELETE, and PUT. The resources themselves are conceptually separate from the representations that are returned to the client; for example, the server does not send its database, but rather JSON-formatted records of information. Each request sent to the Message-based Interface is self-descriptive, i.e. it contains all the information needed to process the message. The main functionalities offered by the Message-based Interface are the following (a client-side usage sketch follows the list):

  • Synchronization: Used to retrieve the available network resources and information about their configuration. The users of the interface can synchronize both physical and virtual resources. The typical HTTP verb used is GET(URI).
  • Provisioning: Allows the configuration of network resources. The configuration might comprise physical or virtual resources of the network. The corresponding HTTP verb is POST(URI,Metadata).
  • Monitoring: Allows the retrieval of monitoring information collected from the network regarding failures and performance statistics. The requested information might be gathered in near real time from the network, or stored in a cache inside the controller and provided at the instant of the request. The typical HTTP verb used is GET(URI).
  • Release: Allows the release of already configured network resources, bringing them back to their default configuration. The HTTP verb utilized for this type of command is POST(URI).
  • Create: Allows the creation of virtual network resources based upon existing physical or virtual resources. Generally the URI of the new resource is decided by the server, so the HTTP verb utilized is POST(URI,Metadata). The HTTP response, upon successful creation, shall contain the URI of the created virtual resource. The virtual resources created might be further manipulated and configured with provisioning commands. Typical virtual resources might be new virtual paths or monitoring tasks.
  • Destroy: Allows the destruction of virtual network resources. The HTTP verb adopted for this group of commands is DELETE(URI).
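
The following sketch shows a client-side use of these operations. The base URL and resource paths are purely illustrative and not part of the specification; only the pairing of HTTP verbs and NetIC operations follows the list above.

    # Illustrative client for the message-based interface; endpoint layout
    # is hypothetical, the verb/operation pairing follows the text above.
    import json
    import urllib.request

    BASE = "http://netic.example.org/api"     # placeholder NetIC instantiation

    def call(method, path, payload=None):
        data = json.dumps(payload).encode() if payload is not None else None
        request = urllib.request.Request(
            BASE + path, data=data, method=method,
            headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(request) as response:
            body = response.read()
            return json.loads(body) if body else None

    nodes = call("GET", "/nodes")                       # Synchronization
    load = call("GET", "/links/l1/statistics")          # Monitoring
    path = call("POST", "/paths", {"src": "nodeA",      # Create; the response
                                   "dst": "nodeB",      # is assumed to carry
                                   "bandwidth": 100})   # the new resource's id
    call("DELETE", "/paths/" + path["id"])              # Destroy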

With the REST paradigm (which uses a stateless representational format) it is very difficult to implement callback events in a natural way. As a consequence, in this release the NetIC instantiations handle events via information polling. This does not limit the functionalities of the NetIC GE; it only shifts the callback difficulty to the client application. Alternatives like the Atom Publishing Protocol (http://www.ietf.org/rfc/rfc5023.txt) or RSS (http://www.rssboard.org, http://www.rssboard.org/rss-specification) might be for further study in later releases.

For the current release of FIWARE, NetIC will not provide a JavaScript library to ease access to the API for potential users. Instead, certain NetIC GE instantiations (OpenFlow Network Module) will offer a web-based GUI developed in JavaScript that enables users to 'browse' the web services offered by the related NetIC GE instantiation.

Network Information

The commands sent through the message-based interface can trigger actions that need processing only inside the NetIC sub-modules, provided the requested information is available locally. The diagram below depicts a sequence of messages for a request that needs only local information to be processed. Requests of this type are typically synchronize messages about available network resources and monitoring commands about network statistics, which are usually stored in local caches and updated periodically. In this way, commands that only read resources are processed rapidly by the NetIC instance and the responses are sent faster.

Network Information Processing Flow Diagram

Network Control

Network control functionalities trigger functions that require further exchange of messages with the managed network. The next diagram depicts a sequence of messages that involves not only the NetIC local sub-modules but also message exchange with network entities; as the diagram shows, one or several network entities may be involved. These interactions are typically generated by provision, release, create, and destroy commands. Depending on the number of entities involved, the rate of the communication channel with the network entities and the extension of the network, these types of commands can be time-consuming.

Network Control Processing Flow Diagram

Library-based Interface

This interface is currently envisaged for NetIC implementations implementing the Topology Information Module. The Library-based Interface is well-suited for accessing more coarse-grained network information. In this case it might first have to be evaluated from which source the requested information is best retrieved, and the selected source might even redirect the request again. As the available information might suit the needs of a wide range of applications, a single installation of a Generic Enabler might easily become a bottleneck, even if the requested information is in the end retrieved from a wide variety of sources. This bottleneck can easily be avoided if each interested application integrates the appropriate function calls, which in turn can flexibly get their information from the appropriate information sources.

Needed Parameters

The library-based interface hides most of the network data retrieval and processing functionality from the application. In this way, the information that the calling application has to provide to the library functions is reduced to the absolute minimum.

Currently, the following parameters are needed:

  • The URL of the service directory of the ALTO server, which is intended to provide the network information.
  • The IP addresses of the potential data source servers and data destination servers.

It should be noted that if one or more of the involved servers are located in a private network (with private IP addresses) and the ALTO server is in the public IP network, the related public IP address(es) of the related intermediate NAT server(s) must be given to ensure proper results.

Supported Functionalities

Before using the Library-based Interface, it has to be initialized. Correspondingly, a 'close' functionality is provided to release allocated resources. Any function call to the Library-based Interface except 'initialization' may only be issued after a successful 'initialization' function call. No function call to the Library-based Interface except 'initialization' may be issued after a successful 'close' function call.

  • Initialization: During this process the URL of the given ALTO server is checked, a connection is set up to the ALTO server, and the ALTO service directory is retrieved. The service directory is checked for services suitable to support the calculation of network connection costs.
  • Close: All resources allocated during the initialization process are released.

Currently, the Library-based Interface supports the following functionalities (an illustrative sketch follows the list):

  • Get best source: For a given set of source IP addresses and a given destination IP address (usually, this would be the IP address of the server the calling application is running on), the best suited address out of the given set of source IP addresses is evaluated and returned to the calling application.
  • Get best sink: For a given set of destination IP addresses and a given source IP address (usually, this would be the IP address of the server the calling application is running on), the best suited address out of the given set of destination IP addresses is evaluated and returned to the calling application.
  • Get abstract location: For a given IP address the corresponding abstract location, which is a unique name of a group of addresses, is returned.
  • Compare abstract location: For two given IP addresses the corresponding abstract locations are compared.
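
The 'Get best source' semantics can be illustrated with a toy re-implementation. The real library is a C library that obtains its cost information from an ALTO server; here a plain cost table stands in for it, and the (result, error) return pair mirrors the error-structure convention described in the next section.

    # Toy re-implementation of the 'get best source' semantics; the cost
    # table stands in for information a real deployment gets via ALTO.

    def get_best_source(costs, sources, destination):
        """costs: dict mapping (source, destination) to a routing cost."""
        reachable = [s for s in sources if (s, destination) in costs]
        if not reachable:
            return None, {"class": "application", "detail": "no cost available"}
        return min(reachable, key=lambda s: costs[(s, destination)]), None

    costs = {("192.0.2.10", "192.0.2.200"): 1,
             ("198.51.100.7", "192.0.2.200"): 5}
    best, error = get_best_source(
        costs, ["192.0.2.10", "198.51.100.7", "203.0.113.3"], "192.0.2.200")
    print(best)   # -> 192.0.2.10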

Error Reporting

During operation of a library function, errors may occur. They can be categorized in four classes:

  1. Application errors: The calling application provides wrong information, such as an invalid ALTO server URL or an invalid server IP address.
  2. ALTO protocol errors: The ALTO server may respond to a request with non-readable answers or may announce services in the service directory which cannot be invoked.
  3. HTTP errors: An HTTP connection (which is used to carry the ALTO protocol) cannot be established or times out.
  4. General processing errors: This class summarizes other errors, e.g. memory allocation failures due to a lack of sufficient resources on the computer the application is running on.

To report the error to the calling application, all API functions return an error structure element.

Basic Design Principles

Rationale

The NetIC Generic Enabler will provide access to network status information to its users. Interfaces available today are already able to provide specific information, but the interface highly depends on the specific network technology. The aim of NetIC is to define a set of general functions to access network status information in a technology independent way, overcoming the heterogeneity of today’s solutions.

As the NetIC API is not intended to be accessed directly by end users (for obvious security reasons, it is very unlikely that network infrastructure providers will allow end users direct access to even abstracted network resources) but rather by virtual or real network operators or service providers, WebRTC is not regarded as directly relevant for the NetIC API. Instances using WebRTC will most probably interface with related gateway functions (operated, e.g., by a network operator or a service provider) which in turn might access the NetIC API to allocate a certain QoS. Such gateway functions might be part of, e.g., S3C GE instantiations.

Implementation agnostic

There are several standard technology and implementation dependent interfaces to control and manage specific networks. To overcome this heterogeneity, the objective of the NetIC Generic Enabler is to provide a generic interface to control and manage open networks. The interface shall be technology and implementation independent.

Detailed Specifications

Following is a list of Open Specifications linked to this Generic Enabler. Specifications labeled as "PRELIMINARY" are considered stable but subject to minor changes derived from lessons learned during final iterations of the development of a first reference implementation planned for the current Major Release of FI-WARE. Specifications labeled as "DRAFT" are planned for future Major Releases of FI-WARE but are provided for the sake of future users.

Open API Specifications

As already outlined in the section Main Interactions, there are two different flavors of the NetIC API and consequently two different Open API specifications.

Other Relevant Specifications

There are no relevant external specifications.

Re-utilised Technologies/Specifications

The message-based interface of the NetIC GE is based on RESTful Design Principles. The related technologies and specifications are:

  • RESTful web services
  • HTTP/1.1
  • JSON and XML data serialization formats

Some NetIC implementations exploit the capabilities of OpenFlow:

  • OpenFlow is an open interface for remotely controlling the forwarding tables in network switches, routers, and access points. Based on this low-level interface, researchers or other users can design, build and test custom networks and algorithms with innovative high-level properties. For example, OpenFlow enables the development and testing of algorithms for energy-efficient networks, optimized resource management, new wide-area networks, etc.
  • Specifications and other informative documents such as a White Paper can be found here

Some NetIC implementations make use of the TL1 interface:

  • The TL1 interface is a widely used management interface in telecommunications. Depending on their underlying network, some NetIC implementations use this interface to pass messages between the NetIC Generic Enabler and the Network Elements (NEs) of the underlying network.
  • Operations domains such as surveillance, memory administration, access and testing define and use TL1 messages to accomplish specific functions between the GE and the NE.
  • TL1 is defined in Telcordia Technologies (formerly Bellcore) Generic Requirements document GR-831-CORE which can be found here

Some NetIC implementations make use of the ALTO interface:

  • The 'Application Layer Traffic Optimization (ALTO)' IETF Working Group defines an interface through which an application can request guidance from the network, which can be used, e.g., for service location or placement.
  • The ALTO protocol is intended to provide applications with information to enable them for a guided choice among several application endpoints in a network. ALTO enables fixed and mobile service providers to inform application clients about endpoints costs, in terms of e.g. routing costs or hop count. Selection of application endpoints is therefore enhanced with respect to traditional systems such as Geo-DNS.
  • The ALTO working group does not define the mechanisms used for deriving network topology/infrastructure information or preference. This task is left to other already existing or still to be created services.
  • Specifications and other informative documents can be found here

Terms and Definitions

This section comprises a summary of terms and definitions introduced during the previous sections. It intends to establish a vocabulary that will help carry out discussions internally and with third parties (e.g., Use Case projects in the EU FP7 Future Internet PPP). For a summary of terms and definitions managed at an overall FI-WARE level, please refer to FIWARE Global Terms and Definitions.

  • Connected Devices: A connected or smart device can be an advanced device located at home, such as a set top box and multimedia device (including advanced TVs), PCs, storage (NAS like), indoor handset (home/advanced DECT), or game consoles. Furthermore, mobile devices, such as mobile/smart phones (GSM/3-4G), tablets, netbooks, on-board units, (in-car devices) or information kiosks are connected devices, too. It is very likely that new devices will appear and fall into this “smart devices” category during the project execution (femto cells, etc.).
  • Cloud Proxy: A device encompassing broadband connectivity, local connectivity, routing and networking functionalities, as well as service-enabling functionalities supported by a modular software execution environment (virtual machines, advanced middleware). The "Cloud Proxy" or "Home Hub" is powerful enough to run local applications (for example, home automation tasks such as heating control, or content-related ones such as Peer-to-Peer (P2P) or content backup). It will also generally include local storage and may be an enabler for controlling privacy: some content or data can be stored locally and controlled only by the user, without the risk of his/her data being controlled by third parties, under consideration of the overall security architecture.
  • Open Networking: Open networking is a concept that enables network nodes to provide intelligent network connectivity by dynamic configuration via open interfaces. Examples for provided features are the fulfillment of bandwidth or quality requirements, seamless mobility, or highly efficient data transport optimised for the application (e.g., with minimum network resource or energy consumption).
  • Network Service: A control and policy layer/stratum within the network architecture of a service provider. The Network Service provides access to capabilities of the telecommunication network through open and secure Application Programming Interfaces (APIs) and other interfaces/sub-layers. The Network Service concept aims at providing a stratum that serves value-added services and applications at a higher application and service layer and exploits features of the underlying transport and technology layer (e.g., via NetIC interfaces).
  • OpenFlow: An open interface for remotely controlling network nodes with switching capabilities.
  • NOX: An OpenFlow network controller. NOX is an open source project developed in C++ and Python; specifically, it is a platform for building network control applications for OpenFlow networks. NOX was initially developed at Nicira Networks side-by-side with OpenFlow. Nicira donated NOX to the research community in 2008, and since then it has been the basis for many and various research projects in the early exploration of the SDN space.
  • TL1: Transaction Language 1 - this is a machine-to-machine protocol defined by Telcordia (GR-831-CORE), which is used by some NetIC implementations to interface to switches in the underlying network.
  • NEV: Network Element Virtualizer - this is a glue technology for optical network elements which is used by some NetIC implementations.