FIWARE.OpenSpecification.Security.SecurityMonitoring - FIWARE Forge Wiki


Name FIWARE.OpenSpecification.Security.Security_Monitoring
Chapter Security,
Catalogue-Link to Implementation Security Monitoring
Owner Thales, Olivier Bettan



This document contains a self-contained open specification of a FIWARE generic enabler. Please also consult the FIWARE Product Vision, the website at http://www.fiware.org and similar pages in order to understand the complete context of the FIWARE platform.


Copyright © 2012-2015 by THALES

Legal Notice

Please check the following Legal Notice to understand the rights to use these specifications.


The Security Monitoring GE is part of the overall Security Management System in FI-WARE and as such is part of each and every FI-WARE instance. The target users are: FI-WARE Instance Providers and FI-WARE Application/Service Providers.

Security monitoring is the first step towards understanding the real security state of a future internet environment and, hence, towards realizing the execution of services with desired security behaviour and detection of potential attacks or non-authorized usage.

This generic enabler deals with security monitoring and beyond, up to pro-active cyber-security, i.e. protection of “assets” at large. It makes it possible to assess the real security state of a future internet environment and, hence, to execute services with the desired security behaviour and to detect potential attacks or non-authorized usage.

The main concerns of Security Monitoring are:

1. Detect vulnerabilities and identify risks
2. Score vulnerabilities impact and assess risks
3. Analyze events to correlate and detect threats and attacks
4. Treat risks and propose counter-measures
5. Visualize resulting alarms and residual risks in order to allow efficient monitoring from the security perspective.

Basic Concepts

MulVAL Attack Paths Engine

To determine the security impact that software vulnerabilities have on a particular network, one must consider interactions among multiple network elements. For a vulnerability analysis tool to be useful in practice, the model used in the analysis must be able to automatically integrate formal vulnerability specifications from heterogeneous vulnerability sources.

The MulVAL Attack Paths Engine is an end-to-end framework and reasoning system that conducts multihost, multistage vulnerability analysis on a network. The MulVAL Attack Paths Engine adopts Datalog (a query and rule language for deductive databases) as the modeling language for the elements in the analysis (bug specification, configuration description, reasoning rules, operating-system permission and privilege model, etc.). It leverages existing vulnerability databases and scanning tools by expressing their output in Datalog and feeding it to the MulVAL Attack Paths Engine.
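As a concrete illustration of that last step, the hypothetical sketch below renders one scanner finding as Datalog facts. The vulExists/vulProperty predicate shapes follow the MulVAL literature, but the finding record layout and field values are assumptions made for this example.

```python
# Illustrative sketch: expressing one scanner finding as Datalog facts.
# The vulExists/vulProperty predicates follow MulVAL's input vocabulary;
# the `finding` record layout is an assumption made for this example.

def to_datalog_fact(predicate, *args):
    """Render one Datalog fact with each argument quoted as an atom."""
    quoted = ", ".join(f"'{a}'" for a in args)
    return f"{predicate}({quoted})."

# Hypothetical scanner finding: host, CVE id, program, exploit range, consequence.
finding = {
    "host": "webServer",
    "cve": "CVE-2002-0392",
    "program": "httpd",
    "range": "remoteExploit",
    "consequence": "privEscalation",
}

facts = [
    to_datalog_fact("vulExists", finding["host"], finding["cve"], finding["program"]),
    to_datalog_fact("vulProperty", finding["cve"], finding["range"], finding["consequence"]),
]
```

The resulting fact strings can then be appended to the Datalog input file consumed by the reasoning engine.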

The inputs to the MulVAL Attack Paths Engine’s analysis are:

  • Advisories: What vulnerabilities have been reported and do they exist on my machines?
  • Host configuration: What software and services are running on my hosts, and how are they configured?
  • Network configuration: How are my network routers and firewalls configured?
  • Principals: Who are the users of my network?
  • Interaction: What is the model of how all these components interact?
  • Policy: What accesses do I want to allow?

The current MulVAL Attack Paths Engine data model relies on the exploit range (local or remote) and the privilege-escalation consequence data stored in the NIST NVD. The figure below shows the Attack Paths Engine chain with inputs and outputs. The colour codes are the same as for the previous figure. The metrics include references to critical paths, obtained from the Common Vulnerability Scoring System (CVSS), a universal, open and standardized method for rating IT vulnerabilities.

The MulVAL Attack Paths Engine uses Datalog (a subset of Prolog) to produce logical attack graphs. It takes as input a set of first-order logical configuration predicates and produces the corresponding attack graph. These configuration predicates include network specific security policies, binding information and vulnerability data gathered from vulnerability databases. The MulVAL Attack Paths Engine identifies possible policy violations through logical inference.

An attack graph presents a qualitative view of security discrepancies:

  • It shows what attacks are possible, but does not tell you how bad the problem is.
  • It captures the interactions among all attack possibilities in your system.

CVSS provides a quantitative property of individual vulnerabilities:

  • It tells you how bad an individual vulnerability could be.
  • But it does not tell you how bad it may be in your system.

The idea is to use CVSS to produce a component metric, i.e. a numeric measure on the conditional probability of success of an attack step. The MulVAL Attack Paths Engine aggregates the probabilities over the attack-graph structure to provide a cumulative metric, i.e. the probability of attacker success in your system. Suppose there is a “dedicated attacker” who will try all possible ways to attack your system. If one path fails, he will try another. The cumulative metric is the probability that he can succeed in at least one path.
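The aggregation described above can be sketched as follows, under the simplifying assumptions that attack paths are independent and that a path succeeds only when every one of its steps succeeds; a real attack-graph metric must also account for vertices shared between paths.

```python
# Minimal sketch of the "dedicated attacker" cumulative metric.
# Assumptions: paths are independent, and each step probability is a
# CVSS-derived conditional probability of that attack step succeeding.

def path_success(step_probs):
    """Probability that every step of one attack path succeeds."""
    p = 1.0
    for s in step_probs:
        p *= s
    return p

def cumulative_metric(paths):
    """Probability the attacker succeeds in at least one path."""
    p_all_fail = 1.0
    for steps in paths:
        p_all_fail *= 1.0 - path_success(steps)
    return 1.0 - p_all_fail

# Two hypothetical paths: a two-step path and a single-step path.
paths = [[0.8, 0.5], [0.3]]
metric = cumulative_metric(paths)  # 1 - (1 - 0.4) * (1 - 0.3) = 0.58
```

If one path fails, the dedicated attacker tries another, which is why the metric is the complement of the probability that all paths fail.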

Service Level SIEM

Limitations of current SIEM (Security Information and Event Management) systems mainly concern performance and scalability, leading to an inability to process vast amounts of diverse data in a short time. The next generation of SIEM solutions should overcome the performance limitations of their predecessors, allowing more systems to be monitored, more complex rules to be processed, and events to be correlated across different layers. To achieve these goals, the SIEM included in FI-WARE will incorporate a high-performance parallel correlation engine that drastically improves on the correlation capabilities of the SIEM solutions currently available in the market. In the context of FI-WARE, this high-performance correlation engine will be built on top of OSSIM (Open Source Security Information Management - http://www.ossim.net); however, integration with other tools, such as Prelude or Sentinel, could be considered.

High performance correlation engine

Scored Attack Paths

The risk and impact scoring (hereinafter simply referred to as ’scoring’) is composed of two components:

1. Risk scoring

Risk scoring provides a numerical estimation of the risk associated with the entity under scrutiny. A score can represent either a probability, or a derivative value obtained from a set of probability values. Formal definitions of scores provided in the literature can be summarized as follows: given an exploit e and a condition c, the individual score p(e) stands for the intrinsic likelihood of the exploit e being executed, given that all the conditions required for executing e in the given attack graph are already satisfied. The cumulative scores P(e) and P(c) measure the overall likelihood that an attacker can successfully reach and execute the exploit e (or satisfy the condition c).

2. Impact scoring

Impact scoring offers an assessment of the extent to which processes and security policies are impacted when a given IT asset target has been compromised. The impact may have different meanings based on the context and metrics employed during the computation of the impact score:

  • Confidentiality,
  • Integrity,
  • Availability,
  • Authorization,
  • Authentication,
  • Accountability,
  • Implemented controls for assessing a threat’s severity.

Note that while the risk scoring process targets three entities, i.e. (i) all vertices, (ii) an attack path, and (iii) an attack graph, impact scoring concerns only (ii) and (iii).

In this section we provide a description of the different categories of scores that are considered when assessing the risk and impact values required in the remediation process.

1. Individual and cumulative vertex scores

Examples of such scores are given in the following Figure, on the right of each vertex of the attack graph. As already mentioned, these types of scores are probabilities, so their values always lie between 0 and 1.

Partial view of a MulVAL attack graph.

2. Attack path score

The score of an attack path is a derivative of the individual and cumulative scores of the vertices that belong to that path. Although an attack path is mainly defined by its target vertex, the score is attached to the path as a whole. This category of scoring is employed during the selection of the remediation alternative. A schematic view of scoring an attack path is provided in the following Figure.

Attack path scoring process.

Botnet Tracking System

The NXDOMAIN-based Analysis focuses on the detection of “domain flux botnets”, where the C&C domain names are frequently changed in order to escape classical block-lists such as those provided by DNSBL (Domain Name System Blacklists). This analysis relies on the observation of the behaviour of such botnets, and of the way the bots try to locate their C&C servers. In order to find the domain name attached to the C&C server, the bots first request several domain names, determined by more or less complex Domain Generating Algorithms (DGA): time-based, pseudorandom characters, dictionary-based generation, etc. At a given time, such algorithms generate a list of possible domain names to request, among which only one or a few will actually be reserved by the botmaster.

Because only a few domain names are actually associated with an IP address, bots will generate several DNS requests, answered by DNS errors, until they find an active domain. The target of the proposed solution is to detect abnormal error rates in order to identify and track the underlying botnet. The advantages of such an approach are:

  • The DNS error traffic represents only a small portion of the whole DNS flow, thus ensuring a better scalability of our approach, a faster detection and a “less intrusive” analysis for the end-users.
  • The DNS errors present a very limited meaning by themselves. Such analysis would not allow users’ profiling, and limit in that way the privacy impact.
  • The DNS error traffic presents a very high dispersion compared to the successful traffic. For example, there will be a huge number of users making requests to www.orange.com, while the probability of a user requesting the non-existing domain whzejdqmvnt.dynserv.com is very low. This characteristic makes DNS error traffic easier to analyse in order to detect abnormal behaviours.
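The error-rate observation above can be sketched as a simple detector: count, per client, the share of DNS answers that are errors over a window and flag clients whose ratio is abnormal. The field names, threshold and minimum-query values below are illustrative assumptions, not part of the GE specification.

```python
# Illustrative NXDOMAIN-rate detector: flag hosts whose share of failed DNS
# lookups over an observation window exceeds a threshold.
from collections import Counter

def suspicious_hosts(dns_events, threshold=0.5, min_queries=20):
    """dns_events: iterable of (client_ip, rcode); 'NXDOMAIN' marks an error."""
    total, errors = Counter(), Counter()
    for client, rcode in dns_events:
        total[client] += 1
        if rcode == "NXDOMAIN":
            errors[client] += 1
    return {c for c in total
            if total[c] >= min_queries and errors[c] / total[c] > threshold}

# A bot probing DGA-generated domains produces mostly NXDOMAIN answers,
# while a normal client resolves existing names successfully.
events = ([("10.0.0.5", "NXDOMAIN")] * 18 + [("10.0.0.5", "NOERROR")] * 2
          + [("10.0.0.9", "NOERROR")] * 25)
flagged = suspicious_hosts(events)
```

Because only the error answers matter, the detector inspects a small fraction of the DNS flow, matching the scalability and privacy arguments above.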

IoT Fuzzer

Fuzzing is a software testing technique that involves providing valid, invalid, unexpected or random information as input to an application. The program then reacts to these inputs (reporting exceptions or crashes, deviating from its normal behavior, or continuing its normal flow), and the way it reacts is monitored. The goal of the technique is to find unexpected scenarios that escape the normal flow and produce unexpected behavior, in a highly automated, cost-effective manner.

In the case of Internet of Things devices, the target application is either the protocol implementations or the applications that reside on a remote device; so, the IoT Fuzzer will work by sending messages through the network to the target device, in order to test its behavior.
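A minimal mutation-based sketch of this idea is shown below: take a known-valid message, flip random bytes, and deliver each variant to the target while monitoring its behaviour. The message bytes and the commented-out delivery calls are hypothetical; the actual IoT Fuzzer drives protocol stacks such as 6LoWPAN rather than raw sockets.

```python
# Mutation-based fuzzing sketch: derive test cases from a valid message
# by replacing random bytes, then (hypothetically) send each to the device.
import random

def mutate(message, n_flips=1, rng=None):
    """Return a copy of `message` (bytes) with n_flips bytes replaced at random."""
    rng = rng or random.Random()
    data = bytearray(message)
    for _ in range(n_flips):
        pos = rng.randrange(len(data))
        data[pos] = rng.randrange(256)
    return bytes(data)

valid = b"\x40\x01\x00\x01"          # hypothetical 4-byte request header
rng = random.Random(42)              # seeded for a reproducible campaign
cases = [mutate(valid, rng=rng) for _ in range(100)]

# Hypothetical delivery loop (target address and monitoring are assumptions):
# for case in cases:
#     sock.sendto(case, (target_ip, target_port))
#     ...watch the device for crashes, timeouts or anomalous replies...
```

Seeding the generator makes a fuzzing campaign reproducible, so a crash-triggering case can be replayed against a patched device.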

6LoWPAN is the acronym of IPv6 over Low power Wireless Personal Area Networks; it is also the name of the IETF working group in charge of the protocol specification.

The 6LoWPAN working group has defined an encapsulation and header compression mechanism consisting of a set of compression/decompression rules that take advantage of the most common messages sent through a typical wireless sensor network, and exploit features of the underlying layer, IEEE 802.15.4, and the upper layer, IPv6. Furthermore, duplication of information is avoided, allowing the protocol to send very short messages and, in this way, helping the device to save energy. The protocol also defines rules to fragment long IPv6 messages, allowing it to work over link layers whose packets are shorter than IPv6 requires.

Note on interaction between the Fuzzer and the IoT Work Package

The Protocol Adapter GE provides an adaptation layer between the Gateway and IoT devices, for devices that include an IP stack and support the CoAP protocol (from the IETF "CoRE" group), and the IoT Work Package concentrates on the application layer, and relies on existing standards for the lower network layers.

6LoWPAN & RPL are simply two of these standards, allowing communication with IoT devices over IPv6; they are also being defined by the IETF (by the "6lowpan" and "roll" groups).

Hence, there is no conflict between this GE and the IoT Work Package, as they do not target the same layers.

The Fuzzer can be used as-is by Use Cases that decide to deploy devices that use the 6LoWPAN stack, and it can support any protocol for which a Scapy module exists, like the ZigBee protocol stack.

In the event Use Cases decide to adopt other standards, and have an interest in the Fuzzer, the possibility to implement the necessary modules can also be considered.

Android Vulnerability Assessment Tool

The Android Vulnerability Assessment Tool is an OVAL (Open Vulnerability and Assessment Language) interpreter for Android devices.

Nowadays, the OVAL language is mostly used by vendors and leading security organizations to publish security-related information that warns about current threats and system vulnerabilities. OVAL is an XML-based language, and its repositories offer a wide range of security advisories that can be used to avoid vulnerable states as well as to harden networks and systems according to best-practice recommendations. From a technical perspective, a vulnerability can be considered as a combination of conditions such that, if all of them are observed on a target system, the security problem described by the vulnerability is present on that system. Each condition in turn can be understood as the state that should be observed on a specific object. When the object under analysis exhibits the specified state, the condition is said to be true on that system. In that context, OVAL vulnerability descriptions can be directly mapped to the usual way a vulnerability is understood, as shown in the following figure.

Vulnerability conception mapping

Within the OVAL language, a specific vulnerability is described using an OVAL definition. An OVAL definition specifies a criteria element that logically combines a set of OVAL tests. Each OVAL test in turn represents the process by which a specific condition or property is assessed on the target system. Each OVAL test examines an OVAL object looking for a specific state; thus an OVAL test is true if the referenced OVAL object matches the specified OVAL state. The overall result for the criteria specified in the OVAL definition is built from the results of each referenced OVAL test.

In general terms, the OVAL standard includes:

  • a vulnerability definition schema, which represents a vulnerability in terms of the system configuration, version and state that allow said vulnerability to be exploited;
  • a system characteristics schema, which allows the system to be analysed and its current state to be retrieved and compared with the one described in the definition;
  • a result schema, which reports the outcome of the analysis.

As an example, the following figure illustrates a situation where a vulnerability for the Android platform has just been disclosed. For this vulnerability to be present, two conditions must hold simultaneously: (I) the version of the platform must be 2.3.6 and (II) the file libsysutils.so must exist (thus N would be 2 in the previous figure). Such a vulnerability can be expressed within an OVAL document by defining an OVAL definition that arranges two OVAL tests as a logical conjunction. One test is in charge of assessing the system version and the other checks the file status. The OVAL objects used in these tests are an object that represents the version of the system and another object that represents the required file, respectively. Finally, the OVAL states, one for the version and one for the file status, express the states expected to be observed on each object for the tests to be true, and hence define the truth or falsehood of the OVAL definition. In this particular example, it is expected to observe the value 2.3.6 as the version of the system, and the existence of the specified file. If these two properties are observed, then the vulnerability is present on the target system.

OVAL example for Android

Once an OVAL document has been specified, the regular approach to performing its assessment over a target system can be summarized in three main steps. As shown in the figure, step 1 consists of interpreting the document that specifies the objects and tests to be evaluated. At step 2, the target system is analyzed looking for present vulnerabilities. The OVAL analysis involves two parts, namely the collection of the required OVAL objects to be analyzed, and the comparison of the collected OVAL items against the specified OVAL states. Finally, a report is produced at step 3 indicating the results of the assessment process.


Decision Making Support

The decision making support provides security operators with tools for proposing cost-sensitive remediations to attack paths.

The attack paths are shown to a security operator, ordered by their scores, which makes it easy to understand the severity of the consequences of each attack path. To calculate a remediation (Figure 6) for the chosen attack path, the tool first extracts the necessary information from the attack path to be corrected. Then, it computes several lists of remediations that could reduce or cut this attack path. Finally, it estimates the cost of each list of remediations and proposes all the lists, ordered by cost, to security operators. Operators can choose one remediation list and, thanks to the remediation validation, check whether or not the system is more secure after the application of this remediation.

Figure 6: Remediation process.

To compute remediations, a remediation database is needed. It will be external to the GE, like the vulnerability database. This database connects vulnerabilities (for example via a Common Vulnerabilities and Exposures identifier - CVE ID) with possible adapted remediations. Several types of remediation can be used, for example a patch (which corrects a vulnerability) or a signature of known attacks (which prevents the exploitation of a vulnerability). To build the remediation database, information about patches can be extracted from publicly available Security Advisories (for example, from CERT-EU or the National Vulnerability Database). Information about signatures and the related vulnerability can be extracted from the signature database that contains the CVE ID. The last type of remediation provided by the remediation tool cannot be stored in the remediation database, because it is a topological remediation: it provides firewall rules that can prevent the intrusion of the attacker.

To sort the lists of remediations, a cost function is applied to compute an estimated cost for each list. This cost contains two main components: operational costs and impact costs. The operational costs represent the costs caused by the deployment of the remediations (length of the deployment, maintenance costs, test costs…), whereas the impact costs represent the negative impact (side effects) that could follow a remediation deployment.
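The ordering step can be sketched as follows. The cost figures and remediation names are invented for illustration, and a real cost function would weight the operational and impact components according to operator policy.

```python
# Sketch of cost-based ordering of candidate remediation lists: each list
# gets an estimated cost (operational + impact), and lists are proposed to
# the operator cheapest first. All figures below are hypothetical.

def total_cost(remediation_list):
    """Estimated cost of applying every remediation in the list."""
    return sum(r["operational_cost"] + r["impact_cost"] for r in remediation_list)

candidates = [
    [{"name": "apply vendor patch", "operational_cost": 3, "impact_cost": 1}],
    [{"name": "deploy IDS signature", "operational_cost": 1, "impact_cost": 0},
     {"name": "add firewall rule", "operational_cost": 2, "impact_cost": 2}],
]

# Cheapest alternative first, as presented to the security operator.
ordered = sorted(candidates, key=total_cost)
```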

Visualization framework

Systems that monitor the security of a network, such as network probes that are part of an Intrusion Detection System (IDS), can generate a large amount of data. It is generally agreed that one of the most effective ways to present large volumes of data to a human is the use of visual analytics techniques. The INTERSECTION Visualisation Framework aims to enable large quantities of data to be presented to users in ways that aid their understanding of it. It is a flexible framework that allows the easy combination of multiple sources of data, and of third-party and bespoke visualisations. The business-oriented Visualisation Framework allows the user to choose which data is visualised and which visualisation techniques are used.

In addition, it is extensible, allowing the addition of new data sources, data processing and visualisations. The design of the system is built around the concept of web-based mash-ups, which combine content from multiple sources into an integrated experience, and of rich Internet applications (RIA), which offer a similar set of features and functionality to desktop applications.

The functionality of the Visualisation Framework can loosely be considered in two parts: the Data Broker, which collates and manages data; and the Visualisation Web Application, which provides and controls the visualisation. The Visualisation Framework’s Data Broker interfaces with the various data sources to collect data. This can be achieved via a message queue, through accessing an external database or via some other means, depending on the source of this data. Data routes are created, as required, between the various data sources and end points. In all cases, the end points are adapters that transform incoming data into a common form, used throughout the visualisation component. Data is also stored in the visualisation database to allow a user to review historical as well as current data.

The Visualisation Web Application provides a number of key functions:

  • Serves pages to users, allowing them to set up and interact with visualisations.
  • Provides access to locally stored visualisations and facilitates the use of third-party visualisations through the Internet.
  • Acts as a conduit between data held on the server and the user accessing the visual analytics system from a web browser.
  • Allows the user to choose, configure and map data to visualisation axes as required.


Security Monitoring Architecture

In the following we detail the interactions between the components of the Security Monitoring Architecture, as well as their respective connections to the FI-WARE framework. We start with the three blocks composing the input for the Heterogeneous Event Normalization Service. The aim of this service is to normalize heterogeneous events so that they can be processed by the Service-level SIEM. In order to be correlated by the SIEM, the events must be pertinent to the risk analysis.

The events fed into this service are:

1. Context-Based Security & Compliance violation events, from GE provided by ATOS and SAP in WP8
2. Secure Storage Service events from GE provided by TCS in WP8.
3. Cloud, Internet of Things and Interface-to-networks events from GE provided in WP4, WP5 and WP7.

As for the Heterogeneous Event Normalization Service itself, it is part of WP8 and provides inputs for Service-level SIEM, Forensics Framework, and eventually Complex-Event Processing in Data/Context Management of WP6.

The Service-level SIEM provides its results directly to the Visualization Framework. Complex-Event Processing, on the other hand, serves as input for both the Forensics and Visualization Frameworks. It can be deduced that the CEP can potentially be bypassed. The Security Monitoring enabler is intended to be used to assess compliance with the security requirements of the Business Framework for the Applications and Services Ecosystem and Delivery (WP3). Security Monitoring employs the Complex Event Processing from Data/Context Management (WP6). The Attack Paths Engine in Security Monitoring utilizes the Vulnerability Collections from Cloud/IoT/I2ND, the Vulnerabilities Database (NVD), and the Configuration Management Database (CMDB) in WP4, the latter being involved as well. From the internal viewpoint of Security Monitoring, the Attack Paths Engine includes in its entries the Vulnerability Scanners operating on the network, and the Fuzzer block for assessing the applications’ security. Business-oriented Vulnerability requires as input the Configuration Management Database (CMDB) in WP4, and the vulnerability scoring it provides is employed by the Attack Paths Engine, along with the latter’s other inputs. By combining the input from the Botnet Tracking System with that from the Attack Paths Engine, the Counter-Measures App yields the proposed output to the Visualization Framework for further monitoring and decision-making purposes.

The decision making support will aim to help the security operator by proposing several possible countermeasures / remediations that could be deployed in the monitored system / services. To facilitate the decision-making processes, assets contribute to the early warning of harmful events, the detection of suspicious behaviour, the correlation of heterogeneous security events and the computation of critical attack paths. In addition, the man-machine interfaces ensure that solutions are effectively designed for end-users, providing them with increased efficiency. This includes advanced visualisation techniques that provide a more complete picture, so that complex situations can be handled efficiently.

Finally, digital forensics for evidence consists of developing capabilities to trace illegal activity in cyberspace back to its origin. Correlating events provides the means to support the search for evidence. Timeframe analysis can be useful in determining when events occurred. For this, we can review the time and date information contained in the file system metadata, linking error logs, connection logs, security events, alarms and files of interest to the timeframes relevant to the investigation.

The Security Monitoring GE chapter meets the requirements of ISO 27001 (see 4.2, Establishing and managing the ISMS).

Among other things, it provides an answer to paragraph “c” (…Identify a risk assessment methodology that is suited to the ISMS), to paragraph “d” (Identify the risks), to paragraph “e” (Analyse and evaluate the risks), to paragraph “f” (Identify and evaluate options for the treatment of risks) and to paragraph “g” (Select control objectives and controls for the treatment of risks).

In conclusion, the security monitoring enabler is composed of the following functionalities:

  • Normalization of heterogeneous events and correlation. This functionality covers the normalization and correlation of massive and heterogeneous security events.
  • Risk analysis. Considering the threat profiles and the related system vulnerabilities, a risk profile is built for each threat, containing qualitative values which measure the impact of the outcome of threats to the organization.
  • Decision making support. Countermeasures can be selected in order to mitigate the risks, for instance implementing new security practices within the organization, or taking the actions necessary to maintain the existing security practices or fixing the identified vulnerabilities.
  • Digital forensics for evidence. It deals with the acquisition of data from a source, the analysis of the data and extraction of evidence, and the preservation and presentation of the evidence. The digital evidence is intended to facilitate the reconstruction of events found to be malevolent or to help anticipate unauthorized actions.
  • Visualization and reporting. It will provide a dynamic, intuitive and role-based User System Interface for the various stakeholders to use in order to understand the current security situation, to make decisions, and to take appropriate actions.

The GE as envisaged will address security monitoring and beyond, up to pro-active cyber-security i.e. protection of “assets” at large. The figure below provides a high-level initial architectural sketch of the Security Monitoring GE as envisaged in FI-WARE.

Basic Design Principles

MulVAL Attack Paths Engine

To determine the security impact software vulnerabilities have on a FIWARE architecture instantiation, one must consider interactions among multiple network components. The model used in the vulnerability analysis is able to automatically integrate formal vulnerability specifications from the bug-reporting community, but also from various vulnerability databases specific to cloud hosting, the Internet of Things, I2ND, etc. The analysis is also able to scale to networks with thousands of machines.

To achieve these two goals, the MulVAL Attack Paths Engine, composed of an end-to-end framework and a reasoning system, conducts multihost, multistage vulnerability analysis on a FIWARE architecture. The MulVAL Attack Paths Engine adopts Datalog as the modeling language for the elements in the analysis (bug specification, configuration description, reasoning rules, operating-system permission and privilege model, etc.). It easily leverages existing vulnerability databases and scanning tools by expressing their output in Datalog and feeding it to the Attack Path reasoning engine.

The reasoning engine consists of a collection of Datalog rules that capture operating system behavior and the interaction of various components in the network. Thus, integrating information from the bug-reporting community and off-the-shelf scanning tools into the reasoning model is straightforward. Reasoning rules specify the semantics of different kinds of exploits, compromise propagation, and multihop network access. The rules are carefully designed so that information about specific vulnerabilities is factored out into the data generated from OVAL (Open Vulnerability and Assessment Language - MITRE) and ICAT (Categorization of Attacks Toolkit - NIST). The interaction rules characterize general attack methodologies (such as “Trojan Horse client program”), not specific vulnerabilities. Thus the rules do not need to be changed frequently, even if new vulnerabilities are reported frequently.

The MulVAL Attack Paths Engine uses an exploit dependency graph to represent the pre- and post-conditions of exploits. A graph search algorithm can then “string together” individual exploits and find attack paths involving multiple vulnerabilities. This algorithm is adopted in Topological Vulnerability Analysis (TVA), a framework that combines an exploit knowledge base with a remote network vulnerability scanner to analyze exploit sequences leading to attack goals. Compared with a graph data structure, Datalog provides a declarative specification of the reasoning logic, making it easier to review and augment the reasoning engine when necessary.

The reasoning engine scales well with the size of the network. Once all the information is collected, the analysis can be performed in seconds for networks with thousands of machines.

Service Level SIEM

A SIEM or Security Information and Event Management solution is a technology that provides real-time analysis of security events, aggregating data from many sources and providing the ability to consolidate and correlate monitored data to generate reports and alerts. A conventional SIEM deployment is mainly composed of four elements:

1. Sensors: deployed in the networks to monitor network activity. They usually include the low-level detectors and monitors that passively collect data looking for patterns, but they can also include active scanners that try to compile information about node vulnerabilities, or agents that receive data from other hosts of the network.
2. Management Server: this component is in charge of the main processing activities such as normalizing, prioritizing, collecting, risk assessment and correlating engines.
3. Database: where all events and information configuration for the management of the system are stored.
4. Front-end: where the operator can visualize the status of the system and configure the SIEM.
SIEM main elements

From a functional point of view, the Service Level SIEM stack can be illustrated as shown in the next figure, which also depicts the bypass of the OSSIM correlation engine by the high-performance correlation engine running in a Storm cluster.

OSSIM SIEM functional view

The OSSIM agent receives normalized events. Standardised events contain the following fields:

  • type: Type of event, Detector or Monitor.
  • date: date on which the event is received from the device.
  • sensor: IP address of the sensor generating the event
  • plugin_id: Identifier of the type of event generated
  • plugin_sid: Class of event within the type specified in plugin_id
  • priority: Possibly deprecated (the agent cannot decide the priority; only the server can)
  • protocol: Three types of protocol are permitted in these events: TCP, UDP or ICMP. Should a different one reach the server, the event will be rejected.
  • src_ip: IP which the device generating the original event identifies as the source of this event
  • src_port: Source port
  • dst_ip: IP which the device generating the original event identifies as the destination of this event
  • dst_port: Destination port
  • log: Event data that the specific plugin considers as part of the log and which does not fit in the other fields. Due to the Userdata* fields, it is used less and less.
  • data: Normally stores the event payload, although the plugin may use this field for anything else
  • username: User who has generated the event or user with whom it is identifying, mainly used in HIDS events
  • password: Password used in an event
  • filename: File used in an event, mainly used in HIDS events
  • userdata1 to userdata9: These fields can be defined by the user from the plugin. They can contain any alphanumeric information, and depending on which one is chosen, the type of display they have in the event viewer will change. Up to 9 such fields can be defined for each plugin.
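Assuming the field list above, a normalized event can be pictured as a simple key-value record. The values below are made up for illustration; only the field names come from the text:

```python
# Illustrative normalized event record; field names follow the OSSIM
# agent format described above, values are invented for the example.
normalized_event = {
    "type": "detector",
    "date": "2012-09-01 10:15:00",
    "sensor": "192.168.1.10",
    "plugin_id": 1001,
    "plugin_sid": 2,
    "protocol": "TCP",
    "src_ip": "10.0.0.5",
    "src_port": 4422,
    "dst_ip": "10.0.0.8",
    "dst_port": 80,
    "log": "raw log line as received from the device",
}

# Only TCP, UDP and ICMP events are accepted by the server.
assert normalized_event["protocol"] in ("TCP", "UDP", "ICMP")
```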

Scored Attack Paths

In order to score attack paths, Scored Attack Paths requires the following data inputs:

1. Attack graph with cumulative vertex scores.

The main input element that the scoring process requires is the initial attack graph, along with individual and cumulative scores. Such an attack graph is provided by the MulVAL Attack Paths Engine. Note that while CVSS scores may also be used, the scoring process must use the risk scores that are calculated based on the configuration of the attack graph.

2. Process-IT resource linkage information

The initial data necessary for assessing the impact score is the list of processes that are linked to the IT resource that is assumed to be compromised. These are provided by the chain Topological Data Extraction, Repository Model, CMDB.

3. Enumeration of impact metrics and values.

Once the Process-IT resource linkage information is obtained, the impact metrics, as well as values for each metric, are required for the calculation of the impact score. Impact metrics may differ in interval values, type (either qualitative or quantitative), and semantic meaning, yet these metrics cannot be subsumed by risk score values, and will commonly be provided by the organization in which the remediation tool is used. Since qualitative metrics can be converted to quantitative ones, Scored Attack Paths assumes that all metrics are quantitative by default.
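One way to picture the metric handling above is to map qualitative values onto a numeric scale and combine them into a single impact score. The scale and the weighted-average combination below are assumptions for illustration, not the actual Scored Attack Paths formula:

```python
# Hypothetical conversion of qualitative impact metrics to quantitative
# values, combined into one score via a weighted average. The scale and
# the combination rule are assumptions, not the GE's actual formula.
QUALITATIVE_SCALE = {"low": 0.25, "medium": 0.5, "high": 0.75, "critical": 1.0}

def impact_score(metrics):
    """metrics: {name: (qualitative_value, weight)} -> score in [0, 1]."""
    total_weight = sum(w for _, w in metrics.values())
    return sum(QUALITATIVE_SCALE[v] * w
               for v, w in metrics.values()) / total_weight

# Illustrative processes linked to a compromised IT resource.
processes = {"billing": ("critical", 3.0), "reporting": ("low", 1.0)}
print(impact_score(processes))  # 0.8125
```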

IoT Fuzzer

The IoT Fuzzer is built around the following principles:

  • the fuzzer engine itself uses the Scapy packet manipulation framework, to ease the process of packet analysis and creation;
  • the fuzzing process is driven by scenarios written in XML that define sequences of exchanged packets:
    • outgoing messages defined in these scenarios are altered by the fuzzing algorithms before being sent;
    • upon reception of the related response, it is checked against the one defined in the scenario, to determine whether the tested device behaved properly;
  • to be able to inject crafted 6LoWPAN packets onto the network, it uses an Atmel RZUSBstick (an 802.15.4 USB dongle) running a modified version of the Contiki OS that allows the hardware to relay packets without altering them.
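The alteration of outgoing messages can be illustrated with a generic byte-level mutation. This is a sketch of the idea only, not the GE's actual fuzzing algorithms, which operate on Scapy packet fields as defined by the XML scenarios:

```python
import random

# Generic bit-flipping mutation of an outgoing frame before it is sent.
# Purely illustrative; the real fuzzer is scenario-driven.
def mutate(payload: bytes, n_flips: int = 1, seed: int = 0) -> bytes:
    rng = random.Random(seed)             # seeded, so runs are reproducible
    data = bytearray(payload)
    for _ in range(n_flips):
        i = rng.randrange(len(data))      # pick a random byte...
        data[i] ^= 1 << rng.randrange(8)  # ...and flip one of its bits
    return bytes(data)

original = b"\x41\x88\x00\xff"  # arbitrary example frame bytes
fuzzed = mutate(original, n_flips=2)
assert len(fuzzed) == len(original)  # mutation preserves frame length
```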

Android Vulnerability Assessment Tool

The Vulnerability Assessment Tool is an Android application that mostly behaves as an Android Service. Most of the time, it stays idle in the background, waiting for new vulnerability descriptions or system events. Its architecture, illustrated in the following figure, has been designed as a distributed infrastructure composed of three main building blocks: (1) a knowledge source that provides existing security advisories, (2) Android-based devices running a self-assessment service and (3) a reporting system for storing analysis results and performing further analysis.

OVAL-based vulnerability assessment framework for the Android platform

The overall process is defined as follows. First, at step 1, the Android device periodically monitors and queries for new vulnerability description updates. This is achieved by using a web service provided by the security advisory provider. At step 2, the provider examines its database and sends back newly found entries. The updater tool running inside the Android device then synchronizes its security advisories. When new information is available or configuration changes occur within the system, a self-assessment service is launched in order to analyze the device at step 3. At step 4, the report containing the collected data and the results of the analyzed vulnerabilities is sent to a reporting system by means of a web service request. At step 5, the obtained results are stored and analyzed to detect potential threats within the Android device. In addition, this information can also be used for other purposes such as forensic activities or statistical analysis.


To compute the appropriate remediations and the hosts on which they can be deployed, the decision-making support uses several techniques, according to the type of remediation. For patches, thanks to the information contained in the MulVAL attack path, it is straightforward to determine on which machine the patch must be deployed to correct the vulnerability: it is the machine in the same node, the leaf containing the vulnerability information.

It is less simple for the deployment of Snort or firewall rules, because the route of the attacker's packets must be known to determine on which machine the remediation can be deployed. To determine this route, the topological information is used to recreate a simulated topology. With this route, the remediation tool only has to propose the remediation on the hosts containing a firewall or an intrusion prevention system, according to the type of remediation.

The topological information and a dependency graph are also used to estimate the impact costs of the remediation, especially for firewall rules deployment. Indeed, the system has to know which service(s) will be interrupted in order to compute these costs. To do so, all the dependencies present in the dependency graph are checked in a simulated topology containing the remediation and compared to the topology without the remediation. If a service is disrupted by the remediation, the corresponding cost is added to the remediation impact costs. The operational costs depend mainly on the type of remediation and on cost parameters that must be provided by the security operators, but their calculation is straightforward once this information is known.
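The impact-cost estimation above can be sketched as follows: every dependency in the dependency graph is re-checked in a simulated topology with the remediation applied, and each broken dependency adds its cost. Host names, services and cost values below are illustrative assumptions:

```python
# Hypothetical sketch of remediation impact-cost estimation: dependencies
# that become unreachable in the simulated (remediated) topology add
# their interruption cost to the total. All names/costs are made up.
def impact_cost(dependencies, reachable, costs):
    """dependencies: list of (client, service) pairs;
    reachable: callable (client, service) -> bool in the simulated topology;
    costs: {service: interruption cost}."""
    return sum(costs[svc] for cli, svc in dependencies
               if not reachable(cli, svc))

deps = [("webServer", "database"), ("workstation", "mailServer")]
# Simulated topology in which a new firewall rule blocks webServer -> database.
blocked = {("webServer", "database")}
cost = impact_cost(deps, lambda c, s: (c, s) not in blocked,
                   {"database": 100.0, "mailServer": 20.0})
print(cost)  # 100.0
```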

Visualization framework

  • Decoupling of data from visualisations. A fundamental principle is that the data is stored in common formats so that any visualisation can work with the data and each visualisation can work with multiple, different data sets. This requires input data to be formatted in a suitable way (e.g. identification and location of fields shared with other data). New data types can be accommodated, perhaps requiring some minor modifications to the Service. Highly structured data, and particularly XML formats, are preferred.
  • Real-time input. The Service is designed to receive data in real time, and is particularly suited for publish-subscribe architectures.

Re-utilised Technologies/Specifications

Attack Path Engine

The Attack Path Engine uses the reports of vulnerability scanners: scanners run asynchronously on each host, adapting existing tools such as OVAL to a great extent, and an analyzer runs on one host whenever new information arrives from the scanners.

An OVAL scanner takes such formalized vulnerability definitions and tests a machine for vulnerable software. The result is converted into Datalog clauses like the following:

vulExists(webServer, 'CAN-2002-0392', httpd).

Namely, the scanner identified a vulnerability with CVE id CAN-2002-0392 on machine webServer; the vulnerability involves the server program httpd. However, the effect of the vulnerability (how it can be exploited and what the consequence is) is not formalized in OVAL. NVD, the vulnerability database developed by the National Institute of Standards and Technology (NIST), provides information about a vulnerability's effect through CVSS impact metrics. The relevant information is converted from CVSS into Datalog clauses such as:

vulProperty('CAN-2002-0392', remoteExploit, privilegeEscalation).
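The conversion steps above amount to rendering scanner output and CVSS data as Datalog clauses. A minimal sketch, assuming simple string templating; the function names are hypothetical, only the two clause shapes come from the text:

```python
# Render scan results and CVSS-derived effects as Datalog clauses.
# Function names are hypothetical; the clause shapes follow the text.
def vul_exists(host, cve, program):
    return f"vulExists({host}, '{cve}', {program})."

def vul_property(cve, access_vector, consequence):
    return f"vulProperty('{cve}', {access_vector}, {consequence})."

print(vul_exists("webServer", "CAN-2002-0392", "httpd"))
# vulExists(webServer, 'CAN-2002-0392', httpd).
print(vul_property("CAN-2002-0392", "remoteExploit", "privilegeEscalation"))
# vulProperty('CAN-2002-0392', remoteExploit, privilegeEscalation).
```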

The Attack Path Engine models elements in Datalog. The model elements are recorded as Datalog facts. The Attack Path Engine requires all Datalog facts to be defined prior to performing any analysis. Missing or incorrect facts will result in a misleading analysis of the system being modelled. The following table shows the elements modelled by the Attack Path Engine and their Datalog fact statements sorted by the DAP layer to which they belong.

How the Attack Path Engine Datalog facts interrelate is recorded as Datalog reasoning rules that are shown in the following table.

With the occurrence of new vulnerabilities, assessment of their security impact on the network is important in choosing the right countermeasures: patch and reboot, reconfigure a firewall, dismount a file-server partition, and so on.

The next figure shows the sequence diagram with the interactions between the other components:

Vulnerability Data Interface Description

 <definition class="vulnerability" id="oval:org.mitre.oval:def:99" version="4">
   <metadata>
     <title>IE v6.0 Content Disposition/Type Arbitrary Code Execution</title>
     <affected family="windows">
       <platform>Microsoft Windows 2000</platform>
       <product>Microsoft Internet Explorer</product>
     </affected>
     <reference ref_id="CVE-2002-0193" ref_url="http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2002-0193" source="CVE"/>
     <description>Microsoft Internet Explorer 5.01 and 6.0 allow remote attackers to execute arbitrary code via malformed Content-Disposition and Content-Type header fields that cause the application for the spoofed file type to pass the file back to the operating system for handling rather than raise an error message, aka the first variant of the "Content Disposition" vulnerability.</description>
     <oval_repository>
       <dates>
         <submitted date="2004-01-27T05:00:00.000-04:00">
           <contributor organization="The MITRE Corporation">Andrew Buttner</contributor>
         </submitted>
         <modified comment="modified wrt-222 - changed pattern match" date="2005-03-07T05:00:00.000-04:00">
           <contributor organization="The MITRE Corporation">Christine Walzer</contributor>
         </modified>
         <status_change date="2005-03-09T05:00:00.000-04:00">INTERIM</status_change>
         <status_change date="2005-03-29T05:00:00.000-04:00">ACCEPTED</status_change>
         <modified comment="Changed IE registry test to wrt-18" date="2005-09-20T04:00:00.000-04:00">
           <contributor organization="The MITRE Corporation">Christine Walzer</contributor>
         </modified>
         <status_change date="2005-09-21T01:27:00.000-04:00">INTERIM</status_change>
         <status_change date="2005-10-12T05:49:00.000-04:00">ACCEPTED</status_change>
         <modified comment="Added negate=true attribute to criteria sub-block to fix conversion error from OVAL 4.2 to OVAL 5.0" date="2006-07-03T12:56:00.000-04:00">
           <contributor organization="The MITRE Corporation">Matthew Wojcik</contributor>
         </modified>
         <status_change date="2006-07-03T12:56:00.000-04:00">INTERIM</status_change>
         <status_change date="2006-09-27T12:29:41.221-04:00">ACCEPTED</status_change>
         <modified comment="Multiple corrections and update to POSIX compatibility for ste:2878" date="2010-11-29T16:13:00.904-05:00">
           <contributor organization="G2, Inc.">Shane Shaffer</contributor>
         </modified>
         <status_change date="2010-11-29T16:14:04.414-05:00">INTERIM</status_change>
       </dates>
       <status>INTERIM</status>
     </oval_repository>
   </metadata>
   <criteria comment="Software section" operator="AND">
     <criterion comment="the version of mshtml.dll is less than 6.0.2716.2200" negate="false" test_ref="oval:org.mitre.oval:tst:3086"/>
     <criterion comment="the patch q321232 is installed (Installed Components key)" negate="true" test_ref="oval:org.mitre.oval:tst:3119"/>
     <criterion comment="the patch q323759 is installed (Installed Components key)" negate="true" test_ref="oval:org.mitre.oval:tst:3118"/>
     <criterion comment="the patch q328970 is installed (Installed Components key)" negate="true" test_ref="oval:org.mitre.oval:tst:3117"/>
     <criterion comment="the patch q324929 is installed (Installed Components key)" negate="true" test_ref="oval:org.mitre.oval:tst:3116"/>
     <criterion comment="the patch q810847 is installed (Installed Components key)" negate="true" test_ref="oval:org.mitre.oval:tst:3115"/>
     <criterion comment="the patch q813489 is installed (Installed Components key)" negate="true" test_ref="oval:org.mitre.oval:tst:3114"/>
     <criterion comment="the patch q818529 is installed (Installed Components key)" negate="true" test_ref="oval:org.mitre.oval:tst:3113"/>
     <criterion comment="the patch q822925 is installed (Installed Components key)" negate="true" test_ref="oval:org.mitre.oval:tst:3112"/>
     <criterion comment="the patch q828750 is installed (Installed Components key)" negate="true" test_ref="oval:org.mitre.oval:tst:3111"/>
     <criterion comment="the patch q824145 is installed (Installed Components key)" negate="true" test_ref="oval:org.mitre.oval:tst:3110"/>
     <criteria comment="Windows 2000 Service Pack 4 (or later) is installed" negate="true" operator="AND">
       <criterion comment="Windows 2000 is installed" negate="false" test_ref="oval:org.mitre.oval:tst:3085"/>
       <criterion comment="SP4 or later Installed" negate="false" test_ref="oval:org.mitre.oval:tst:3073"/>
     </criteria>
     <criterion comment="Internet Explorer 6 is installed" negate="false" test_ref="oval:org.mitre.oval:tst:3090"/>
   </criteria>
 </definition>

Topological Data Network Interface Description

 The reachability input is like "hacl(HOST1, HOST2, Protocol, Port)", where "hacl" means "host access control list". 
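The hacl facts can be pictured as simple tuples; the hosts, protocols and ports below are made up for illustration:

```python
# "hacl" (host access control list) reachability facts encoded as
# tuples; all entries are illustrative examples.
hacl = {
    ("internet", "webServer", "tcp", 80),
    ("webServer", "fileServer", "nfs", 2049),
}

def can_reach(src, dst, proto, port):
    """True when the access control lists allow src to reach dst."""
    return (src, dst, proto, port) in hacl

assert can_reach("internet", "webServer", "tcp", 80)
assert not can_reach("internet", "fileServer", "nfs", 2049)
```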

Service Level SIEM

The following existing technologies and specifications have been integrated in the Service Level SIEM:

The Service Level SIEM included in the FI-WARE Security Monitoring GE is built on top of OSSIM (Open Source Security Information Management), one of the most widely used open-source SIEMs. It provides the core engine with standard SIEM features such as:

  • Collection and normalization of security events from security components, sensors and services.
  • Event filtering, event aggregation, event masking and analysis of dependencies between events.
  • Correlation of events to detect security risks and attacks at network level (for example, port scanning or brute-force access attempts).
  • Visualization of events and alarms in a dashboard.

The event filtering and correlation processes included in the Service Level SIEM are packaged as topologies to be run in a Storm cluster. Storm provides real-time computation of a large volume of data in a scalable, distributed and fault-tolerant way.

  • Event Processing Language (EPL)

The Event Processing Language (EPL) is the language used in the Service Level SIEM to perform complex event processing. It allows the user to define, with a simple and clear language based on SQL syntax, a pattern of incoming events to detect in the correlation process. Furthermore, EPL is a declarative language suitable for dealing with high-frequency, time-based event data in the Service Level SIEM.
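A typical correlation pattern that would be expressed declaratively in EPL (e.g. "N or more failed logins from the same source within a time window") can be emulated in plain Python to illustrate the idea. The event shape and thresholds below are assumptions, not part of the GE:

```python
from collections import deque

# Plain-Python emulation of a sliding-window correlation pattern of the
# kind one would write declaratively in EPL. Thresholds are illustrative.
def brute_force_alerts(events, threshold=3, window_s=60):
    """events: (timestamp, source_ip) pairs sorted by timestamp;
    returns (timestamp, source_ip) alerts when a source produces
    `threshold` failures within `window_s` seconds."""
    recent = {}
    alerts = []
    for ts, src in events:
        q = recent.setdefault(src, deque())
        q.append(ts)
        while q and ts - q[0] > window_s:   # drop events outside the window
            q.popleft()
        if len(q) >= threshold:
            alerts.append((ts, src))
    return alerts

events = [(0, "10.0.0.5"), (10, "10.0.0.5"), (20, "10.0.0.5"), (200, "10.0.0.5")]
print(brute_force_alerts(events))  # [(20, '10.0.0.5')]
```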

Scored Attack Paths

Scored Attack Paths utilizes the following technologies:

  • Berkeley XML Database

The Berkeley DB XML database specializes in the storage of XML documents, supporting XQuery via XQilla. It is implemented as an additional layer on top of (a legacy version of) Berkeley DB and the Xerces library.

  • JDOM

JDOM is an open source Java-based document object model for XML that was designed specifically for the Java platform in order to exploit its language features.

  • Apache Commons Math

Commons Math is a library of lightweight, self-contained mathematics and statistics components addressing the most common problems not available in the Java programming language or Commons Lang.

Attack graph data interface description

 <fact>RULE 4 (multi-hop access)</fact>
 <fact>RULE 5 (direct network access)</fact>

Botnet Tracking System

Before the analysis, the IP addresses of the clients are anonymized using a reversible hash function in order to preserve their privacy. Because some errors can be directly associated with misconfigured software, a first step is to filter the error traffic using the following criteria:

  • Only DNS domain names longer than 6 characters are processed, as short domain names have been exhausted by generic web sites and cannot therefore be used for domain flux;
  • All the requests made on non-existing Top Level Domains (TLD) like ’.home’ and ’.local’ (mostly linked to the Apple Bonjour protocol) and ’.arpa’ (reverse lookups, which are rarely implemented) are discarded, one of these non-existing TLDs being the 3rd most popular TLD seen on the L root server.

Such filters are therefore useful more for performance reasons than for algorithmic ones. Once the NX error traffic is purged of those generic errors, we build a bipartite graph establishing the relations between failed queries of non-existing domains and clients. Such a graph allows us to identify communities of users with strong connectivity, i.e. making similar errors in a short time frame. A cyclic analysis (every 60 seconds) is then run on the identified sub-graphs in order to compute a Malware Probability Factor (MPF) for each erroneous domain.
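The bipartite client/NX-domain analysis can be sketched as follows. Clients querying the same non-existing domains inside one 60-second window form a community, and each domain receives a naive score based on how many of the observed clients queried it. The scoring below is a simplification for illustration, not the actual MPF computation:

```python
from collections import defaultdict

# Sketch of the bipartite graph between clients and non-existing (NX)
# domains within one analysis window. The score here (fraction of
# clients querying a domain) is illustrative, not the real MPF.
def domain_scores(nx_queries):
    """nx_queries: list of (client, nx_domain) pairs from one window."""
    clients_per_domain = defaultdict(set)
    for client, domain in nx_queries:
        clients_per_domain[domain].add(client)
    total_clients = len({c for c, _ in nx_queries})
    return {d: len(cs) / total_clients
            for d, cs in clients_per_domain.items()}

window = [("c1", "xkqjd.example"), ("c2", "xkqjd.example"),
          ("c2", "typo.example")]
print(domain_scores(window))  # {'xkqjd.example': 1.0, 'typo.example': 0.5}
```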

IoT Fuzzer

The fuzzer communicates with IoT devices through an 802.15.4 network interface connected to the fuzzing platform, which must be in range of the devices and capable of relaying raw link-layer frames.

The fuzzer is driven by XML scenarios that define the sequence of packets to be sent, and how the fuzzed system is expected to reply to these packets.


Android Vulnerability Assessment Tool

The Android client is composed of four main components: (1) an update system that keeps the internal database up-to-date, (2) a vulnerability management system in charge of orchestrating the assessment activities when required, (3) an OVAL interpreter for the Android platform and (4) a reporting system that stores the analysis results internally and sends them to an external reporting system. These components are depicted in the following picture.

The Android client is executed as a lightweight service that runs in the background and can be awakened for two reasons. The first is that the update system in charge of monitoring external knowledge sources has obtained new vulnerability definitions; the second is that changes have occurred in the system, making it likely that some vulnerability definitions need to be re-evaluated.

In order to be aware of these two potential self-assessment triggers, two listeners remain active. The updater listener listens to the vulnerability database updater component and is notified when new vulnerability definitions become available. The event bus listener uses the Android broadcast bus to capture notifications about system changes. If new vulnerability definitions are available or system changes have been detected, a vulnerability definition selection process is launched. This process is in charge of analyzing the cause that triggered the self-assessment activity and deciding which assessment tasks must be performed. Afterwards, the vulnerability manager component uses the services of the OVAL checker to perform the corresponding assessment activity. The results of the assessment are then stored in the internal results database and sent to the external reporting system by performing a web service request. Finally, a local notification is displayed to the user if new vulnerabilities have been found in the system.
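The two triggers described above funnel into the same selection-and-assessment pipeline, which can be sketched as a small service class. All class, method and field names below are illustrative assumptions, not the actual Android client API:

```python
# Hypothetical sketch of the two self-assessment triggers: new
# vulnerability definitions and system-change notifications both lead
# to a definition selection step followed by OVAL assessment.
class SelfAssessmentService:
    def __init__(self, checker, reporter):
        self.checker = checker      # evaluates one OVAL definition
        self.reporter = reporter    # stores/sends assessment results
        self.definitions = []

    def on_definitions_updated(self, new_definitions):
        self.definitions.extend(new_definitions)
        return self._assess(new_definitions)       # evaluate only new ones

    def on_system_change(self, changed_component):
        affected = [d for d in self.definitions
                    if changed_component in d["targets"]]
        return self._assess(affected)              # re-evaluate affected ones

    def _assess(self, definitions):
        results = [(d["id"], self.checker(d)) for d in definitions]
        self.reporter(results)                     # web service request
        return results

svc = SelfAssessmentService(checker=lambda d: d["id"] == "oval:1",
                            reporter=lambda results: None)
svc.on_definitions_updated([{"id": "oval:1", "targets": ["webview"]},
                            {"id": "oval:2", "targets": ["kernel"]}])
print(svc.on_system_change("kernel"))  # [('oval:2', False)]
```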


The decision-making support interacts mainly with four components: the Scored Attack Paths, the MulVAL Attack Graphs Engine, the Topological Data Extraction, and the Visualisation Framework. To these is added the information from other components that is required for the remediation computations.

The interactions between the Scored Attack Paths and the remediation are necessary because attack paths are the starting point of the remediation process. Security operators need to select an attack path to remediate in the list provided by the Scored Attack Paths. On the other hand, the attack path engine is also useful to validate the remediations selected by the security operators. This feedback is necessary to compare the security state of the system before and after deploying a remediation.

The interactions between the remediation tool and the topological data extraction are necessary because the network topology (hosts, routes, deployed firewall rules…) is needed to compute the topology-related remediations (attack signature deployment and firewall rules). The remediation tool also provides a means to apply automatically some of the remediations chosen by the security operators. To do so, the tool needs to change some parameters (for example, adding a firewall rule) in the topological data extraction.

That is why the decision making support provides the following interfaces:

  • An internal interface with the ‘Scored Attack Paths’ to receive the attack paths to be reduced.
  • An external interface to send back the remediations selected by security operators to the Attack Path Engine, allowing these remediations to be validated.
  • An internal interface with the Topological Data Extraction to get the network topology.
  • An external interface with the Topological Data Extraction to apply some of the remediations selected and validated by security operators.
  • A GUI, interacting with the Visualisation Framework, that allows security operators to select the attack path to correct, browse remediations and their estimated costs, select a list of remediations to deploy and, when necessary, validate this list.

Visualization framework

The Visualisation Framework offers a visualisation service that allows users to visualise data from multiple network components. The user accesses the visualisation service through a standard web browser connected to the web-application server over a network connection. The user will experience a single integrated application showing multiple visualisations. Behind the scenes, the browser will complement the information from the visualisation server with data and functionality directly from the Internet.

Users of the framework will follow a similar pattern of creating, interacting with, modifying and eventually removing visualisations. There are therefore three main interactions between users and the Visualisation Framework: adding a new visualisation, modifying an existing visualisation, and removing a visualisation.

Add new visualisation enables a user to view a new visualisation. The user selects the visualisation and data type from a list of available options. External visualisations that support the existing data formats can also be added. The user can customise the visualisation, e.g. by choosing the size of the window. A sequence diagram for the interaction is shown in Figure 2.


Modify visualisation enables a user to modify an existing visualisation. The user can change the type of data displayed, the size of the window, how often the visualisation is updated. The interactions for modifying a visualisation are shown in the sequence diagram in Figure 3.


Remove visualisation enables a user to remove a window containing a visualisation from the display. A sequence diagram for the interaction is shown in Figure 4.


Detailed Specifications

Following is a list of Open Specifications linked to this Generic Enabler. Specifications labeled as "PRELIMINARY" are considered stable but subject to minor changes derived from lessons learned during the last iterations of the development of a first reference implementation planned for the current Major Release of FI-WARE. Specifications labeled as "DRAFT" are planned for future Major Releases of FI-WARE but are provided for the sake of future users.

Open API Specifications

Security Monitoring Generic Enabler APIs are under construction. The following initial functionalities are available since September 2012:

Terms and definitions

This section comprises a summary of terms and definitions introduced during the previous sections. It intends to establish a vocabulary that will help to carry out discussions internally and with third parties (e.g., Use Case projects in the EU FP7 Future Internet PPP). For a summary of terms and definitions managed at overall FI-WARE level, please refer to FIWARE Global Terms and Definitions.

  • CVE. Common Vulnerabilities and Exposures is a dictionary of publicly known information about security vulnerabilities and exposures.
  • Event. A software message indicating an observable or an extraordinary occurrence.
  • IDS. Intrusion Detection System.
  • Sensor. Devices deployed to monitor network activity. They usually include the low level detectors and monitors that passively collect data but they can also include active scanners.
  • SIEM. Security Information and Event Management is a technology that provides real-time analysis of security alerts. It aggregates data from many sources, providing the ability to consolidate monitored data and to notify about immediate issues.
