
FIWARE QA Activities


The Quality Assurance chapter is mainly devoted to analyzing and assessing the level of quality of the FIWARE Generic Enablers, from both functional and non-functional points of view: not only checking that the GEs work as expected according to their specifications and additional documentation, but also verifying that they provide a good level of performance, stability and scalability.

For this reason, the QA chapter is organized into two sub-tasks:


Functional Testing

This sub-task is led by ENG, with participation from Fraunhofer, Grassroot and EGM. Its main duty is to test all GEs in the Catalogue, checking them in three consecutive phases:

Completeness, consistency and soundness of the documentation

This task consists of reading and assessing the content published for each GE in the FIWARE Catalogue, validating that it is complete, consistent and well written, without errors or missing information. This task is now completely over, with a success rate of 90% for most GEs; only the I2ND chapter is below 40%. The missing or incorrect information has been reported to the GE owners for resolution.

Here follow the details of the tests executed for each Chapter/GE:

Doc Tests By Chapter
Doc Tests Results By Chapter

GEs APIs

This task is in charge of executing calls to the public APIs of all GEs following the provided specifications, assessing how correct the specifications are and whether the APIs follow them. At this point in time, the task is complete and the success rate is almost 100% for all GEs, except for the IoT chapter, which is around 80%.
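As an illustration of the kind of check this task performs, here is a minimal sketch of an API conformance test in Python. The base URL is a hypothetical GEi deployment, and the asserted values stand in for whatever the GE's API specification mandates; the real test suites are in the repositories listed under Resources.

 # Minimal sketch of an API conformance check against a GE's public API.
 # The base URL is a hypothetical GEi deployment; the asserted values stand
 # in for whatever the GE's API specification mandates.
 import requests

 BASE_URL = "http://orion.example.org:1026"  # hypothetical endpoint

 def test_list_entities_matches_spec():
     """Call a public API operation and compare the response to the spec."""
     resp = requests.get(BASE_URL + "/v2/entities",
                         headers={"Accept": "application/json"})
     assert resp.status_code == 200                      # spec: 200 OK
     assert resp.headers["Content-Type"].startswith("application/json")
     assert isinstance(resp.json(), list)                # spec: JSON array

 if __name__ == "__main__":
     test_list_entities_matches_spec()
     print("Operation behaves as specified")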

Here follow the details of the tests executed for each Chapter/GE:

API Tests By Chapter
API Tests Results By Chapter

Integration on the reference architecture

This task has performed the following steps:

1) Selection of the GEs that compose the integrated FIWARE platform

The overall architecture figure below shows the GEs involved in testing the integration.

Integrated architecture tested

2) Analysis of the functional scenarios for using some of the main GE interfaces

3) Planning and writing of the test cases for each functional scenario, in order to highlight the interaction among the GEs of the platform (a minimal sketch of such a test case follows the list)

4) Execution of the test cases
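As referenced in step 3, the following is a minimal sketch of what such an integration test case could look like in Python. It exercises the interaction between two enablers rather than each one in isolation; the host names are hypothetical, and the scenario (a Context Broker notifying a downstream subscriber) is just one plausible example, not the actual test plan.

 # Minimal sketch of an integration test case between two GEs (hypothetical
 # hosts; the real scenarios are described in the test-case documents).
 import requests

 ORION = "http://orion.example.org:1026"   # Context Broker (assumed host)
 SINK = "http://cygnus.example.org:5050"   # downstream subscriber (assumed)

 def test_entity_updates_are_notified():
     """Exercise the interaction between two enablers, not each in
     isolation: subscribe a downstream GE to an entity, then create it."""
     subscription = {
         "subject": {"entities": [{"id": "Room1", "type": "Room"}]},
         "notification": {"http": {"url": SINK + "/notify"}},
     }
     resp = requests.post(ORION + "/v2/subscriptions", json=subscription)
     assert resp.status_code == 201   # subscription created

     entity = {"id": "Room1", "type": "Room",
               "temperature": {"value": 23.5, "type": "Number"}}
     resp = requests.post(ORION + "/v2/entities", json=entity)
     assert resp.status_code == 201   # entity created; the sink is notified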

GEs courses in FIWARE Academy

Besides these three phases, this sub-task is also in charge, as decided by the FIWARE Technical Committee, of reviewing the content of the GE courses in the FIWARE Academy; this review is one more criterion contributing to the quality assessment of the GEs.

Here follow the details about the review of each GE course in the FIWARE Academy.

Academy Courses Testing

The results of this work are maintained in a dashboard that collects all the tests done, the status of the work in progress and the results obtained so far. It can be consulted here [[1]].

NGSI APIs testing

Test Generation

In order to test the NGSI interfaces of FIWARE Generic Enabler implementations, Easy Global Market used Model-Based Testing (MBT) methods. MBT is the automatic generation of test procedures from models of system requirements and behaviour taken from the specifications. Although this type of testing requires more up-front effort to build the model, it offers substantial advantages over traditional software testing methods:

- Design more & code less.

- High coverage: tests continue to find bugs, not just regressions due to changes in the code path or dependencies, but also specification problems.

- Model authoring is independent of implementation and actual testing, so these activities can be carried out by different members of a team concurrently.

- Clear traceability from the requirements expressed in the specification to the test results.

The fundamental MBT process includes activities such as test planning and control, test analysis and design (which include MBT modelling and the choice of suitable test selection criteria), test generation, and test implementation and execution.

MBT process

The preceding figure presents the MBT process we used for testing FIWARE Generic Enabler implementations. As shown in the figure, in a classical MBT process, test architects take the requirements and defined test objectives as input to model the System Under Test (SUT) (step 1). This MBT model contains static and dynamic views of the system. To benefit as much as possible from the MBT technology, we consider an automated process, where the test model is fed as input to the test generator, which automatically produces abstract test cases and a coverage matrix relating the tests to the covered model elements, or to another test selection criterion (step 2). These tests are then exported automatically (step 3) into a test repository, to which test scripts can be associated. The automated test scripts, in combination with an adaptation layer, link each step of the abstract tests to a concrete command of the SUT and automate the test execution (step 4). In addition, after the test execution, test results and metrics are collected and feedback is sent to the GE implementers and specification writers.
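To make step 4 concrete, here is a minimal sketch of the adaptation-layer idea in Python: each abstract step keyword produced by the test generator is mapped to a concrete command against the SUT. The keyword names, endpoint and data shapes are illustrative assumptions, not the actual tool's output format.

 # Minimal sketch of the adaptation-layer idea in step 4: each abstract
 # step keyword produced by the test generator is mapped to a concrete
 # command on the System Under Test. Keywords, endpoint and data shapes
 # are illustrative assumptions, not the actual tool's output format.
 import requests

 SUT = "http://orion.example.org:1026"  # hypothetical SUT endpoint

 # Adapter table: abstract keyword -> concrete HTTP call against the SUT.
 ADAPTERS = {
     "createEntity": lambda p: requests.post(SUT + "/v2/entities", json=p),
     "getEntity":    lambda p: requests.get(SUT + "/v2/entities/" + p["id"]),
 }

 def run_abstract_test(steps):
     """Execute a generated abstract test case, logging each step's verdict."""
     for keyword, params, expected_status in steps:
         resp = ADAPTERS[keyword](params)
         verdict = "PASS" if resp.status_code == expected_status else "FAIL"
         print(keyword, verdict, resp.status_code)

 # An abstract test case as (keyword, parameters, expected status) triples:
 run_abstract_test([
     ("createEntity", {"id": "Room1", "type": "Room"}, 201),
     ("getEntity", {"id": "Room1"}, 200),
 ])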

Test execution and results

In order to execute the tests, the FIWARE micro-service paradigm was followed and a testbed was set up to turn the abstract test cases supplied by the MBT tool into executable ones. The execution of each test step produces a result log. As an illustration, the following listing shows the execution results obtained from running the test cases generated from the NGSI v2 model against Orion GEi version 1.2.1.

 <execution-results>
   <timestamp>2016-09-06T14:58:25.654</timestamp>
   <execution-time-ms>33401</execution-time-ms>
   <suitesTotalNumber>1</suitesTotalNumber>
   <casesTotalNumber>135</casesTotalNumber>
   <stepsTotalNumber>252</stepsTotalNumber>
   <testCasesResults>
     <resultPass>101</resultPass>
     <resultFailed>34</resultFailed>
   </testCasesResults>
   <testStepsResults>
     <resultPass>218</resultPass>
     <resultFailed>34</resultFailed>
   </testStepsResults>
 </execution-results>

This shows that, of the 135 generated test cases, 101 passed whereas 34 failed. The causes have been identified and illustrate the impact of functional testing:

- Increasing specification robustness: in several points, the specification (still under development at the time of writing) is not clear enough, as it does not define the errors to be generated by each NGSI operation.

- Pinpointing implementation issues: in the above example, the tested implementation did not respect the specification with regard to the specified MIME types.

Feeding such items back to the development teams allows continuous and early detection of issues, thus increasing the overall quality.
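A results file like the listing above also lends itself to automated processing. Below is a minimal Python sketch that computes the pass rate from such a file; the file name is a hypothetical placeholder.

 # Minimal sketch that computes the pass rate from a results file like the
 # listing above (the file name is a hypothetical placeholder).
 import xml.etree.ElementTree as ET

 root = ET.parse("execution-results.xml").getroot()
 cases = root.find("testCasesResults")
 passed = int(cases.findtext("resultPass"))
 failed = int(cases.findtext("resultFailed"))
 rate = 100 * passed / (passed + failed)
 print(f"{passed}/{passed + failed} test cases passed ({rate:.1f}%)")
 # With the figures above: 101/135 test cases passed (74.8%)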

Automatic documentation testing

To keep the quality of the Catalogue documentation compliant with the Guide [2] with minimum effort, Fraunhofer developed a document analytics solution based on scraped page data. First compliance checks show that the best scores, measured against the maximum measurable compliance, are not higher than 35%. As a further improvement, Fraunhofer proposes a weekly notification tool to notify Enabler owners.
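In the same spirit, here is a minimal sketch of what a scraped-page compliance check could look like in Python; the list of required sections and the scoring rule are illustrative assumptions, not the actual criteria of the Guide.

 # Minimal sketch of a scraped-page compliance check; the required sections
 # and the scoring rule are illustrative assumptions, not the Guide's
 # actual criteria.
 import requests
 from html.parser import HTMLParser

 REQUIRED_SECTIONS = ["Overview", "Documentation", "Downloads", "Instances"]

 class TextExtractor(HTMLParser):
     """Collect the visible text of a scraped Catalogue page."""
     def __init__(self):
         super().__init__()
         self.chunks = []
     def handle_data(self, data):
         self.chunks.append(data)

 def compliance_score(page_url):
     """Return the fraction of required sections found on the page."""
     parser = TextExtractor()
     parser.feed(requests.get(page_url).text)
     text = " ".join(parser.chunks)
     found = sum(1 for section in REQUIRED_SECTIONS if section in text)
     return found / len(REQUIRED_SECTIONS)

 # Hypothetical Catalogue page URL:
 print(compliance_score("http://catalogue.example.org/enablers/orion"))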

Non-functional (or stress) testing

This task is led by ATOS, with participation from ENG. It was conceived by the core members of FIWARE as a means of supporting trust and confidence in FIWARE GEs for commercial purposes. Demonstrating that FIWARE GEs run with good performance and stability is a must for the commercial exploitation of FIWARE.

Stress testing method

The task started by defining a lightweight method for specifying and executing the tests, as summarized in the figure below.

Stress Testing Method
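As a rough illustration of the kind of measurement this method produces (minimum, average and maximum response times plus throughput at a given concurrency), here is a minimal load-generation sketch in Python. The target endpoint and load figures are assumptions; the real campaigns used the tooling published under Code and guidelines below.

 # Rough sketch of a stress measurement: a fixed number of concurrent
 # threads call one endpoint, and min/avg/max response times plus overall
 # throughput are reported. Target URL and load figures are assumptions.
 import time
 import requests
 from concurrent.futures import ThreadPoolExecutor

 URL = "http://orion.example.org:1026/version"  # hypothetical target
 THREADS, REQUESTS_PER_THREAD = 30, 100

 def worker(_):
     timings = []
     for _ in range(REQUESTS_PER_THREAD):
         start = time.perf_counter()
         requests.get(URL)
         timings.append(time.perf_counter() - start)
     return timings

 wall_start = time.perf_counter()
 with ThreadPoolExecutor(max_workers=THREADS) as pool:
     results = [t for ts in pool.map(worker, range(THREADS)) for t in ts]
 elapsed = time.perf_counter() - wall_start

 print(f"min {min(results) * 1000:.0f} ms, "
       f"avg {sum(results) / len(results) * 1000:.0f} ms, "
       f"max {max(results) * 1000:.0f} ms, "
       f"{len(results) / elapsed:.0f} requests/s")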

Tested GEs and results (for performance, stability and reliability)

Then, the GEs were prioritized by a most-used criterion to define the list of GEs to be tested. As the GEs are continuously evolving, the tests were re-executed after each new release, so during the project the following GEs and releases were tested:

GEs and obtained results of release 5.1 stress testing

GEs and obtained results of release 5.2 stress testing

GEs and obtained results of release 5.3 stress testing

All the detailed reports with the test executions and results can be found at [[3]], in the FIWARE Quality Assurance folder.

Test of bundles

Since some GEs normally work together in a bundle, the task has also tested two of the most used combinations:

AuthZForce + Wilma + KeyRock

The performance of this bundle under stress conditions is average: not especially bad, but improvable. An average response time of 464 ms (min: 24 ms, max: 3963 ms) was obtained at an average of 99 requests/s, but with more than 30 concurrent users (threads), response times increased significantly, up to almost 4 seconds. On the positive side, there were no crashes after long periods of operation and no errors were found during execution.

The detailed report can be downloaded from here [[4]].

Context Broker + IDAS + Cygnus

The performance of this GE combination is very good, obtaining 280 measures per second for 300 concurrent threads. Besides, the error rate is zero, providing good reliability. The stability over time is also very good, with no crashes at all.

The full report can be seen at [[5]].

Scalability tests

Only the Context Broker has been subjected to these tests for the moment. The intention was to assess the CB's behaviour when system resources were increased. Concretely, the following tests were run and results obtained:

- Increasing the number of CB nodes did not improve performance.

- Increasing the number of database nodes improved performance significantly when a second database was added; further database additions had less impact on performance.

- Increasing the number of connections from the CB to the database increased performance only slightly.

The detailed report about the obtained results can be seen at [[6]].

External assessment

In order to be sure that the applied method and the obtained results were compliant and reliable enough, it was decided to commission an external assessment by an independent expert, Prof. Dr. Holger Schlingloff, Chief Scientist of the System Quality Center (SQC) at the Fraunhofer Institute. He carried out an analysis of our work in February 2016, concluding with the following statements:

- In general, the process for non-functional testing is adequate and the preliminary results obtained by the non-functional tests are satisfactory.

- It is recommended to obtain reference values from the GE owners, that is, GE metrics under normal working conditions. This has already been requested from the GE owners, but few inputs have been received.

- Combinations of GEs should be analyzed more thoroughly, since there could be bottlenecks in bundles. This has been addressed for the two most used bundles.

- Results should be interpreted in more depth, to better help the GE owners improve them.

- In the mid-term, non-functional testing should be linked to other software development activities.

The complete report can be seen at [[7]].

A new assessment is foreseen before the end of the year, taking into account the latest work performed.

Code and guidelines

For the moment, the stress testing process cannot be integrated into a continuous integration software process, as such a process is not available in the project; the tests therefore have to be replicated manually after each release. In order to allow anyone (external users of GEs, GE owners wishing to replicate the behaviour of their GE, or quality experts assessing FIWARE GEs) to execute the same tests this task's team did, the guidelines, scripts and additional software needed to run the tests over and over are provided on GitHub. Feel free to do so by accessing [[8]].

Labelling Model

Having the results after testing the GEs is not enough to provide tangible value to GE users; it is also useful to have a reference framework in which to establish the quality level of each GE. For this purpose, a proven model, the EU energy device labelling scheme, was adapted and applied to FIWARE GEs.

A label is established per GE and per testing criterion, considering both functional and non-functional aspects, and all labels are consolidated into an overall label, which is the lowest label obtained across all categories. The cube below shows 7 rating levels, two categories of criteria, and 3 criteria per category: usability, reliability and efficiency for functional aspects; and stability, performance and scalability for non-functional aspects.

Labelling Cube example
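As a minimal sketch of the consolidation rule stated above (the overall label is the lowest label obtained across all criteria), consider the following Python snippet; the rating names and their ordering are assumptions for illustration.

 # Minimal sketch of the label consolidation rule: the overall label is the
 # lowest label obtained across all criteria. Rating names and their
 # ordering (worst to best) are assumptions for illustration.
 LEVELS = ["C", "B", "A", "A+", "A++", "A+++", "A++++"]  # 7 rating levels

 def overall_label(sub_labels):
     """Consolidate the per-criterion labels into the lowest (worst) one."""
     return min(sub_labels, key=LEVELS.index)

 # Example: three functional and three non-functional criteria.
 labels = {"usability": "A+", "reliability": "A", "efficiency": "A++",
           "stability": "A", "performance": "B", "scalability": "A+"}
 print(overall_label(labels.values()))  # -> B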

The Catalogue has been modified to insert the overall label for each GE (the average value of the other labels), expanding and displaying all sub-labels when the overall label is clicked.

Orion labels in Catalogue

Resources

  1. Mailing list: https://lists.fiware.org/listinfo/fiware-qualityassurance
  2. GitHub:
    1. https://github.com/Fiware/test.NonFunctional
    2. https://github.com/Fiware/test.Functional