
FIWARE.OpenSpecification.WebUI.CloudRendering R5

From FIWARE Forge Wiki

Name: FIWARE.OpenSpecification.WebUI.CloudRendering
Chapter: Advanced Web-based User Interfaces
Catalogue-Link to Implementation: Cloud Rendering
Owner: ADMINOTECH, Jonne Nauha



Within this document you will find a self-contained open specification of a FIWARE generic enabler. Please also consult the FIWARE Product Vision, the website at http://www.fiware.org and related pages in order to understand the complete context of the FIWARE platform.


Legal Notice

Please check the following Legal Notice to understand the rights to use these specifications.


In some cases it may be impossible, inconvenient, or not allowed to transmit and render user interface content on a client device. Client performance may be inadequate for achieving certain user experience goals, rendering remotely may save battery power, or, for IP reasons, a designer may decide that the content of the user interface must not be delivered to the client machine. In such cases it should be possible to render the UI on a server in the cloud, forwarding the display to, and receiving input from, a client in a remote location.

The goal of this GE is to provide a generic way to request, receive and control a video stream of a remote 3D application. The complexity and typically heavy performance requirements of a 3D application can thus be offloaded to a server, away from a low-end device that could not otherwise handle the rendering.

Basic Concepts

Web Service

This web service receives requests from clients for video streams. This service should have a generic API for requesting, controlling and closing video streams.

The service logic behind the API depends on the application. In a usual case this will require starting a process to handle the rendering and to actually serve the video stream to the end user. The web service needs to communicate connection information (host, port) for the end user to continue communications with the actual streaming server.
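As a concrete illustration, the connection-information handoff described above could be sketched as follows. Only the host and port fields come from this section; the JSON layout and the function name are assumptions for illustration, not part of the Cloud Rendering Open API Specification.

```javascript
// Hypothetical sketch of the Web Service handing connection information
// (host, port of the streaming server) back to an end user.
function makeConnectionInfo(host, port) {
  // Reject ports outside the valid TCP/UDP range.
  if (!Number.isInteger(port) || port <= 0 || port > 65535) {
    throw new Error("invalid port");
  }
  return JSON.stringify({ host: host, port: port });
}

// The client parses this reply and continues communications with the
// actual streaming server at info.host:info.port.
const reply = makeConnectionInfo("stream01.example.com", 8443);
const info = JSON.parse(reply);
```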


Streaming Server

Depending on the application, this renderer process may serve one or multiple end users with a video stream. Once the streaming server starts, it communicates a WebRTC port to the Web Service, which relays it back to the end user.

WebRTC will be utilized for live video/audio streaming and additionally its data channel can be used to send custom communications, e.g. input events from the client.
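For example, client input events could be serialized to JSON and pushed through the data channel. The exact payload layout is not mandated by the specification, so the field names below are assumptions:

```javascript
// Sketch: packaging a client input event for the WebRTC data channel.
// The { event, payload } layout is illustrative only; the spec leaves
// custom data channel communications to the implementation.
function makeInputEventMessage(event, payload) {
  return JSON.stringify({ event: event, payload: payload });
}

// In a browser this string would travel over an established RTCDataChannel:
//   dataChannel.send(makeInputEventMessage("mousemove", { x: 120, y: 48 }));
const msg = makeInputEventMessage("mousemove", { x: 120, y: 48 });
```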

Web Client

The client will be an example, for the web browser and JavaScript side, of how to communicate with the Web Service and then receive the video stream from the Streaming Server.

Generic Architecture

Architecturally, Cloud Rendering splits into three main components: the web service, web client(s) and renderer(s). The web service is the top-level service that acts as the WebSocket server and helps clients connect to a renderer. Once the WebRTC connection is established between the renderer and the client, video streaming and additional input events are communicated over the WebRTC connection.

The WebSocket protocol is defined in the Cloud Rendering Open API Specification page.

Main Interactions

Here is a simplified look at the interactions between the three parts in this GE.

Sender       Message       Receivers          Notes
// A new renderer is started
Renderer     Registration  Web Service
// A new client registers to the service
Client       Registration  Web Service        Client's peerId = "1"
// Web service assigns a renderer and the client to the same room
Web Service  RoomAssigned  Renderer & Client  "roomId" : "room one", "error" : 0
// Client wants to start the video stream
Client       Offer         Web Service
Web Service  Offer         Renderer
Renderer     Answer        Web Service        "receiverId" : "1"
Web Service  Answer        Client
// Client and renderer start a peer-to-peer video stream
Renderer     WebRTC        Web Service
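The registration and room-assignment steps in the table can be sketched as a small piece of service-side logic. The message names (Registration, RoomAssigned) and the roomId/error/peerId fields come from the table above; the data structures and function names are hypothetical:

```javascript
// Minimal sketch of the Web Service pairing a registered renderer with a
// registered client into one room. A real service tracks live WebSocket
// connections; here peers are plain objects for illustration.
let nextPeerId = 0;

function register(peers, type) {
  // Handle a Registration message: assign a peerId, as in the table
  // (the first registered client gets peerId = "1", and so on).
  const peer = { peerId: String(++nextPeerId), type: type };
  peers.push(peer);
  return peer;
}

function assignRoom(renderer, client, roomId) {
  // Both parties receive the same RoomAssigned message.
  return { message: "RoomAssigned", roomId: roomId, error: 0,
           receivers: [renderer.peerId, client.peerId] };
}

const peers = [];
const renderer = register(peers, "renderer");
const client = register(peers, "client");
const assigned = assignRoom(renderer, client, "room one");
```

After RoomAssigned, the Offer/Answer rows of the table would be relayed between the two peers in the same room until the WebRTC connection is up.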

Basic Design Principles

  • The protocol should NOT force any application logic. It should be a generic service that registers renderers and clients and joins them together for p2p communications.
  • The protocol should allow any kind of application-level messaging to be implemented within the spec. The "Application" channel messages are meant for this; the data in these messages is free for the GE implementation to use. If you implement all three components, you can send custom messages from any component to the renderer and the service. However, if you only wish to do client-to-client custom messaging, you can use the reference implementation of the service and renderer.
  • Similarly, any application-specific input handling should be done via application-level messaging. Input is too complex to fit into a generic works-in-all-apps mold; it is best left for the implementation to handle. We provide structured application-level messaging directly in the protocol.
  • The protocol needs to support delayed/lazy startup of renderer processes. We cannot assume all renderers are always running; they should be startable on demand.
  • The system needs to scale across multiple physical machines on the network. You can have load balancing in front of multiple Cloud Rendering Services to maintain a sufficient number of active/open WebSocket connections.
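An application-level message under the second principle might be wrapped as follows. Only the idea of a free-form "Application" channel comes from the spec; the envelope shape and the chat example are assumptions:

```javascript
// Sketch of a free-form Application channel message. The spec leaves the
// "data" contents entirely to the GE implementation; this envelope is a guess.
function makeApplicationMessage(data) {
  return { channel: "Application", data: data };
}

// Example: a hypothetical custom client-to-client chat message.
const chat = makeApplicationMessage({ type: "chat", text: "hello" });
```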

Detailed Specifications

Re-utilised Technologies/Specifications

  • WebRTC
    • The WebRTC library is a main component of the specification: WebRTC is used to establish a peer-to-peer connection and to stream the rendering results from the renderer to the clients. The web client will use the web browsers' existing deployed WebRTC support.
    • W3C Working Draft for WebRTC 1.0
    • Project homepage
  • realXtend Tundra
    • Tundra will be used to implement the reference solution's server plugins, which implement the video streaming to the end user. The work will be done and published as open source.
    • Project homepage

Terms and definitions

This section comprises a summary of terms and definitions introduced during the previous sections. It intends to establish a vocabulary that will help to carry out discussions internally and with third parties (e.g., Use Case projects in the EU FP7 Future Internet PPP). For a summary of terms and definitions managed at overall FI-WARE level, please refer to FIWARE Global Terms and Definitions

Annotations refer to non-functional descriptions that are added to declarations of native types, to IDL interface definitions, or through global annotations at deployment time. They can be used to express security requirements (e.g. "this string is a password and should be handled according to the security policy defined for passwords"), QoS parameters (e.g. max. latency), or others.
AR → Augmented Reality
Augmented Reality (AR)
Augmented Reality (AR) refers to the real-time enhancement of images of the real world with additional information. This can reach from the rough placement of 2D labels in the image to the perfectly registered display of virtual objects in a scene that are photo-realistically rendered in the context of the real scene (e.g. with respect to lighting and camera noise).
IDL → Interface Definition Language
Interface Definition Language
Interface Definition Language refers to the specification of interfaces or services. IDL documents contain the description of types and of function interfaces that use these types for input and output parameters as well as return types. Different types of IDL are in use, including CORBA IDL, Thrift IDL, Web Service Description Language (WSDL, for Web Services using SOAP), Web Application Description Language (WADL, for RESTful services), and others.
Middleware is a software library that (ideally) handles all network related functionality for an application. This includes the setup of connection between peers, transformation of user data into a common network format and back, handling of security and QoS requirements.
PoI → Point of Interest
Point of Interest (PoI)
Point of Interest refers to the description of a certain point or 2D/3D region in space. It defines its location, attaches meta data to it, and defines a coordinate system relative to which additional coordinate systems, AR marker, or 3D objects can be placed.
Quality of Service (QoS)
Quality of Service refers to properties of a communication channel that are non-functional, such as robustness, guaranteed bandwidth, maximum latency, jitter, and many more.
Real-Virtual Interaction
Real-Virtual Interaction refers to Augmented Reality setups that additionally allow users to interact with real-world objects through virtual proxies in the scene that monitor and visualize the state of the real world and that can use services to change the state of the real world (e.g. switching lights on and off via a virtual button in the 3D scene).
A Scene refers to a collection of objects, which are identified by type (e.g. a 3D mesh object, a physics simulation rigid body, or a script object). These objects contain typed and named data values (composed of basic types such as integers, floating point numbers and strings) which are referred to as attributes. Scene objects can form a hierarchical (parent-child) structure. An HTML DOM document is one way to represent and store a scene.
Security is a property of an IT system that ensures confidentiality, integrity, and availability of data within the system or during communication over networks. In the context of middleware, it refers to the ability of the middleware to guarantee such properties for the communication channel, according to suitably expressed requirements and the guarantees offered by an application.
Security Policy
Security Policy refers to rules that need to be fulfilled before a network connection is established or for data to be transferred. It can for example express statements about the identity of communication partners, properties assigned to them, the confidentiality measures to be applied to data elements of a communication channel, and others.
Synchronization is the act of transmitting over a network protocol the changes in a scene to participants so that they share a common, real-time perception of the scene. This is crucial to implementing multi-user virtual worlds.
Type Description
Type Description in the context of the AMi middleware refers to the internal description of native data types or of the interfaces described by an IDL. It contains data such as the name of a variable, its data type, the hierarchical relations between types (e.g. structs and arrays), its memory offset and alignment within another data type, and others. Type Descriptions are used to generate the mapping of native data types to the data that needs to be transmitted by the middleware.
Virtual Character
Virtual Character is a 3D object, typically composed of triangle mesh geometry, that can be moved and animated and can represent a user's presence (avatar) in a virtual world. Typically supported forms of animation include skeletal animation (where a hierarchy of "bones" or "joints" controls the deformation of the triangle mesh object) and vertex morph animation, where the vertices of the triangle mesh are directly manipulated. Virtual character systems may support composing the character from several mesh parts, for example separate upper body, lower body and head parts, to allow better customization possibilities.
WebGL → Web Graphics Library, a JavaScript API for rendering 3D and 2D computer graphics in a web browser.