FIWARE.OpenSpecification.WebUI.3D-UI - FIWARE Forge Wiki


Name: FIWARE.OpenSpecification.WebUI.3D-UI
Chapter: Advanced Web UI Architecture
Catalogue-Link to Implementation: XML3D
Owner: DFKI, Torsten Spieldenner (DFKI)



Within this document you find a self-contained open specification of a FIWARE generic enabler. Please also consult the FIWARE Product Vision, the website at http://www.fiware.org, and related pages in order to understand the complete context of the FIWARE platform.


  • Copyright © 2013 by DFKI

Legal Notice

Please check the following Legal Notice to understand the rights to use these specifications.


With the advent of WebGL, most web browsers gained native 3D graphics support. WebGL gives low-level access to the graphics hardware, suitable for graphics experts, but there is no way to describe interactive 3D graphics at a higher abstraction level using web technologies such as the DOM, CSS, and events. 3D-UI, as provided with its reference implementation XML3D, is a proposed extension to HTML5 that fills this gap and gives web developers an easy way to create interactive 3D web applications.

Basic Concepts

3D-UI provides an extension to HTML5 for declarative 3D content, represented as a scene-graph-like structure inside the DOM. All nodes within this graph are also nodes in the website's DOM tree and can be accessed and changed via JavaScript like any other DOM element. HTML events can be registered on these DOM nodes just as on familiar HTML elements. Resources such as mesh data can be stored externally in any format (e.g. JSON, XML, or binary) and referenced by URL. 3D-UI is designed to work efficiently with modern GPUs and graphics APIs (such as OpenGL/WebGL) but still tries to stay independent of the rendering algorithm.

Complex computations on 3D-UI elements can be defined with a declarative dataflow approach. Such computations include, for example, skinned meshes and key-frame animations. In these cases, the current key frame takes the role of a data source in the graph, whereas mesh transformations are sinks of the dataflow. Thus, changing the value of a key frame changes the posture of a mesh, and a continuous change of the key frame over time results in an animated mesh. In the reference implementation XML3D, this dataflow is implemented by Xflow.

Both XML3D and Xflow are available as polyfill implementations, i.e. all functionality needed to interpret XML3D nodes in a website and create a 3D view from them is provided entirely in JavaScript. This makes using XML3D with Xflow as easy as including the respective script files in the web application.

Scene Root Element

The scene root element defines both the root of the scene's scene graph and the actual rendering area in which this scene will be displayed in the browser. It can be placed at an arbitrary point inside the body element of an HTML page. In addition, it can also be used inside XML documents for external resources connected to the HTML page. Inside XML documents, the standard XML namespace rules apply to indicate which elements describe 3D-UI content.

  • Rendering area: The root element defines the dimension and the background appearance. Dimensions can either be defined by the attributes height and width, or - if those attributes are not given - the element can be sized arbitrarily by style properties via the inline style attribute. The background of the rendering area can be defined using the CSS2 Background properties. The initial value of the background is transparent.
  • Scene graph and transformation: The root element is also the root node of the scene's graph. It defines the world coordinate system of the scene. All transformations defined by group child nodes are local coordinate systems relative to the world coordinate system.
  • Views: The initial view to the scene is defined by a reference to the active view, as defined by the activeView attribute within the root element. If no activeView is defined, the renderer should set the activeView reference to the first applicable child node of the xml3d element. If the reference is not valid, or if there is no applicable view node as child node of the root element, the renderer will not render the scene. In this case the rendering area should be filled with an error image. A script can set and change the activeView reference during runtime. If the activeView reference changes or gets deleted, the rules above are applied again.
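The activeView resolution rules above can be sketched as a small helper. This is a hypothetical illustration (the function and the plain-object node model are not part of the XML3D API):

```javascript
// Hypothetical sketch of the activeView resolution rules.
// "scene" is a plain object standing in for the xml3d root element;
// children are listed in document order.
function resolveActiveView(scene) {
  // 1. If an activeView reference is set, it must point at a valid view node;
  //    otherwise the scene is not rendered (an error image is shown instead).
  if (scene.activeView) {
    var referenced = scene.children.find(function (c) {
      return c.tag === "view" && "#" + c.id === scene.activeView;
    });
    return referenced || null; // null -> do not render
  }
  // 2. No activeView set: fall back to the first applicable view child.
  return scene.children.find(function (c) { return c.tag === "view"; }) || null;
}
```

Since a script can change activeView at runtime, these rules are simply re-applied whenever the reference changes or is deleted.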

Data Nodes

Data nodes combine multiple named value elements and can be referenced by, and contained in, data containers.

The elements data, mesh, shader, and lightshader are data containers that combine all contained value elements (int, float, float2, float3, float4, float4x4, bool, and texture) into a data table - a map with the name attribute of the value element as a unique key and the content of the value element as value. Value elements can be direct children of the data container or part of another data element that is either a child of the data container or referred via the src attribute.

In case multiple value elements with the same name are part of a data container, only one key-value-pair is included into the resulting named data table, according to the following rules:

  • If the data container refers to a data element via src, all child elements are ignored and the data table of the referred data element is reused directly
  • A name-value pair of a child value element overrides a name-value pair with the same name of a child data element
  • A name-value pair of a later child value element overrides a name-value pair with the same name of a former child value element
  • A name-value pair of a later child data element overrides a name-value pair with the same name of a former child data element
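These override rules can be summarized in a small sketch. The helper function and the plain-object representation of child elements are hypothetical, used only to make the precedence order explicit:

```javascript
// Hypothetical sketch of how a data container builds its named data table.
// Children are given in document order; each is either a value element
// ({kind: "value", name, content}) or a data element ({kind: "data", table}).
// A resolved src reference is modeled as {table: {...}}.
function buildDataTable(container) {
  // Rule 1: a src reference makes all child elements irrelevant.
  if (container.src) return container.src.table;

  var table = {};
  // Later data children override former data children...
  container.children.forEach(function (child) {
    if (child.kind === "data") {
      for (var name in child.table) table[name] = child.table[name];
    }
  });
  // ...and value children override data children (later value wins).
  container.children.forEach(function (child) {
    if (child.kind === "value") table[child.name] = child.content;
  });
  return table;
}
```

Processing all data children first and all value children second, each in document order, reproduces exactly the four rules above.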

Grouping Elements

A grouping element is a node that combines several scene elements (meshes, lights, views ... ) to a group with transformation capabilities and surface shader assignment.

  • Transformation: The group element defines a coordinate system for its children that is relative to the coordinate systems of its ancestors. The element's local coordinate system is defined as a 4x4 matrix. This matrix is determined from the CSS 'transform' property as defined in the CSS 3D Transforms Module Level 3 and from the reference to an element that can provide a transformation via the 'transform' attribute. The local matrix of a group node is calculated as the product of the matrix derived from the CSS 'transform' property and the matrix provided by the referenced transform element.
  • Surface material: The material attribute of a group element defines the surface shading of all its children. The shading defined by parent nodes is overridden, while descendant group nodes can in turn override the shading state again.

Mesh Elements

A geometry node that describes the shape of a polyhedral object in 3D.

This is a very generic description of a 3D mesh: it clusters a number of data fields and binds them to a certain name. The interpretation of these data fields is the job of the currently active shader. Only connectivity information is required to build the primitives defined by the type attribute:

  • Triangles: An element with the name index is required; the data type of the bound element has to be evaluable to unsigned int. Every three entries in this field compose one triangle, so the number of field entries should be a multiple of 3. If it is not, the last entry or the last two entries are ignored. All other fields should have at least as many tuples as the largest value in the index field.
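The truncation rule can be illustrated with a short sketch (a hypothetical helper, not part of the XML3D API):

```javascript
// Hypothetical sketch: build triangles from a flat index field,
// ignoring the last one or two entries if the length is not a multiple of 3.
function buildTriangles(index) {
  var triangles = [];
  var usable = index.length - (index.length % 3); // drop leftover entries
  for (var i = 0; i < usable; i += 3) {
    triangles.push([index[i], index[i + 1], index[i + 2]]);
  }
  return triangles;
}
```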

Asset Elements

A grouping node for instantiated geometry. It is not rendered in the created output but is used to predefine complex structured models. It has transformation capabilities and can be provided with shader assignments.

  • Transformation : The asset element defines a coordinate system for its children that is relative to the coordinate systems of its ancestors. See also transformation of Group Elements
  • Surface Shader : The shader attribute of an asset element defines the surface shading of all its children, similar to surface shader for Group Elements

Assetmesh Elements

A geometry node that describes geometry, similar to a Mesh Element, in the scope of an asset. Assetmesh geometry is not rendered unless it is instantiated by a Model Element.

  • Transformation : Coordinate system of the instantiated geometry relative to the ancestor assets' coordinate systems
  • Surface Shader : The shader attribute defines the surface shader that should be used for the particular assetmesh

Model Elements

Instancing elements that create an instance of a previously defined asset. All assetmeshes defined within the scope of the assets instantiated by the models are rendered with respect to the accumulated transforms and shading defined by the particular Asset Elements. Model elements can contain the instantiated assets directly as child nodes, or specify the asset as source via an id reference.

  • Transformation : Coordinate system of the instantiated asset geometry relative to the transformation defined by ancestors

Transform Elements

A general geometric transformation element that allows a transformation matrix to be defined using five easily understood entities.

The center attribute specifies a translation offset from the origin of the local coordinate system (0,0,0). The rotation attribute specifies a rotation of the coordinate system. The scale field specifies a non-uniform scale of the coordinate system. Scale values may have any value: positive, negative (indicating a reflection), or zero. A value of zero indicates that any child geometry shall not be displayed. The scaleOrientation specifies a rotation of the coordinate system before the scale (to specify scales in arbitrary orientations). The scaleOrientation applies only to the scale operation. The translation field specifies a translation to the coordinate system.
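These five entities compose into one matrix in a fixed order. Assuming XML3D follows the X3D/VRML Transform semantics here (an assumption based on those specifications, since this document does not spell the formula out), the resulting matrix is M = T · C · R · SR · S · SR⁻¹ · C⁻¹. A plain-JavaScript sketch with row-major 4x4 matrices:

```javascript
// Sketch of the X3D-style composition the transform element is modeled after
// (assumption: M = T * C * R * SR * S * inverse(SR) * inverse(C)).
// Matrices are flat 16-element arrays in row-major order; points are [x, y, z].
function mat4mul(a, b) {
  var r = new Array(16);
  for (var i = 0; i < 4; i++)
    for (var j = 0; j < 4; j++) {
      var s = 0;
      for (var k = 0; k < 4; k++) s += a[i * 4 + k] * b[k * 4 + j];
      r[i * 4 + j] = s;
    }
  return r;
}
function translation(v) {
  return [1, 0, 0, v[0], 0, 1, 0, v[1], 0, 0, 1, v[2], 0, 0, 0, 1];
}
function scaling(v) {
  return [v[0], 0, 0, 0, 0, v[1], 0, 0, 0, 0, v[2], 0, 0, 0, 0, 1];
}
var IDENTITY = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1];

// rotation (r) and scaleOrientation (sr) are passed as ready-made matrices;
// the caller supplies the inverse of sr in this simplified sketch.
function transformMatrix(t, c, r, sr, srInv, s) {
  var m = translation(t);          // translation
  m = mat4mul(m, translation(c));  // center
  m = mat4mul(m, r);               // rotation
  m = mat4mul(m, sr);              // scaleOrientation
  m = mat4mul(m, scaling(s));      // scale
  m = mat4mul(m, srInv);           // inverse scaleOrientation
  m = mat4mul(m, translation([-c[0], -c[1], -c[2]])); // inverse center
  return m;
}
function apply(m, p) {
  return [
    m[0] * p[0] + m[1] * p[1] + m[2] * p[2] + m[3],
    m[4] * p[0] + m[5] * p[1] + m[6] * p[2] + m[7],
    m[8] * p[0] + m[9] * p[1] + m[10] * p[2] + m[11]
  ];
}
```

The center entry thus lets scales (and rotations) pivot around an arbitrary point instead of the local origin.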

Material Elements

The material element describes a surface material for a geometry.

The material element connects arbitrary shader attributes with some shader code. The shader code is referenced with the script reference. The shader attributes are bound to the shader using the bind mechanism.

The URI syntax is used to define the shader script. This can be either a URL pointing to a script location in- or outside the current resource or a URN pointing to a 3D-UI standard shader.

Lights and Lightshaders

The light element defines a light in the scene graph.

The light source location and orientation are influenced by the scene graph transformation hierarchy. The radiation characteristics of the light source are defined by the referenced lightshader (see the shader attribute). The light can be dimmed using the intensity attribute and can be switched on/off using the visible attribute. If global is set to 'false', the light source will only light the objects contained in its parent group or xml3d element. Otherwise it will illuminate all objects in its scene graph.

The light shader element describes a light source. The lightshader element connects arbitrary light shader attributes with a light shader code. The light shader code is referenced via the script reference. The shader attributes are bound to the shader using the data mechanism.
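Putting both elements together, a light declaration might look like the following snippet. The URN and the intensity parameter name follow common XML3D conventions but should be treated as illustrative here:

```xml
<!-- light shader describing the radiation characteristics -->
<lightshader id="pointLight" script="urn:xml3d:lightshader:point">
  <float3 name="intensity">1 1 1</float3>
</lightshader>

<!-- a dimmed, non-global light inside a group or the xml3d root -->
<light shader="#pointLight" intensity="0.8" global="false" visible="true" />
```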

Texture Elements

Sets states defining how to sample a texture from an image and apply it to a shape.

The texture source and its dimensions are defined by the texture element's children. The states defining how to apply the texture are set via the texture element's attributes. Use the attributes to influence

  • the dimensions of the texture (type)
  • how the texture is applied if texture coordinates fall outside the range 0.0 to 1.0 (wrapS, wrapT, wrapU)
  • how to apply the texture if the area to be textured has more or fewer pixels than the texture (filterMin, filterMag)
  • how to create minified versions of the texture (filterMip)
  • what border color to use, if one of the wrapping states is set to 'border'

See the attribute documentation for more details.
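A texture element combining several of these attributes might look as follows. The shader URN, attribute values, and file path are illustrative, not taken from this specification:

```xml
<shader id="textured" script="urn:xml3d:shader:phong">
  <texture name="diffuseTexture" wrapS="repeat" wrapT="clamp"
           filterMin="linear" filterMag="linear" filterMip="nearest">
    <img src="textures/wood.jpg" />
  </texture>
</shader>
```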

Note: As per the OpenGL ES spec, a texture will be rendered black if:

  • The width and height of the texture are not power-of-two and
  • The texture wrap mode is not CLAMP_TO_EDGE or
  • filterMin is neither NEAREST nor LINEAR

View Elements

The view node interface represents a camera in 3D world coordinates.

Generic Architecture

Apart from ease-of-use, an important goal of 3D-UI is to provide powerful and modern 3D graphics. Over time, 3D graphics APIs, such as OpenGL, evolved from fixed-function shading to a flexible rendering pipeline, based on programmable shaders combined with arbitrary buffers for data structures. Modern 3D graphics has relied on this flexibility ever since, achieving many visual effects based on custom shader code and data structures.

Many rendering systems support these capabilities via a fixed set of predefined features (such as character animations based on one specific skeleton format). If users of these systems want features not supported by default, they have to work with low-level technologies, such as custom shader code.

With the 3D-UI Generic Enabler, we provide the flexibility of modern graphics hardware through a generic, yet high-level architecture. Our goal is to support as many features of modern 3D graphics as possible while avoiding a large number of predefined features as well as reliance on low-level technologies.

A Web application that provides a 3D scene usually already contains a DOM manipulation layer. This layer can be realized by common JavaScript libraries, such as jQuery, which provide a number of functions to query parts of the DOM and modify their parameters efficiently. We extend the DOM layer by sets of elements that describe the 3D scene. These declarative 3D DOM elements form a scene graph structure, consisting of a number of base types (such as vectors) to describe the scene graph. Like other DOM elements, these elements can be accessed and modified both by existing DOM manipulation libraries and by CSS properties.

The client renderer layer accesses existing renderer techniques of a user's web browser, for example WebGL, to interactively display the 3D scene in the client.

Main Interactions

Defining a 3D scene in the web page

3D-UI objects can be defined directly in the website's source, using the respective tags to define meshes, mesh groups, transformations, shaders, and more. Once 3D-UI is linked to the website, the browser can interpret all tags introduced by 3D-UI. Just adding the root tag to the webpage creates a rendering canvas to display the scene. Meshes and groups of meshes are declared directly within this root tag. Mesh resources and textures can be referenced in the same way as, for example, images in conventional web pages. In this way, 3D content can easily be added to any webpage without deeper knowledge of 3D application programming.

The following example shows how to include a 3D model in a webpage. The data defining the mesh (such as vertex data and texture coordinates) is stored in an external file and referenced by the src attribute of the mesh tag. Its transformation is described by CSS-style transforms, whereas the shader used is declared by additional elements on the same webpage. The example uses the 3D-UI reference implementation XML3D.

<xml3d xmlns="http://www.xml3d.org/2009/xml3d" >
  <shader id="orange" script="urn:xml3d:shader:matte">
    <float3 name="diffuseColor" >1 0.5 0</float3>
  </shader>

  <view position="0 0 100" />
  <group shader="#orange" style="transform: translate3d(0px,-20px, 0px)" >
    <mesh src="resource/teapot.xml#mesh" />
  </group>
</xml3d>

Create and modify 3D scenes using JavaScript

Whereas describing a static scene via HTML in advance is a convenient way to quickly create 3D content, it is often not sufficient for complex 3D applications. In these, objects are likely to appear or disappear at runtime, and content has to be created in response to user input.

Like for other HTML elements as well, also 3D-UI elements can be generated dynamically by JavaScript. A newly created element is rendered as soon as it is added to the 3D-UI scene graph. Shaders and transformations can directly be applied once they are included in the website. The 3D-UI API provides functions to conveniently create new elements in JavaScript, so that they are handled and rendered correctly by the browser.

Existing elements are retrieved from the website's DOM tree by the common JavaScript functions. Not only adding elements but also changing existing elements' attributes triggers a re-rendering of the scene, so that newly created elements as well as modifications to existing ones are directly visible.

The following example shows how to add a new mesh to an existing 3D-UI scene, using XML3D: First, a new mesh element is created using the respective XML3D function to create new Elements. Using common JavaScript functions, the source file of the mesh vertex data is specified. Creating a group element that will contain the mesh, and adding the group element to the XML3D scene, will immediately render the new mesh.

// Create a new mesh element
var newMesh = XML3D.createElement("mesh");
newMesh.setAttribute("src", "teapot.xml#meshData");

// Create a new group element
var newGroup = XML3D.createElement("group");

// Append the mesh element to the group
newGroup.appendChild(newMesh);

// Append the new group element to an existing group
// ("myGroup" is an illustrative id of a group already in the scene)
document.getElementById("myGroup").appendChild(newGroup);

HTML event handler

HTML events are a common technique to interact with website content. To provide an easy way to bring interactivity also to 3D-UI scenes, event handlers like onclick or onmouseover can be registered on XML3D elements.


<group id="teapot" shader="#orange" style="transform: translate3d(0px,-20px, 0px)" onclick="changeShader();">
  <mesh src="resource/teapot.xml#mesh" />
</group>


function changeShader() {
    document.getElementById("teapot").setAttribute("shader", "#green");
}

Using Camera Controllers

Users usually don't want to be provided with just one look onto a 3D scene, but like to inspect objects from different directions, or, for more complex scenes, navigate through the displayed world. If a standard navigation mode is sufficient for your web application, you can include the camera controller that comes with xml3d.js:

  <script src="http://www.xml3d.org/xml3d/script/xml3d.js"></script>
  <script src="http://www.xml3d.org/xml3d/script/tools/camera.js"></script>

Processing Generic Data with Xflow

XML3D uses Xflow to process any generic data block inside the document. These capabilities are used to efficiently model any kind of expensive computation, e.g. for character animations, image processing and so on. The dataflow is declared in a functional way which allows for implicit parallelization e.g. by integrating the processing into the vertex shader.

Computations in a dataflow can consist of a number of computation steps, performed with predefined Xflow operators that, for example, add or subtract values, or morph or interpolate them. Intermediate results of a chain of computations can be used directly in subsequent computation steps.

A dataflow that performs a morphing operation can look like this:

<data compute="position = xflow.morph(position , posAdd2 , weight2)" >
  <data compute="position = xflow.morph(position , posAdd1 , weight1)" >
    <float3 name="position" >1.0 0.04 -0.5 ...</float3 >
    <float3 name="posAdd1" >0.0 1.0 2.0 ...</float3 >
    <float3 name="posAdd2" >1.0 0.0 0.0 ...</float3 >
    <float name="weight1" >0.35 </float >
    <float name="weight2" >0.6</float >
  </data >
</data >
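The effect of the two nested xflow.morph calls can be emulated in plain JavaScript. This sketch only mirrors the semantics (base + offset · weight, an assumption about the operator's definition), not the Xflow implementation:

```javascript
// Sketch of what xflow.morph computes per component:
// result[i] = base[i] + offset[i] * weight.
function morph(base, offset, weight) {
  return base.map(function (v, i) { return v + offset[i] * weight; });
}

// The nested data elements above chain two morphs over the same field
// (only the first tuple of each float3 field is shown here):
var position = [1.0, 0.04, -0.5];
var posAdd1 = [0.0, 1.0, 2.0];
var posAdd2 = [1.0, 0.0, 0.0];
position = morph(position, posAdd1, 0.35); // inner data element, weight1
position = morph(position, posAdd2, 0.6);  // outer data element, weight2
```

Changing either weight value re-evaluates the chain, which is what makes a continuous weight change produce an animation.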

With the <proto> element, whole dataflows can also be extracted into external documents to make them reusable.

Prototype declaration:

<proto id="doubleMorph" compute="pos = xflow.morph(pos , posAdd2 , w2)" >
  <data compute="pos = xflow.morph(pos , posAdd1 , w1)" >
    <int name="index" >0 1 2 ...</int>
    <float3 name="pos" >1.0 0.04 -0.5 ...</float3 >
    <float3 name="posAdd1" >0.0 1.0 2.0 ...</float3 >
    <float3 name="posAdd2" >0.0 1.0 2.0 ...</float3 >
    <float param="true" name="w1" ></float >
    <float param="true" name="w2" ></float >
  </data >
</proto >

Prototype instantiation:

<data id="instanceA" proto="#doubleMorph" >
  <float name="w1" >0</float >
  <float name="w2" >0.2</float >
</data >
<data id="instanceB" proto="#doubleMorph" >
  <float name="w1" >0.5</float >
  <float name="w2" >0</float >
</data >

Multiplatform support:

Xflow makes it possible to register operators with the same name for multiple platforms. The platform attribute can be used to force a data sequence or a dataflow to use a specific platform (js, gl or cl). If no platform attribute is defined, the default Xflow graph platform is used instead.

There is also a fallback feature: if, for example, WebCL is not available, Xflow automatically uses JavaScript instead of WebCL acceleration. Currently, all nodes in a dataflow chain are affected by the fallback, meaning that all nodes in the dataflow will be either JavaScript or WebCL nodes.
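The selection and fallback behaviour can be sketched as follows. This is hypothetical logic written for illustration, not the actual Xflow source:

```javascript
// Hypothetical sketch of Xflow platform selection with fallback.
// "available" lists the platforms the browser actually supports.
function selectPlatform(requested, available, defaultPlatform) {
  if (requested && available.indexOf(requested) !== -1) return requested;
  // Requested platform (e.g. "cl") unavailable: the whole dataflow chain
  // falls back together to plain JavaScript.
  if (requested) return "js";
  return defaultPlatform;
}
```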

Hardware Accelerated Parallel Processing with Xflow

Xflow enables effective parallel data processing on CPU or GPU devices by utilising WebCL. This is especially useful when Xflow is used to process big datasets; for smaller datasets the benefits are less visible.

Declaring a WebCL-based dataflow does not differ from an ordinary Xflow declaration. If WebCL is available on the user's computer (via Nokia's WebCL plugin for Firefox), WebCL-based data processing is utilised automatically by Xflow. Developers can also force the dataflow to use the WebCL platform by setting the optional platform attribute to "cl".

Below is an example of how to declare a WebCL-based dataflow and force the processing platform to be WebCL.


<dataflow id="blurImage" platform="cl">
  blur = xflow.blurImage(image, 9);
</dataflow>

However, in order to make WebCL-based data processing work, a WebCL Xflow operator needs to be registered in a separate JavaScript file.

Below is an example of registering a WebCL Xflow operator. This operator applies a blur effect on the input "image" texture parameter and outputs the processed "result" texture.

Xflow.registerOperator("xflow.blurImage", {
  outputs: [
    {type: 'texture', name: 'result', sizeof: 'image'}
  ],
  params: [
    {type: 'texture', source: 'image'},
    {type: 'int', source: 'blurSize'}
  ],
  platform: Xflow.PLATFORM.CL,
  evaluate: [
    "const float m[9] = {0.05f, 0.09f, 0.12f, 0.15f, 0.16f, 0.15f, 0.12f, 0.09f, 0.05f};",
    "float3 sum = {0.0f, 0.0f, 0.0f};",
    "uchar3 resultSum;",
    "int currentCoord;",
    "for(int j = 0; j < 9; j++) {",
    "  currentCoord = convert_int(image_i - (4-j)*blurSize);",
    "  if(currentCoord >= 0 && currentCoord < image_width * image_height) {",
    "    sum.x += convert_float_rte(image[currentCoord].x) * m[j];",
    "    sum.y += convert_float_rte(image[currentCoord].y) * m[j];",
    "    sum.z += convert_float_rte(image[currentCoord].z) * m[j];",
    "  }",
    "}",
    "resultSum = convert_uchar3_rte(sum);",
    "result[image_i] = (uchar4)(resultSum.x, resultSum.y, resultSum.z, 255);"
  ]
});

The WebCL Xflow operator is designed in a way that allows a developer to focus purely on the core WebCL kernel programming logic. Developers write their WebCL kernel code in the "evaluate" attribute of the operator, as shown in the example above. The code written there is based on the C language, and the methods defined in the WebCL specification can be used freely. However, no kernel function headers or input/output parameters need to be defined, as they are created automatically by the underlying Xflow architecture.

Xflow processes the "outputs" and "params" of the Xflow operator and allows them to be used directly in the WebCL kernel code. As seen in the example above, the input parameter "image" can be used directly in the code. An iterator for the first input parameter is also generated automatically and can be used safely in the code; for the "image" param the iterator variable is named "image_i". Some helper variables such as "image_height" and "image_width" are generated as well and can likewise be used in the evaluate code. Only texture type parameters have height and width helper variables, because textures and images are a special case: two-dimensional data stored in a one-dimensional buffer. All other input parameter types have a "length" helper variable, e.g. "parameterName_length", that gives the length of the input buffer.

Additionally, all WebCL application code needed for executing the WebCL kernel (such as passing kernel arguments to the WebCL program and defining proper WebCL workgroup sizes) is generated automatically. Thus, developers need no deep knowledge of WebCL programming, and basic programming skills are enough to produce kernel code for simple WebCL Xflow operators.

Below is an example of a very simple WebCL Xflow operator, used for grayscaling an input texture. Only three lines of kernel code are required.

Xflow.registerOperator("xflow.desaturateImage", {
  outputs: [
    {type: 'texture', name: 'result', sizeof: 'image'}
  ],
  params: [
    {type: 'texture', source: 'image'}
  ],
  platform: Xflow.PLATFORM.CL,
  evaluate: [
    "uchar4 color = image[image_i];",
    "uchar lum = (uchar)(0.30f * color.x + 0.59f * color.y + 0.11f * color.z);",
    "result[image_i] = (uchar4)(lum, lum, lum, 255);"
  ]
});
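For comparison, the same per-pixel computation can be written in plain JavaScript over a flat RGBA byte buffer. This is a hypothetical CPU-side equivalent for illustration, not code generated by Xflow:

```javascript
// Hypothetical CPU-side equivalent of the desaturate kernel above:
// each pixel occupies 4 bytes (RGBA); the pixel index corresponds to image_i.
function desaturate(rgba) {
  var result = new Uint8Array(rgba.length);
  for (var i = 0; i < rgba.length; i += 4) {
    // Truncate to an integer, mirroring the (uchar) cast in the kernel.
    var lum = Math.floor(0.30 * rgba[i] + 0.59 * rgba[i + 1] + 0.11 * rgba[i + 2]);
    result[i] = result[i + 1] = result[i + 2] = lum;
    result[i + 3] = 255; // opaque alpha
  }
  return result;
}
```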

Mapping Synchronization GE data to 3D-UI objects

To integrate 3D-UI with network synchronization, an abstract scene model is used in the client core. It is implemented as a JavaScript library and also provides the JS API for application developers to create and manipulate the scene. It uses the Entity-Component model (EC model for short), which is also used in the network protocol and on the server when implementing networked multi-user applications.

Objects in the EC model can be mapped directly to XML3D objects, which allows easy integration of the 3D-UI GE with the Synchronization GE. The API to create, remove, and listen for changes in the scene is documented in the Synchronization GE docs.

  • EC model entities are mapped to XML3D <group> elements
  • Scene objects like meshes or lights are mapped directly from the corresponding EC model component to an XML3D element
  • A single XML3D element without an encapsulating <group> is also an EC model entity with the corresponding component. That is, <mesh> is the same as <group><mesh/></group>: entity.mesh.
  • Transformations that are described in a reX EC Placeable component are mapped to XML3D transforms and referenced by the corresponding entity:
   <transform id="t">
   <group transform ="#t">
  • Camera components are mapped to XML3D <view> elements:
  • Hierarchies of entities in EC model are mapped to hierarchical <group> trees (nested tags) in XML3D
  • XML3D element attributes, for example <mesh src=x>, are represented in the scene model by the corresponding EC model attributes, e.g. entity.mesh.meshRef:
 entity.mesh.meshRef = 'my.mesh':
   <mesh src="my.mesh" />

Harmonizing the attribute names so that they are the same everywhere is under consideration.

  • XML element attributes that are unknown to the EC model vocabulary are mapped directly:
 entity.myattr = 1:
   <group myattr="1">
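The mapping rules above can be illustrated with a small serializer sketch. The helper function is hypothetical and only covers the entity, mesh component, and direct-attribute rules from the list:

```javascript
// Hypothetical sketch: serialize a minimal EC model entity to XML3D markup
// (entity -> <group>, mesh component -> <mesh src=...>, unknown attributes
// mapped directly onto the group element).
function entityToXml3d(entity) {
  var attrs = "";
  for (var key in entity) {
    if (key === "mesh") continue; // handled as a child element below
    attrs += " " + key + '="' + entity[key] + '"';
  }
  var body = entity.mesh ? '<mesh src="' + entity.mesh.meshRef + '" />' : "";
  return "<group" + attrs + ">" + body + "</group>";
}
```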

Basic Design Principles

  • Data definitions that are not subject to change during the application runtime, such as static objects or local transformations within the scene graph of a complex object, should be stored externally and referenced via external references.
  • The performance of declarative 3D scenes is mainly influenced by the size of the DOM tree used to represent the scene. To improve performance, geometry should be grouped in a way that keeps the DOM tree as small as possible. An efficient way of grouping geometry is, for example, to group all meshes that share the same shader.
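Following the second principle, meshes that share the same shader can be collected under one group. The shader id and mesh paths in this snippet are illustrative:

```xml
<group shader="#stone">
  <mesh src="models/wall.xml#mesh" />
  <mesh src="models/floor.xml#mesh" />
  <mesh src="models/pillar.xml#mesh" />
</group>
```

One group element with three mesh children keeps the DOM smaller than three separate groups, each carrying its own shader assignment.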


Detailed Specifications

The detailed specification is provided here:

Re-utilised Technologies/Specifications

3D-UI uses WebGL to render 3D graphics in the browser. Events and interactions with the 3D scene are implemented using HTML5 and JavaScript with jQuery.

Terms and definitions

This section comprises a summary of terms and definitions introduced in the previous sections. It intends to establish a vocabulary that will help to carry out discussions internally and with third parties (e.g., Use Case projects in the EU FP7 Future Internet PPP). For a summary of terms and definitions managed at the overall FI-WARE level, please refer to FIWARE Global Terms and Definitions.

Annotations refer to non-functional descriptions that are added to declarations of native types, to IDL interface definitions, or through global annotations at deployment time. They can be used to express security requirements (e.g. "this string is a password and should be handled according to the security policy defined for passwords"), QoS parameters (e.g. maximum latency), or others.
AR → Augmented Reality
Augmented Reality (AR)
Augmented Reality (AR) refers to the real-time enhancement of images of the real world with additional information. This ranges from the rough placement of 2D labels in the image to the perfectly registered display of virtual objects in a scene, photo-realistically rendered in the context of the real scene (e.g. with respect to lighting and camera noise).
IDL → Interface Definition Language
Interface Definition Language
Interface Definition Language refers to the specification of interfaces or services. They contain the description of types and function interfaces that use these types for input and output parameters as well as return types. Different types of IDL are being used including CORBA IDL, Thrift IDL, Web Service Description Language (WSDL, for Web Services using SOAP), Web Application Description Language (WADL, for RESTful services), and others.
Middleware is a software library that (ideally) handles all network-related functionality for an application. This includes the setup of connections between peers, the transformation of user data into a common network format and back, and the handling of security and QoS requirements.
PoI → Point of Interest
Point of Interest (PoI)
Point of Interest refers to the description of a certain point or 2D/3D region in space. It defines its location, attaches meta data to it, and defines a coordinate system relative to which additional coordinate systems, AR marker, or 3D objects can be placed.
Quality of Service (QoS)
Quality of Service refers to non-functional properties of a communication channel, such as robustness, guaranteed bandwidth, maximum latency, jitter, and many more.
Real-Virtual Interaction
Real-Virtual Interaction refers to Augmented Reality setups that additionally allow users to interact with real-world objects through virtual proxies in the scene that monitor and visualize the state of the real world and that can use services to change it (e.g. switching lights on and off via a virtual button in the 3D scene).
A Scene refers to a collection of objects, which are identified by type (e.g. a 3D mesh object, a physics simulation rigid body, or a script object). These objects contain typed and named data values (composed of basic types such as integers, floating point numbers, and strings), which are referred to as attributes. Scene objects can form a hierarchic (parent-child) structure. An HTML DOM document is one way to represent and store a scene.
Security is a property of an IT system that ensures confidentiality, integrity, and availability of data within the system or during communication over networks. In the context of middleware, it refers to the ability of the middleware to guarantee such properties for the communication channel, according to suitably expressed requirements and the guarantees offered by an application.
Security Policy
Security Policy refers to rules that need to be fulfilled before a network connection is established or for data to be transferred. It can for example express statements about the identity of communication partners, properties assigned to them, the confidentiality measures to be applied to data elements of a communication channel, and others.
Synchronization is the act of transmitting over a network protocol the changes in a scene to participants so that they share a common, real-time perception of the scene. This is crucial to implementing multi-user virtual worlds.
Type Description
Type Description in the context of the AMi middleware refers to the internal description of native data types or the interfaces described by an IDL. It contains data such as the name of a variable, its data type, the hierarchical relations between types (e.g. structs and arrays), its memory offset and alignment within another data type, and others. Type Descriptions are used to generate the mapping of native data types to the data that needs to be transmitted by the middleware.
Virtual Character
Virtual Character is a 3D object, typically composed of triangle mesh geometry, that can be moved and animated and can represent a user's presence (avatar) in a virtual world. Typically supported forms of animation include skeletal animation (where a hierarchy of "bones" or "joints" controls the deformation of the triangle mesh object) and vertex morph animation, where the vertices of the triangle mesh are directly manipulated. Virtual character systems may support composing the character from several mesh parts, for example separate upper body, lower body and head parts, to allow better customization possibilities.
WebGL → Web Graphics Library: a JavaScript API for rendering 3D and 2D computer graphics in web browsers.