
FI-WARE Cloud Hosting


Overview

Cloud computing is nowadays a reality. Cloud hosting companies, which can be considered as a particular type of FI-WARE Instance Providers, are already delivering on the promise of the Cloud paradigm. They own and manage large IT infrastructures and offer their use as a service on a pay-as-you-go model.

Cloud hosting is particularly appealing to SMEs and start-ups wanting to offer some new and innovative service over the Internet. It offers SMEs general-purpose computing resources that they can consume (and pay for) according to their needs and capabilities, e.g. they can start small and grow as the service they offer becomes successful. All this is achievable without the need for a large initial investment in infrastructure. This in turn gives SMEs the possibility to price their offerings competitively, since there is no need to recover a huge initial capital investment in infrastructure and, in addition, ongoing operational expenses are lowered thanks to the pay-as-you-go model.

Today, there are two clear trends in the cloud computing market:

  1. growing adoption of the full cloud computing paradigm, as exemplified by public clouds; and,
  2. the appearance of private clouds, i.e., the adoption of cloud ideas and technologies internally within companies. The latter approach is especially appealing for large companies that already operate large data center infrastructures. On one hand, they are still reluctant to fully adopt the cloud hosting model and rely solely on external providers for their IT needs (due to various factors such as security and privacy as well as performance and availability guarantees). On the other hand, they do want to benefit from the advantages the cloud computing paradigm introduces in terms of cost and flexibility. This trade-off also gives rise to a hybrid approach in which private clouds incorporate facilities to burst workload onto public clouds (cloudbursting). This approach is not only fundamental for large companies but is also gaining momentum among SMEs, who need to gain the necessary confidence in the Cloud promise prior to fully outsourcing their computing infrastructures.

However, as the IT infrastructure moves from being owned and managed by the service providers to being hosted on the cloud, the cloud hosting companies become a critical part of their customers’ businesses. This creates a dependency relationship that could even lead to unhealthy and undesirable situations, such as vendor lock-in, if the necessary safeguards in terms of technology, market offerings and warranties are not in place.

Moreover, the cloud hosting market is still limited to a few, very dominant, large companies with proprietary solutions. The lack of a competitive and open market for cloud hosting providers, in turn, slows down the adoption of the cloud paradigm and the economic benefits embodied in it. For the success of the Internet-based service economy it is crucial that cloud hosting does not become a market limited to a few strong players, and that future cloud hosting is based on open standards and supports interoperability and portability.

The FI-WARE project focuses a great part of its efforts on making sure that these standards materialise and on facilitating their adoption by providing open specifications and reference implementations. This standards-based and open approach will cover the fundamental technologies on which the cloud paradigm is based, such as virtualization, as well as new emerging technologies that will differentiate FI-WARE also in terms of the functionality offered.

In the cloud hosting paradigm there are two main players: (1) cloud hosting providers, i.e., FI-WARE Instance Providers that own physical infrastructure and use it to host compute processes or applications; and (2) cloud hosting users, i.e., organizations or individuals that own the compute processes or applications but do not own (or do not want to own) the physical infrastructure to run them, hence, they lease this infrastructure from cloud hosting providers. In a highly distributed cloud model, the physical infrastructure is deployed very close to the end-user and can be controlled by the network and platform provider or the end-user directly.

According to the needs of clients there are three well-defined Cloud Service offerings [NIST]:

  • Infrastructure as a Service (IaaS): in this model the client rents raw compute resources such as storage, servers or network, or some combination of them. These resources are made available over the Internet (public IaaS Clouds) or over Intranets (private IaaS Clouds), with capacities (storage space, CPUs, bandwidth) that scale up or down to match the real demand of the applications that use them. The advantage of this model is that it enables users to set up their own personalized runtime system architecture; however, it does so at the price of still requiring system admin skills from the user.
  • Platform as a Service (PaaS): in this model the clients, typically application developers, follow a specific programming model and a standard set of technologies to develop applications and/or application components, and then deploy them in a virtual application container or set of virtual application containers they rent. How these virtual application containers map onto a concrete runtime architecture is hidden from the application developer, who doesn’t need strong admin skills. However, this comes at the price of losing part of the control over how the system runtime architecture is designed. This model enables fast development and deployment of new applications and components. Usually, in addition to hosting, PaaS providers also offer programming utilities and libraries that expedite development and encourage reuse.
  • Software as a Service (SaaS): in this model the clients, typically end-users, rent the use of a particular hosted Final Application, e.g., word processing or CRM, without needing to install and execute the application on equipment owned by the client (consumers) or assigned to the client (employees in a company). Applications delivered following a SaaS model are always available, so clients are relieved of maintenance tasks (including upgrading, configuration, and management of high-availability and security aspects, etc.). The computing resources needed to run applications on client-owned/assigned equipment are minimized, since the applications are hosted on the Internet/Intranet.

While it is possible to implement these models independently of each other, i.e., SaaS without PaaS or IaaS, or PaaS without IaaS, the advantages offered by each of these models to its potential users are such that we strongly believe that the vast majority of Future Internet services will be based on a stacked implementation of these models, as shown in the XaaS Stacked Model figure below.


Image:The_XaaS_stacked_model.jpg
The XaaS stacked model


In our vision, Application Providers willing to offer applications following a SaaS model will typically opt to implement this model using the services of a PaaS or an IaaS Cloud provider. Usage of a PaaS Cloud provider will mostly apply to Application Providers who a) have decided to adopt a defined standard platform for the development of applications and b) wish to focus their skills on programming and application architectural aspects, without needing to hire experts who can deal with the design and fine tuning of large system runtime architectures. However, Application Providers may have special needs that are not properly covered by the PaaS programming model and tools, or may wish to design and configure the system runtime architecture linked to their applications themselves, building on raw computing resources from an IaaS provider. Similarly, PaaS providers may rely on IaaS providers for leasing infrastructure resources on demand. In this context a cloud hosting provider may serve the role of a PaaS Cloud provider, or the role of an IaaS Cloud provider, or both.

The Cloud Hosting chapter in the FI-WARE Reference Architecture will comprise the Generic Enablers that can serve the needs of companies that may need IaaS Cloud hosting capabilities, PaaS Cloud hosting capabilities or both, meeting the requirements for the provision of a cost-efficient, fast, reliable, and secure computing infrastructure “as a Service”.

The basic principle to achieve a cost-efficient infrastructure is the ability to share the physical resources among the different users, but sharing needs to be done in a way that ensures isolation (access, control and performance) between these users. These seemingly contradictory requirements can be met by an extensive use of virtualisation technology.

Virtualization capabilities are the cornerstone of any IaaS Cloud Hosting offering because they enable both high utilization and secure sharing of physical resources, and create a very flexible environment where logical computation processes are separated and independent from the physical infrastructure. FI-WARE’s base functionalities will include a virtualization layer that will enable secure sharing of physical resources through partitioning, support migration without limitations, and provide a holistic system-wide view and control of the infrastructure. Basic management of the resulting virtualised infrastructure will automate the lifecycle of any type of resource by providing dynamic provisioning and de-provisioning of physical resources, pool management, provisioning, migration and de-provisioning of virtual resources, on-going management of virtual capacity, monitoring etc.

Virtualisation technologies, such as hypervisors or OS containers, enable partitioning of a physical resource into virtual resources that are functionally equivalent to the physical resource. Moreover, virtualisation creates a very flexible environment in which logical functions are separated from the physical resources. IaaS Cloud hosting providers can leverage this capability to further enhance their business. For example, live-migration of virtual resources, i.e., the capability of moving a virtual resource from one physical resource to another while it remains functional, enables cloud hosting providers to optimize resource utilization. However, running different workloads on a shared infrastructure, hosted by a 3rd party, introduces new challenges related to security and trust. FI-WARE will address these challenges by leveraging generic enablers defined in the FI-WARE Security chapter.

In addition to virtualisation and its management, cloud hosting providers need a layer of generic enablers that deal with the business aspects of optimally running their operation. Existing IaaS Cloud Hosting technologies and commercial offerings represent a big step forward in terms of facilitating management of compute infrastructure by completely virtualising the physical resources used by software, but they still do not fully address all the needs of both IaaS Cloud Hosting Providers and Application and Service Providers. IaaS Cloud Hosting Providers need grouping and elasticity, policy-driven data centre optimisation and placement, billing and accounting, and more control over virtualised Network Resources. Application and Service Providers need the infrastructure management decisions to be driven directly by Service Level indicators, rather than by compute parameters as is the case today.

Typically, existing IaaS Cloud Hosting solutions are based on a centralised infrastructure, usually deployed on a few geographically distributed data centres. However, some Future Internet applications may require reduced latency and high bandwidth that this approach and current network realities cannot always provide. This becomes especially problematic when the users of the hosted applications and services are using their home broadband connections. Stricter privacy requirements that favour local-only storage of data may be an additional obstacle to the current approach, as they would place data even further away from the computational infrastructure. To address these challenges, FI-WARE will explore the possibility of extending the reach of the IaaS Cloud Hosting infrastructure to the edge of the networks by incorporating the Cloud Proxy, a device located at the home of an end user that can host part of the virtualised resources, applications and data, thereby keeping data closer to the user.

Application Providers may rent dynamic infrastructure resources from IaaS Cloud providers to deploy service components, but they are on their own in terms of coming up with the deployment architecture, managing and deploying enabling SW components, managing and maintaining the software stacks installed on each virtual machine, and controlling the scalability of the virtualised infrastructure resources. FI-WARE will build on top of robust virtualisation-based IaaS technologies to create a Platform as a Service offering that provides a higher level of abstraction for service provisioning, where the platform itself provides development tools, application containers, integrated technologies (libraries, APIs, utilities, etc.) and automatic scalability tools, allowing Application Providers to deploy applications by providing just the description of their Application Components. The delivery of standard interfaces and reference implementations for the above elements is within the scope of FI-WARE.

In order to simplify management of hosted resources FI-WARE will provide a self-service portal where Application and Service Providers will be able to select, configure, deploy and monitor their whole applications and services through graphical tools. Application Blueprints and Service Level Agreements will be used by Cloud Hosting providers to drive automatic provisioning and dynamic management of the virtualized resources.

Trust and, consequently, security concerns are among the top obstacles that hinder Cloud Computing adoption today. FI-WARE will work towards embedding security, privacy and isolation guarantees into all layers of its Cloud Hosting platform, which can be achieved through the use of standard security techniques (authentication, authorization, encryption, etc.) and partitioning technologies that guarantee isolation.

Cloud Hosting will be an integral part of the FI-WARE platform and, together with the Apps/Services Ecosystem, Data/Context Management Services, Internet of Things Services Enablement and Interfaces to the Network and Devices, will offer a complete solution for: application development that automatically resolves hosting, deployment and scalability; provision of the necessary interfaces and services so that applications can leverage the Internet of Things; intelligent connectivity throughout the stack to guarantee QoS; resolution of common needs like data storage and analysis, access to context and monetization; and delivery of applications through a rich ecosystem that enables the implementation of flexible business models and allows user-driven process creation and personalization.

Summarizing the above:

Building upon existing virtualization technologies, FI-WARE will deliver a next generation Cloud Stack that will be open, scalable, resilient, standardised, and secure, and will enable Future Internet applications by providing service-driven IaaS and PaaS functionalities and extending the reach of the cloud infrastructure to the edge of the networks, much closer to end users.


To better illustrate the FI-WARE proposition for Cloud Hosting let us take a look at a typical scenario for a cloud hosting company.

A start-up company has an idea for an innovative application to be offered as Software as a Service (SaaS). They can calculate pretty accurately how many servers and how much storage they will need to support a target number of end-users of their application, but given that this is a totally new application they cannot estimate whether they will reach this number or surpass it. To reduce the risk involved in a big initial investment in equipment, they decide to lease resources on demand from a cloud hosting provider. They require the cloud hosting provider to offer unlimited, on-demand and automated growth, i.e., they will start with a minimum set of resources, and the provider commits to automatically add more resources when the service reaches a particular load; when the load decreases, the additional resources will be released again automatically. They also need isolation and availability guarantees from the cloud hosting provider, and they want the flexibility to painlessly switch providers in case of breach of contract or a provider going out of business. Finally, they would prefer a provider that can give them raw compute resources for those tasks unique to their application, and that also supports composition and hosting of commonly used application components, for example a web-based user interface to their application.

FI-WARE offers to Cloud Hosting companies the tools needed to answer these requirements from their potential customers, in this case the start-up company in need of a flexible on-demand infrastructure.


The following figure illustrates the Reference Architecture for the Cloud Hosting chapter in FI-WARE, each box representing one of the Generic Enablers (GEs) that would be part of it.


Image:Cloud_Hosting_Reference_Architecture.jpg
Cloud Hosting Reference Architecture


Herein we provide a brief description of the role each GE plays and their major interfaces with other GEs:

  • IaaS Data Center Resource Management – this GE provides VM hosting capabilities to the user, and handles everything related to individual VMs and their resources – including compute, memory, network and block storage. This includes provisioning and life cycle management, capacity management and admission control, resource allocation and QoS management, placement optimization, etc.
  • IaaS Cloud-Edge Resource Management – this GE allows the application developer to design and deploy the application so that it can leverage resources located at the cloud edge, close to the end-user.
  • IaaS Service Management – this GE provides hosting of compound VM-based services, including their definition and composition, deployment and life cycle management, as well as monitoring and elasticity. This GE uses the IaaS Resource Management GE to handle individual VMs and their resources. This GE is also able to communicate with other clouds, in scenarios of cloud federations.
  • PaaS Management – this GE provides hosting of application containers, such as Web container, database instance, etc. It leverages IaaS underneath, to automate the lifecycle of the underlying infrastructure and OS stack.
  • Object Storage – this GE provides the capabilities to store and retrieve storage objects accompanied by metadata.
  • Monitoring – this GE will be responsible for collecting metrics and usage data of the various resources in the cloud.
  • CMDB – this GE will be responsible for storing the operational configuration of the Cloud environment, used by the various other GEs. Due to scalability requirements, it is likely to be implemented as a distributed service.
  • Data Warehouse – this GE will be responsible for storing the historical data of the different metrics and resource usage data of the Cloud environment, collected by the monitoring & metering GE and consumed by the SLO management GE (to monitor SLO compliance), as well as by the billing GE.
  • Metering & Accounting – this GE will be responsible for collecting and processing the data related to usage and monetization of cloud services (via an external Billing system, which is not part of this GE).

Each GE is described in more detail in Section 3.2.

Last but not least, there are two main users of Cloud Hosting GEs:

  • Cloud hosting provider: uses the provided capabilities to build a hosting offering, and to perform ongoing administration tasks
  • Cloud hosting user: e.g., an Application/Service Provider who uses the provided platform to develop and/or test and/or deploy their applications.

Self-service interfaces will be provided so that different types of users are able to interact with the entire FI-WARE cloud infrastructure in a common, unified fashion. These interfaces should adapt to different user mental models so that they are easy to use. Applying techniques common in “Web 2.0” can also help to make them more usable. It is foreseen that there will be different kinds of users with different levels of expertise, and adaptation to their expectations and needs should be a goal.

Our objective is that using this infrastructure is a positive, simple and easy experience for all the FI-WARE and other Future Internet users. This will be a key requirement regarding self-service interfaces. Different users have varying requirements in how they interact with Information Technology devices and services. As a result we foresee different types of support to satisfy these requirements. Amongst those are:

  • A portal,
  • A high-level toolkit that may be integrated with management or development tools, and
  • Scripts to automate tasks.

Direct access to the underlying APIs will also be offered should the support listed above be insufficient.


Generic Enablers

IaaS DataCenter Resource Management

Target usage

The IaaS DataCenter Resource Management GE provides the basic Virtual Machine (VM) hosting capabilities, as well as management of the corresponding resources within the DataCenter that hosts a particular FI-WARE Cloud Instance.

The main capabilities provided for a cloud hosting user are:

  • Browse VM template catalogue and provision a VM with a specified virtual machine image
  • Manage life cycle of the provisioned VM
  • Manage network and storage of the VM
  • Resource monitoring of the VM
  • Resiliency of the persistent data associated with the VM
  • Manage resource allocation (with guarantees) for individual VMs and groups of VMs
  • Secure access to the VM

For a cloud hosting provider, the following capabilities are provided:

  • Resource optimization and over-commit (aimed at increasing the utilization and decreasing the hardware cost)
  • Capacity management and admission control (making it easy to monitor and control the capacity and utilization of the infrastructure)
  • Multi-tenancy (support isolation between VMs of different accounts)
  • Automation of typical admin tasks (aimed at decreasing the admin cost)
  • Resiliency of the infrastructure and of the management stack (aimed at reducing outage due to hardware failures)

GE description

In order to achieve scalability, the infrastructure is managed in a hierarchical manner, as shown in the figure below. At the top, the DataCenter-wide Resource Manager (DCRM) is responsible for surfacing the functions and capabilities required for the provisioning and life-cycle management of VMs and associated resources, as specified above. At the bottom, the Node Manager is responsible for managing the resources provided by individual physical nodes. In between, a number of System Pools may be defined, typically encapsulating homogeneous and physically co-located pools of resources (compute, storage, and network). Each system pool has some self-management capabilities, provided by a System Pool Resource Manager (SPRM), which exposes to the DCRM an abstracted view of its resources, as if it were a 'mega-node', while delegating the operations on individual resources to the next-level SPRM (if there are multiple levels of System Pools) or to the corresponding Node Manager.
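To make the hierarchy more concrete, the following Python sketch shows how an SPRM might expose an aggregated 'mega-node' view upward while delegating individual operations downward. All class and method names are illustrative assumptions, not part of any actual GE interface.

```python
from abc import ABC, abstractmethod

class ResourceManager(ABC):
    """Common interface shared by the DCRM, SPRMs and Node Managers."""
    @abstractmethod
    def capacity(self) -> dict: ...
    @abstractmethod
    def provision_vm(self, spec: dict) -> str: ...

class NodeManager(ResourceManager):
    """Manages the resources of one physical node."""
    _next_id = 0
    def __init__(self, cpus: int, mem_gb: int):
        self.free = {"cpus": cpus, "mem_gb": mem_gb}
    def capacity(self) -> dict:
        return dict(self.free)
    def provision_vm(self, spec: dict) -> str:
        for k, v in spec.items():          # reserve the requested resources
            self.free[k] -= v
        NodeManager._next_id += 1
        return f"vm-{NodeManager._next_id}"

class SystemPoolRM(ResourceManager):
    """Exposes a pool of co-located nodes upward as one 'mega-node'."""
    def __init__(self, children: list):
        self.children = children           # nested SPRMs or Node Managers
    def capacity(self) -> dict:
        agg = {"cpus": 0, "mem_gb": 0}     # abstracted, aggregated view
        for child in self.children:
            for k, v in child.capacity().items():
                agg[k] += v
        return agg
    def provision_vm(self, spec: dict) -> str:
        for child in self.children:        # delegate to a child with headroom
            cap = child.capacity()
            if all(cap[k] >= v for k, v in spec.items()):
                return child.provision_vm(spec)
        raise RuntimeError("pool exhausted")

pool = SystemPoolRM([NodeManager(16, 64) for _ in range(4)])
print(pool.capacity())                     # the DCRM sees one 64-CPU mega-node
print(pool.provision_vm({"cpus": 8, "mem_gb": 16}))
```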


Image:GE_Architecture_Data_Center_Resource_Management.jpg
GE Architecture: Data Center Resource Management


Across the three management layers (node, pool, data center) and three resource types (compute, storage, network), the following resource management functions and capabilities are provided:

  • Request orchestration and dispatching
  • Discovery & inventory
  • Provisioning and life cycle management
  • Capacity management & admission control
  • Placement optimization
  • QoS management & resource allocation guarantees
  • Resource reservation and over-commit
  • Monitoring & metering
  • Isolation and security
  • Resiliency

For flexibility, the RMs at the different levels of the hierarchy will use unified interfaces to communicate between them. This interface will include a core resource model which can be extended for the needs of each resource type to be managed. Each RM could publish a profile that details its specific resource management capabilities. Given the importance of interoperability, especially within the context of Generic Enablers, the API and related model should aim to be based on existing open, IPR-unencumbered work. The Open Cloud Computing Interface (OCCI) will be used in FI-WARE for this purpose (see below).

For resiliency, the individual RMs will share a common group communication fabric, enabling efficient messaging as well as high availability and fault tolerance of the individual management components (by implementing heartbeat and failover to a standby node).

One of the unique capabilities at which FI-WARE is aimed is providing resource allocation guarantees, specified via Resource Allocation Service Level Objectives (RA-SLOs) and enforced by this GE. See more details below.

In order to achieve high resource utilization, the RMs will apply intelligent placement optimization and resource over-commit. This task is especially challenging when applied in conjunction with support for performance and RA-SLOs (mentioned above), and requires significant innovation.

Open Cloud Computing Interface

OCCI is a RESTful protocol and API for the management of cloud service resources. It comprises a set of open, community-led specifications delivered through the Open Grid Forum. OCCI was originally initiated to create a remote management API for IaaS-model-based services. It has since evolved into a flexible API with a strong focus on integration, portability, interoperability and innovation, while still offering a high degree of extensibility.

OCCI aims to leverage existing SDO specifications and integrate them so that, where an OCCI-specified feature is not rich enough, a more capable one can be brought into play. An excellent example of this is the integration of both CDMI and OVF: combined, these two standards provide a profile for open and interoperable infrastructural cloud services [OCCI_OVF_CDMI].

The main design foci of OCCI are:

  • Flexibility: enabling a dynamic, adaptable model,
  • Simplicity: do not mandate a large number of requirements for compliance with the specification. Look to provide the lowest common denominator in terms of features, and then allow providers to supply their own differentiating features that are discoverable and compliant with the OCCI core model,
  • Extensibility: enable providers to specify and expose their own service features that are discoverable and commonly understood (via core model).

The specification itself currently comprises three modular parts:

  • Core [OCCI_CORE]: This specifies the basic types and presents them through a meta-model. It is this specification that dictates the common functionality and behaviour that all specialisations of it must respect. It specifies how extensions may be defined.
  • Infrastructure [OCCI_INFRA]: This specification is an extension of Core (and provides a good example of how other parties can create extensions). It defines the types necessary to provide a basic infrastructure-as-a-service offering.
  • HTTP Rendering [OCCI_HTTP]: This document specifies how the OCCI model is communicated, both semantically and syntactically, using the RESTful architectural style.


From an architectural point of view OCCI sits on the boundary of a service provider (figure below). It does not seek to replace the proprietary protocols/APIs that a service provider may have as legacy.


Image:OCCI_interface.jpg
OCCI interface


The main capabilities of OCCI are:

  • Definitions (attributes, actions, relationships) of basic types:
      • Compute: defines an entity that processes data, typically implemented as a virtual machine.
      • Storage: defines an entity that stores information and data, typically block-level devices, implemented with technologies like iSCSI and AoE.
      • Network: defines both client (network interface) and service (L2/L3 switch) networking entities, typically implemented with software-defined networking frameworks.
  • Discovery system: types and the URL schema of their instances (providers can dictate their own) are discoverable. Extensions are also discoverable through this system.
  • Extension mechanism: allows service providers to expose their differentiating features. Those features are understood by clients through the discovery system.
  • Resource (REST) handling (CRUD) of individual and groups of resource instances
  • Tagging & grouping of resources
  • Dynamic composition, allowing the runtime addition of new attributes and functional capabilities
  • Template support for both operating systems and resource types
  • Independence of the provisioning system
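To illustrate the discovery system and the RESTful resource handling listed above, here is a hedged Python sketch that issues OCCI HTTP Rendering requests with the `requests` library. The endpoint URL is hypothetical; the `/-/` query interface, the `Category` header and the `X-OCCI-Attribute` header follow the OCCI HTTP Rendering specification.

```python
import requests

ENDPOINT = "http://cloud.example.org:8787"  # hypothetical OCCI endpoint

# Discovery: the /-/ query interface lists all kinds, mixins and actions
# (i.e. the provider's supported categories, including its extensions).
caps = requests.get(f"{ENDPOINT}/-/", headers={"Accept": "text/plain"})
print(caps.text)

# Resource creation: POST a new compute resource using the text/occi
# rendering; the kind goes in a Category header, attributes in
# X-OCCI-Attribute headers.
headers = {
    "Content-Type": "text/occi",
    "Category": 'compute; '
                'scheme="http://schemas.ogf.org/occi/infrastructure#"; '
                'class="kind"',
    "X-OCCI-Attribute": "occi.compute.cores=2, occi.compute.memory=4.0",
}
resp = requests.post(f"{ENDPOINT}/compute/", headers=headers)
resp.raise_for_status()
print("created:", resp.headers.get("Location"))
```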

The current release of the Open Cloud Computing Interface is suitable to serve other models in addition to IaaS, including e.g. PaaS [OCCI_GCDM]. It enjoys wide open source software adoption, with many implementations (1) and a number of supporting tools (2). It has been recommended by the UK G-Cloud initiative (3), is currently under consideration by NIST (4) in the US, and is also supported by the SIENA (5) and EGI (6) initiatives in the European Union. OCCI has also received significant contributions from EU FP7 projects, including RESERVOIR and SLA@SOI. Forthcoming extensions to the specification include those that expose monitoring (7) and SLA capabilities (8).

  1. - http://www.occi-wg.org/community/implementations
  2. - http://www.occi-wg.org/community/tools
  3. - http://occi-wg.org/2011/02/21/occi-and-the-uk-government-cloud/
  4. - http://www.nist.gov/itl/cloud/sajacc.cfm
  5. - http://www.sienainitiative.eu/
  6. - http://www.egi.eu
  7. - http://www.iolanes.eu
  8. - http://en.wikipedia.org/wiki/D-Grid


Resource Allocation SLOs

SLOs are technical clauses in legally binding documents called Service Level Agreements (SLAs), which specify the terms and conditions of service provisioning. SLOs specify the non-functional guarantees of the cloud provider with respect to the virtualized workloads and resources that the cloud provider offers to its users.

Enterprise grade SLO compliance is one of the critical features that are insufficiently addressed by the current cloud offerings [Reservoir-Computer2011, Reservoir-Architecture2009, SLA@SOI-Architecture10].

In this GE, we focus on resource allocation SLOs (RA-SLOs), which guarantee the actual resource allocation according to the nominal capacity of VM instances. They are useful in the context of both elastic and non-elastic services. In particular, an RA-SLO specifies that a VM is guaranteed to receive all its resources according to the specification of the VM's instance type with probability p throughout a single billing period. It should be noted that today this mechanism does not exist as part of public cloud offerings.

RA-SLO guarantees are orthogonal to those of up-time SLOs. Thus, if an up-time SLO guarantees 95.5% compliance within a single billing period, then the RA-SLO guarantees that throughout this time resource allocation is in accordance with the nominal capacity specification with a percentile of compliance p1 that may be greater than, less than or equal to the up-time percentile of compliance p.

Providing RA-SLO guarantees is especially challenging in conjunction with the natural desire of the FI-WARE Cloud Instance Provider to minimize hardware cost by over-booking resources. Such over-booking is typically done by multiplexing the existing physical capacity among multiple workloads, relying on the assumption that different workloads will typically reach peak capacity demand at different times. To guarantee RA-SLO compliance, the IaaS DataCenter Resource Management GE will execute an admission control mechanism that validates the feasibility of RA-SLO guarantees over time for a given workload, before actually committing to these guarantees under the over-booking model.
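A minimal sketch of how such an admission-control check might look, assuming the instantaneous demand of each workload can be sampled from a model or from historical traces (the workload model and all numbers below are invented for illustration):

```python
import random

def admit(existing, new_wl, capacity, p_target, samples=10_000):
    """Monte Carlo feasibility check for RA-SLOs under over-booking.

    existing / new_wl: zero-argument callables returning one random
    draw of instantaneous demand (in CPU units) for a workload.
    Admit the new workload only if, with it included, aggregate demand
    fits within physical capacity in at least a p_target fraction of
    the sampled instants.
    """
    candidate = existing + [new_wl]
    ok = sum(sum(w() for w in candidate) <= capacity
             for _ in range(samples))
    return ok / samples >= p_target

# A bursty workload: nominal 4 CPUs, but at full demand only 20% of the time.
bursty = lambda: 4 if random.random() < 0.2 else 1

print(admit([bursty, bursty, bursty], bursty, capacity=10, p_target=0.95))
```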

While this GE will be responsible for enforcement of RA-SLOs, they will be submitted to the IaaS DataCenter Resource Management GE by the IaaS Service Management GE, described in Section 3.2.2.

Critical product attributes

  • Infrastructure scalability
  • Resiliency of the management stack
  • Enforcement of Resource Allocation SLOs (RA-SLOs)
  • Optimized placement of virtual resources on physical nodes ensuring high resource utilization


IaaS Service Management

Target Usage

The IaaS Service Management GE introduces a layer on top of the IaaS Resource Manager GEs (both DataCenter and Cloud-edge) in order to provide a higher level of abstraction to Application/Service providers. Thus, the Service Provider does not have to manage the individual placement of virtual machines, storage and networks on physical resources, but instead deals with the definition of the virtual resources it needs to run an application/service, how these virtual resources relate to each other, the elasticity rules that should govern the dynamic deployment or deactivation of virtual resources, and the dynamic assignment of values to resource parameters linked to virtual resources (for example, CPUs, memory and local storage capacity on VMs, capacity on virtual storage, and bandwidth and QoS on virtual networks).

GE description

Overview

The IaaS Service Management GE relies on the concepts of vApps and Virtual Data Centers (VDCs). A vApp comprises:

  • a set of VMs
  • optionally, a set of nested vApps

A Virtual Data Center (VDC) maps to a virtual infrastructure defined by the user to host the execution of applications/services. A VDC is made up of a number of vApps, and a set of virtual networks and virtual storage systems. In order to deploy the VMs linked to a vApp, we have to provide information about their configuration, the files needed to run them, license information, security issues, and so on. In addition, a VM can require a specific technology stack installed on top of the Operating System (e.g., DB, Application Server). Finally, the provider can also specify some hardware characteristics (number of CPUs, RAM size and characteristics of network interfaces) as well as local storage capacity.

This GE is responsible for the following functions:

  • Deployment Lifecycle Management: it coordinates the workflow that is responsible for the deployment and re-deployment (i.e., on the fly change of the service definition not just VMs) of VDCs and applications/services based on an IaaS Service Manifest. It also maintains the service configuration status at runtime, allowing the application of different optimization policies for deployment and scalability control.
  • Dynamic Scalability: it will automatically control the elasticity of services by scaling the service up/down in a vertical (adding/removing horsepower such as CPU, memory or bandwidth) or horizontal (adding/removing service node replicas) way. Service scalability will be based on Elasticity Rules defined by the user through the Self-Service Portal/Backend or generated by the Advanced IaaS/PaaS Management GEs (which in turn are configured based on input provided by the user through the Self-Service Portal/Backend). To perform this function, this GE will rely on monitoring and accounting data provided by the Monitoring/Accounting GE.
  • Federation and Interoperability: in a private Cloud schema there may be clusters providing different virtualization technologies or different service quality levels (from best effort to high availability). In a hybrid Cloud schema, local infrastructures are federated with remote (public or private) Clouds to cope with peaks of demand which cannot be satisfied locally. In both cases, this layer allows service components to be distributed among different local virtual Hosting Centers, or among local and remote virtual Hosting Centers, using functions offered by the IaaS DataCenter Basic Resource Management GE.

The IaaS Service Management GE will offer as its "north" interface a standard Cloud Service Management API, which will enable programmatic creation of IaaS Service Manifest definitions based on DMTF's OVF (Open Virtualization Format), as well as the gathering of monitoring data from GEs in lower layers of the Cloud Hosting Reference Architecture (the IaaS DataCenter and Cloud-edge Resource Management GEs). Usage of this API, together with the definition of portable VM images, will guarantee portability and interoperability of deployments. This is why one of the goals of the IaaS Service Management GE in FI-WARE is to push standardization of this API within DMTF's Cloud Management Working Group (CMWG).

IaaS Service Manifest

The input to the IaaS Service Manager GE is the service manifest, where the application/service provider specifies the features and requirements of the virtual infrastructure hosting the execution of a number of applications/services. The service definition is an XML document containing information such as:

  • Service features (application and features, software properties, etc.).
  • vApps, virtual networks, virtual storage systems and other virtual resources required to deploy the service
  • Hardware requirements to deploy each VM
  • Restrictions and service KPIs for scaling up or down

As FI-WARE works with different sites and with a variety of services, virtual machines must be portable between sites, and interoperability of service and virtual machine definitions between sites must be guaranteed. In a cloud environment, one key element of the interaction between Service Providers (SPs) and the infrastructure is the service definition mechanism. In IaaS clouds this is commonly specified by packaging the software stack (operating systems, middleware and service components) into one or more virtual machines hosting an application/service, but one commonly mentioned problem is that each cloud infrastructure provider has its own proprietary mechanism for service definition. This complicates interoperability between clouds and locks a service provider to a particular vendor. Therefore, there is a need for standardizing the service definition in order to avoid vendor lock-in and facilitate interoperability among IaaS clouds.

The Open Virtualization Format (OVF) [OVF 08] is a standard from the DMTF [ServiceManifest 10] which can provide this service manifest. Its objective is to specify a portable packaging mechanism to foster the adoption of Virtual Appliances (vApps) as a new software deployment and management model (e.g. through the development of virtual appliance lifecycle management tools), in a vendor- and platform-neutral way (i.e., not tied to any particular virtual machine technology). OVF is optimized for distribution and automation, enabling streamlined installations of vApps.
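For illustration, the sketch below parses a deliberately minimal OVF-style envelope to list the virtual systems it declares; a real OVF descriptor carries many more sections (file references, disks, networks, virtual hardware), so treat this only as a structural hint:

```python
import xml.etree.ElementTree as ET

OVF_NS = "http://schemas.dmtf.org/ovf/envelope/1"

# A stripped-down, illustrative envelope; not a complete OVF document.
MANIFEST = f"""\
<Envelope xmlns="{OVF_NS}" xmlns:ovf="{OVF_NS}">
  <VirtualSystemCollection ovf:id="two-tier-service">
    <VirtualSystem ovf:id="frontend"/>
    <VirtualSystem ovf:id="database"/>
  </VirtualSystemCollection>
</Envelope>
"""

root = ET.fromstring(MANIFEST)
for vs in root.iter(f"{{{OVF_NS}}}VirtualSystem"):
    print("VM to deploy:", vs.get(f"{{{OVF_NS}}}id"))
```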

Deployment Lifecycle Management

As previously mentioned, the IaaS Service Management GE coordinates the deployment and redeployment (on the fly change of the service definition) workflow and maintains the service configuration status at runtime, allowing the application of different optimization policies for deployment and scalability control.

As commented above, FI-WARE treats a VDC, i.e., a set of virtual machines together with their network and storage support, as a whole. The Service Manager is in charge of managing the VDC as a whole, instead of independent virtual machines and networks. The IaaS Service Manager GE is able to process the IaaS Service Manifest it receives as input and translate it into the right requests to the IaaS DataCenter Resource Management GE, distributed Cloud-edge Resource Managers and even third-party IaaS Cloud providers. These requests cover the deployment of the different elements that comprise a VDC, as well as other operations (undeployment and so on). Finally, the IaaS Service Manager GE is in charge of managing the service lifecycle, as presented in the next figure.


Image:VDC_Service_lifecycle.jpg
VDC Service lifecycle

The typical lifecycle of a VDC service in the Cloud is shown in the figure above. It can be observed that one needs to develop and package a service appropriately for the Cloud [Vaquero et al. 2011], i.e., the transition from in-house premises to the Cloud is not yet straightforward. After the service is packaged, the next step takes us to the definition of what we want our service to be. In IaaS Clouds, this second step can be skipped, as they offer a virtual-machine-level API that constrains the specification of an application (understood as a set of services or virtual appliances) to a set of detached “commands” for every VM. The next step consists of deploying the service through requests to the IaaS DataCenter Resource Manager GE, distributed Cloud-edge Resource Management GEs and third-party IaaS Clouds. Having the service deployed leads us to the running phase, in which the application will spend most of its time and in which appropriate control of the service's behaviour is most desirable. When the service is running, two prominent processes are identified: 1) monitoring the service's status in order to 2) react, by scaling up/down [Cáceres et al. 2010], and so keep the promised quality of service and economic performance. At a later stage, the service can be stopped (and later resumed) or undeployed and destroyed.

Dynamic Scalability

The automated provisioning mechanisms of cloud computing can help applications to scale systems up and down in a way that balances performance and economic concerns. Scalability can be defined as “the ability of a particular system to fit a problem as the scope of that problem increases (number of elements or objects, growing volumes of work and/or being susceptible to enlargement)” [Cáceres et al. 10]. In the FI-WARE context, scalability is managed at the service level, i.e., by the IaaS Service Manager GE. The actions to scale may be classified as [Cáceres et al. 10]:

  • Vertical scaling by adding more horsepower (more processors, memory, bandwidth, etc.) to deployed virtual resources. This is the way applications are deployed on large shared-memory servers.
  • Horizontal scaling by adding more virtual resources. For example, in a typical two-layer service, more front-end VMs are added (or released) when the number of users and workload increases (decreases).

The Service Manager, besides managing the service, has to manage monitoring events and scalability rules. It is responsible for avoiding over/under-provisioning and over-costs. The IaaS Service Manager GE protects the SLOs specified for services using business-rule protection techniques. It provides a means for users to specify their application behaviour in terms of vertically or horizontally scaling virtual resources [Cáceres et al. 10] by means of elasticity rules [Chapman 10]. The elasticity rules follow the Event-Condition-Action approach, where automated actions to resize a specific service component (e.g. increase/decrease allocated memory) or to deploy/undeploy specific service instances are triggered when certain conditions relating to monitoring events (KPIs) hold. In certain circumstances, some Resource Allocation SLOs are passed to the IaaS DataCenter Resource Manager so that it can govern placement decisions.
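The Event-Condition-Action structure of these elasticity rules can be sketched as follows; the KPI names, thresholds and actions are invented for illustration:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ElasticityRule:
    """An Event-Condition-Action elasticity rule."""
    event: str                          # monitoring KPI the rule listens to
    condition: Callable[[float], bool]  # predicate over the KPI value
    action: Callable[[], None]          # scaling action to trigger

def on_kpi(rules, event, value):
    """Dispatch one monitoring event against the registered rules."""
    for rule in rules:
        if rule.event == event and rule.condition(value):
            rule.action()

rules = [
    # Horizontal scale-out: add a front-end replica under heavy load.
    ElasticityRule("frontend.avg_cpu", lambda v: v > 0.80,
                   lambda: print("deploy one more front-end VM")),
    # Vertical scale-down: shrink memory when the tier is idle.
    ElasticityRule("frontend.avg_cpu", lambda v: v < 0.20,
                   lambda: print("reduce allocated memory")),
]

on_kpi(rules, "frontend.avg_cpu", 0.92)  # triggers the scale-out rule
```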

Regarding scalability, our solution is much more flexible and richer than other solutions like Cloud Formation, RightScale or Amazon Elastic Beanstalk, and its actions are not limited to scaling up and down [Vaquero et al. 2011]. Another differentiating aspect of this solution is that it is based on standard solutions and does not use proprietary APIs.

Service Monitoring

Every system needs to incorporate monitoring mechanisms in order to be able to constantly check its performance. This is especially true in service clouds that are governed by SLOs, which are measurable technical artefacts derived from the clauses of Service Level Agreements (SLAs); the system therefore needs to be able to constantly check that its performance adheres to the terms contracted. In order to guarantee the SLOs of the SLAs, the Service Manager offers scalability mechanisms to satisfy the customer's demand. This scalability is driven by service monitoring metrics, which can vary from virtual hardware attributes to Key Performance Indicators (KPIs), and even platform-level metrics collected inside the virtual machine or service-level KPIs (as is the case for PaaS platforms).

Federation and Interoperability

In a private Cloud schema there may be clusters providing different virtualization technologies or different service quality levels. Also, in a hybrid Cloud schema, local infrastructures are federated with remote (public or private) Clouds to cope with peaks of demand which cannot be satisfied locally. In both cases, a service or a virtual machine can be deployed on different Cloud providers in a federation scenario. However, the Cloud providers use proprietary interfaces and data models, which introduces problems related to heterogeneity in operations and data. Thus, there is a need for interoperability in order to avoid ad-hoc developments and increased time-to-market. The solution should offer service providers common access based on a common API that aggregates the different Cloud providers' APIs. In order to guarantee interoperability, the data model in the aggregator API should be standard.

Cloud Service Management API

The Cloud Service Management API allows the deployment of service manifests, or fragments of service manifests, based on the standard Open Virtualization Format (OVF) defined by the DMTF [OVF 08]. This API is a RESTful, resource-oriented API accessed via HTTP, which uses XML-based representations for information interchange. It will be based on the TCloud API [TCloud 10] being proposed at the CMWG of the DMTF, which in turn builds on the already consolidated OVF specifications and on the vCloud specification [VMWare 09] published by VMware and submitted to the DMTF for consideration. The goal is for the Cloud Service Management API to evolve in line with the specifications approved at the CMWG in the DMTF.

The Cloud Service Management API will define a set of operations to perform actions over:

  • Virtual Appliances (VApp): Virtual Machines running on top of a hypervisor,
  • Hardware resources: the virtual hardware resources that the VApp contains,
  • Networks: both public and private networks, and
  • Virtual Data Centers (VDC): sets of virtual resources (e.g. networks, computing capacity) in which VApps are instantiated, owned by Organizations (Org), which are independent units.

The Cloud Service Management API will define operations to perform actions over the above resources, categorized as follows: Self-Provisioning operations, to instantiate VApps and VDC resources, and Self-Management operations, to manage the instantiated VApps (power on a VApp, etc.). In addition, it provides extensions on monitoring, storage, and so on.
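A hedged sketch of how a client might drive these Self-Provisioning and Self-Management operations over HTTP follows; the base URL, paths and action names are illustrative and do not reproduce the literal TCloud resource names:

```python
import requests

BASE = "https://cloud.example.org/api"  # hypothetical provider endpoint

# Self-Provisioning: instantiate a vApp inside an existing VDC by POSTing
# an OVF-based manifest fragment.
with open("frontend.ovf") as f:         # assumed local OVF fragment
    manifest = f.read()
resp = requests.post(
    f"{BASE}/org/acme/vdc/vdc-1/action/instantiateVApp",
    data=manifest,
    headers={"Content-Type": "application/xml"},
)
resp.raise_for_status()
vapp_url = resp.headers["Location"]     # URL of the new vApp resource

# Self-Management: power on the instantiated vApp.
requests.post(f"{vapp_url}/action/powerOn").raise_for_status()
```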

In addition, the Cloud Service Management API will be focused on adding network intelligence, reliability and security features to cloud computing empowered by enhanced telecom network integration [TCloud 10]. Moreover, it will aim to extend current cloud computing models providing more flexibility and control to cloud computing customers. DMTF defines cloud computing as “an approach to delivering IT services that promises to be highly agile and lower costs for consumers, especially up-front costs”. This approach impacts not only the way computing is used but also the technology and processes that are used to construct and manage IT within enterprises and service providers.

Compatibility with the main operations and data types defined in vCloud [VMWare 09] will be maintained in the Cloud Service Management API, but it will provide extensions for advanced Cloud Computing management capabilities, including additional shared storage for service data, network element provisioning (different flavors of load balancers and firewalls), monitoring, snapshot management, etc. The following table summarizes the main characteristics currently supported in the TCloud API and therefore to be supported by the Cloud Service Management API.

Cloud Service Management API characteristics:

  • Aggregation paradigm: the auto-browsable API shows the extensions and functionalities provided by the implementation. If a capability is supported with some operations missing, the interface will still be implemented, and an OperationNotSupportedException will be thrown when the client calls a non-supported operation.
  • API type: the API will be a RESTful API. A Java binding easing programming against it will also be provided; bindings to other languages may also be considered.
  • Support for 3rd party driver integration: the API implementation will include drivers handling access to the IaaS DataCenter Resource Manager GE and to 3rd party GEs exporting OCCI. Additional drivers (to handle Amazon EC2, for instance) will be easy to add; the intention is to support drivers for Amazon EC2, OpenNebula, Emotive and Flexiscale.
  • VM type: the data model is based on OVF. It will be possible to define:
      • Virtualization technology (xen, kvm...)
      • The URL where the image is located
      • Virtual hardware parameters (CPU, RAM, disk)
      • Network information (public, private)
      • Software configuration information (contextualization information)
      • The definition of the service as a whole (not only VMs)
      • Scalability information
      • Storage information (NAS, SAN...)
  • VM image: no restriction on the supported OS. It is possible to deploy a VM or attach an image to be deployed.
  • VM lifecycle: supported states are deploy, start/stop, reboot, suspend, activate and undeploy. Reconfiguration is supported subject to state constraints (e.g., stopping a VM and deploying a new one).
  • Storage: support for various storage types (NAS, SAN...), including integration with the CDMI API. Lifecycle operations: create/delete, attach/detach, snapshots.
  • Network: supports declaration and configuration of resources linked to Public and Private Virtual Networks.
  • User & account management: the customer concept is related to the VDC concept. It is possible to add/delete/modify... a VDC.
  • Advanced features: monitoring; multi-VM deployment (as in OVF).
  • Local resource accounting: none; the API holds no information about resources (this is within the scope of the Cloud Providers).
  • Support for lengthy operations: the API will support asynchronous interactions, providing a task Id. Polling and pushing mechanisms are also provided.
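Since lengthy operations return a task Id, a client will typically poll the corresponding task resource until it completes. A minimal polling sketch, assuming the API exposes a JSON task document with a 'status' field (the URL shape and field names are illustrative):

```python
import time
import requests

def wait_for_task(task_url, interval=5, timeout=600):
    """Poll an asynchronous task resource until it finishes or times out."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = requests.get(task_url).json().get("status")
        if status in ("success", "error"):   # terminal states (assumed names)
            return status
        time.sleep(interval)                 # task still running; poll again
    raise TimeoutError(f"task {task_url} still running after {timeout}s")
```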


Critical product attributes

  • Management of the service as a whole (considering virtual machines, networks and storage support).
  • Management of the overall service lifecycle.
  • Scalability at the service level with powerful elasticity rules.
  • Service monitoring not only at the Infrastructure level but also at the Service and Software (installed in the VM) KPI levels.
  • Interaction with different IaaS providers according to richer placement policies.


PaaS Management

Target usage

The PaaS Management GE will provide users with the facility to manage their applications without worrying about the underlying infrastructure of virtual resources (VMs, virtual networks and virtual storage) required for the execution of the application components. This means that the user will only describe the structure of the application, the links among Application Components (ACs) and the requirements for each Application Component. This GE will deal with the deployment of applications based on an abstract Application Description (AD) that specifies the software stack required for the application components to run. This software stack will be structured into Platform Container Nodes, each linked to a concrete software stack supporting the execution of a number of Application Components and allowing the Application to be structured into several Software Tiers. Besides, the AD will describe the Elasticity Rules and configuration parameters that help define how the initial deployment is configured (generation of the Service Manifest definition) and what initial elasticity rules will be established.


Image:Abstract_Application_Description_and_their_mapping_into_deployed_VM_images.jpg
Abstract Application Description and their mapping into deployed VM images

The main difference between IaaS and PaaS is that IaaS manages the structure and lifecycle of the virtual infrastructure required for applications to run. PaaS, on the other hand, manages the structure and lifecycle of the applications and platform containers.

GE Description

Overview

Application description

In the context of a PaaS provider, a description of the Application is required in order to deploy the application onto the platform. This Application Description (AD) will contain:

  • The structure of the application: the Application is structured into Application Components, which in turn are distributed across a number of connected Platform Container Nodes.
  • Requirements in terms of:
      • Software stack linked to each Platform Container Node: webserver technology, application server technology, services composition technology, database technology, etc. The catalogue of available technologies and products will be provided and managed by the platform.
      • Resource execution requirements linked to each Platform Container Node, in terms of CPU speed, disk storage and storage types.
      • Network resource requirements. This level only specifies these requirements; the IaaS Service Manager GE will be responsible for managing them.
  • Connection with FI-WARE GEs, in case the FI-WARE Cloud Instance Provider offers additional GEs to be used by the application, like the Data/Context Management GEs or the Security GE. In this case, the description of the application should contain details about the usage of the services provided by these GEs, allowing the platform to configure the proper access of the application to these services.
  • Elasticity rules: the client can specify the way the application scales based on measurements.
  • Other requirements can also be included depending on the capabilities of the platform, like backup policies, placement policies, etc.

The description of the application at this stage is abstract in the sense that the user describes the application without considering how such description will map into a final IaaS Service manifest or changes into existing IaaS deployments. Thus, the user will not describe how Platform Container Nodes will map into VMs.

The users will also define the Elasticity Rules based on KPIs that are relevant from an application perspective (e.g., maximum number of DB connections, maximum number of concurrent accesses to a Web Server, and so on). It is the PaaS Management GE that knows how these application-oriented elasticity rules are mapped into the elasticity rules handled by the IaaS Service Manager GE.
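A toy sketch of the kind of translation this implies, with invented KPI and metric names, mapping one application-oriented rule into an infrastructure-level rule for the IaaS Service Manager GE:

```python
# Hypothetical translation table from application-level KPIs to the
# infrastructure-level metrics and actions the IaaS layer understands.
KPI_TO_INFRA = {
    "db.connections":        ("db_tier.vm_count",       "scale_out"),
    "webserver.concurrency": ("frontend_tier.vm_count", "scale_out"),
}

def translate_rule(app_kpi: str, threshold: float) -> dict:
    """Rewrite one application-oriented elasticity rule as an IaaS rule."""
    metric, action = KPI_TO_INFRA[app_kpi]
    return {"when": f"{app_kpi} > {threshold}",
            "target": metric, "action": action}

print(translate_rule("db.connections", 500))
```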

One possible standard to use as a basis for these abstract ADs is the OVF (defined by DMTF) [OVF 08], which is also used for the description of VDCs in the IaaS context. The support of PaaS concepts could imply the future extension of the OVF to include concepts relevant to PaaS as described above. FI-WARE will carefully monitor results relevant to this matter coming out from the 4CaaSt EU FP7 project.

Deployment Runtime design

Before the application is deployed, a more detailed descriptor of the application is designed based on the requirements specified by the client.

The input of this process will be the Application Description (AD). Based on the requirements imposed on the different Application Components (ACs), the PaaS Manager GE will generate an Application Deployment Descriptor (ADD), which will comprise an initial IaaS Service Manifest or a set of changes to an existing IaaS deployment to be delivered to the IaaS Service Management GE. It will also comprise the information necessary to install, configure and run the different Application Components. Some examples of decisions taken while mapping an AD into information to be submitted to the IaaS Service Management GE are:

  • One Platform Container Node requires more CPU capacity than a single VM can provide, so the Platform Container Node is mapped into a number of VMs with a load balancer in front of them.
  • Two equivalent Platform Container Nodes (defined over a compatible technology stack) have very small requirements, so the system decides to map both into a single VM.

Note that these decisions affect the final network configuration and also the definition of the software products included in the different VMs on which the Application Components will finally run.
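As an illustration of the first decision above, here is a minimal sketch assuming an invented per-VM CPU ceiling and the AD structure sketched earlier; the real PaaS Manager GE mapping logic is not specified in this document.

 # A minimal sketch of one AD-to-ADD mapping decision; names and the
 # threshold are invented for illustration.
 import math

 MAX_CPU_PER_VM_GHZ = 4.0   # assumed capacity of the largest available VM

 def map_container_node_to_vms(node):
     """Split a Platform Container Node across several VMs (fronted by a
     load balancer) when its CPU requirement exceeds a single VM."""
     required = node["resources"]["cpu_ghz"]
     if required <= MAX_CPU_PER_VM_GHZ:
         return {"vms": [{"node": node["id"], "cpu_ghz": required}]}
     count = math.ceil(required / MAX_CPU_PER_VM_GHZ)
     vms = [{"node": node["id"], "cpu_ghz": required / count}
            for _ in range(count)]
     return {"vms": vms, "load_balancer": True}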

The process described above is illustrated in the following figure. Once again, the Application Deployment Descriptor (ADD) may be expressed based on an extension of OVF.


Image:Cloud_Deployment_design_process.jpg
Deployment design process


Runtime and Application deployment

Once the Deployment Runtime design is done and the final Application Deployment Descriptor (ADD) generated, the actual provisioning and deployment of the application is performed. This implies planning the sequence of steps that will be required to implement that deployment:

  1. Interaction with the IaaS Service Management GE, for setting up the VDC on top of which Application Components will run. This will require passing an IaaS Service Manifest, or a set of changes to an existing IaaS deployment, to the IaaS Service Management GE.
  2. Installation of the software linked to Application Components, and provisioning of the connections to other FI-WARE GE services (e.g., Data/Context Management GEs) that the application is going to use.
  3. Start of the different Application Components into which the Application is structured.

Once the first step is finished, the VMs, virtual networks and virtual storage capabilities needed by the Application will be deployed into the IaaS. A given Platform Container Node will be mapped into one or several VMs that are instances of VM images taken from the VM image repository. Therefore, the software stack linked to the Platform Container Node is mapped into software configured in the images of the corresponding VMs.

After the VDC is set up, some parameters linked to VMs will be configured for the correct installation of Application Components or the proper connection to complementary FI-WARE GEs (e.g., Data/Context Management GEs). In addition, the Application Components themselves will be installed. This process of setting configuration parameters and installing additional software on an existing VM is called contextualisation.

Once the Application Components are installed and correctly configured, the application will be started. The order in which the different Application Components are started is defined in the Application Description (AD). This makes it possible to handle dependencies between Application Components.
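The three-step sequence above can be summarised in code. The helper functions below are hypothetical stand-ins for interactions whose concrete APIs are defined elsewhere (or still to be defined); only the ordering logic is the point of the sketch.

 # Hedged sketch of the provisioning sequence; all helpers are stand-ins.
 def submit_iaas_manifest(manifest):
     """Stand-in for the call to the IaaS Service Management GE."""
     return {"vdc_id": "vdc-1", "manifest": manifest}

 def contextualise(vdc, component):
     """Stand-in for setting VM parameters and installing software."""
     print("contextualising", component["name"], "in", vdc["vdc_id"])

 def start_component(vdc, component):
     """Stand-in for starting one Application Component."""
     print("starting", component["name"])

 def deploy_application(add):
     # 1. Set up the VDC on top of which the Application Components run.
     vdc = submit_iaas_manifest(add["iaas_manifest"])
     # 2. Contextualise each VM: configuration parameters, software
     #    installation, connections to complementary FI-WARE GE services.
     for component in add["application_components"]:
         contextualise(vdc, component)
     # 3. Start components in the order given in the AD, so that
     #    dependencies between Application Components are respected.
     for component in sorted(add["application_components"],
                             key=lambda c: c["start_order"]):
         start_component(vdc, component)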

During the execution of the application, monitoring information will be collected and used for various purposes, such as scaling of the application, SLO management, etc.

The application will finally be terminated by the client (final user); this process frees the resources assigned to the application (inside the IaaS and in the cloud services used).

Adaptation and scalability of deployed application/services

The PaaS Management GE will allow applications to adapt during execution to the changing demands of users or to resource shortages. This could be linked to the SLOs of the application's SLA in place, or to scalability rules defined by the application provider.

The scalability capability is closely related to the monitoring system, which collects and processes the different KPIs that could affect the scaling of the application.

The scalability can affect application components, platform elements (products implementing the software stack linked to Platform Container Nodes) or complementary FI-WARE GE services integrated as Cloud Services. This layer must know the characteristics of the different elements: how they scale and what their limitations are.

Not only will scale-up (within the same physical host) and scale-out (to multiple VMs) be supported; the shrinking of resources will also be performed when the environment of the application allows it.

There will be an interaction between the scalability components and the provisioning and deployment layer to create, stop, destroy, and reconfigure VMs, infrastructure and/or network resources.

The PaaS Management GE will drive resource allocation using the underlying IaaS layer according to the elasticity rules.
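A minimal sketch of such an elasticity control loop, assuming an invented rule format and stubbed IaaS scaling calls, might look as follows.

 # Illustrative control loop for the adaptation behaviour described above;
 # rule format, KPI source and the IaaS calls are all assumptions.
 class IaasStub:
     def scale_out(self, target):
         print("adding capacity to", target)
     def scale_in(self, target):
         print("shrinking", target)

 def evaluate_elasticity(rules, kpis, iaas):
     """Grow when a KPI exceeds its ceiling, shrink when it falls below
     its floor; anything in between leaves the deployment untouched."""
     for rule in rules:
         value = kpis.get(rule["kpi"], 0)
         if value > rule["ceiling"]:
             iaas.scale_out(rule["target"])
         elif value < rule.get("floor", float("-inf")):
             iaas.scale_in(rule["target"])

 evaluate_elasticity(
     [{"kpi": "db_connections", "ceiling": 200, "floor": 20, "target": "db"}],
     {"db_connections": 250},
     IaasStub())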

Critical product attributes

  • A common specification for the definition of the complete application structure and its requirements.
  • The ability to automatically design the final structure of the deployment based on the abstract description of the application and its requirements and restrictions.
  • Automatic management of low-level concepts, allowing the client to focus on the application. The deployment of application containers, databases and load balancers, as well as the scalability of the application, will be managed by the PaaS layer.
  • Common interface API for multiple and different IaaS providers.


Object Storage

Target usage

The Object Storage GE comprises a storage service that operates at a more abstract level than block-based storage (e.g. iSCSI- or AoE-type technologies). Rather than storing a sequence of bits and bytes, it stores items as units of both opaque data and metadata. An example of object-based storage is Amazon's S3 service offering, where data (objects) are stored in buckets (containers of objects).

The users of such a service will be both the FI-WARE Cloud Instance Provider (FCIP) and the FI-WARE Cloud Instance User (FCIU).

  • FCIP usage: The FCIP can take two roles: one as a consumer of the service and another as its manager. From a consumer perspective, the FCIP will use this system to store certain types of data; a good example is the storage of monitoring, reporting and auditing data. That data could then be made available to FCIUs or not, depending on the wishes of the FCIP. The Object Storage service could also be used as a virtual machine staging area. This has two aspects, one for the FCIP and one for the FCIU: an FCIU uploads a virtual machine image to the object storage service and, once it is received, the FCIP makes the image available to the FCIU in order to satisfy a particular customised virtual machine request (the case here is that the virtual machine images the FCIP offers are not sufficient and the user wishes to supply their own). From a management perspective, the FCIP will expect the system to require as little maintenance as possible. This entails that where:
      • stale data exists, it is purged;
      • deactivated accounts are present, they are removed;
      • corrupt data is discovered, it is replaced with a valid replica;
      • issues are discovered, they are raised to an automated service that attempts to resolve them, with notifications sent to the FCIP if it cannot;
      • statistics are necessary, a full statistics system is available to inspect the system and users' utilisation of it;
      • new hardware (storage capability) is required, it can easily be added without any drop in service, allowing the storage capacity to grow over time.
  • FCIU usage: The FCIU will use the object storage service as a means to distribute static content, rather than incur the additional load of serving static content from an application. Taking this approach allows the FCIP to optimise the distribution of those files. The FCIP can also use this as a building block to offer further content distribution network capabilities. The FCIU could also use the object storage service as a means to supply a customised virtual machine that only they have access to (the storage is isolated by user). This would work in a similar fashion to how customised virtual machine images are supplied on Amazon EC2.

GE description

The Object Storage GE in FI-WARE will be composed of loosely coupled components. One potential architecture would separate the components for API handling, authentication, storage management and storage handling. Separating the main functional components of the system allows it to scale. Given that the demand on storage systems is for ever-increasing capacity, the system should scale horizontally on commodity hardware. A high-level view of the architecture of the Object Storage GE could look like:


Image:Object_Storage_GE.jpg
Object Storage GE

The OS GE should support integration of external AuthZ (authorization) and AuthN (authentication) so that integration with Security GEs can easily be accomplished.

In order to remain interoperable and mitigate lock-in, the Object Storage GE will rely on open, patent-unencumbered standards for the definition of interfaces and, where appropriate, will seek to integrate with other related and complementary ones. These interfaces should implement techniques that allow the migration of data held under the management of the OS GE to other services (see CDMI). This is core to providing Object Storage as a Generic Enabler. Currently, the most widely adopted cloud-oriented storage standard is SNIA's CDMI specification, which is well aligned with other specifications such as DMTF's OVF and OGF's OCCI.

The Object Storage GE will offer basic CRUD interaction on the objects it stores and manages. Along with this, the OS GE will:

  • Store objects (any opaque data entity) along with user- and provider-specified metadata in logical (e.g. hierarchical filesystem or tagging) and/or physical groupings (e.g. location). Those groupings are known in some systems as 'containers'. The stored objects must then be listable according to how they have been grouped.
  • Allow users, through metadata, to specify particular qualities of service (e.g. number of replicas, geographic location of the data) that must be adhered to. This should be possible not only on a per-object basis but also on groups of objects (see the sketch after this list).
  • Enable versioning of objects. Every time there is a modification (update, delete) to an existing object, the previous copy of the object should be kept.
  • Provide a means to retrieve metrics associated with objects and sets of objects.
  • Provide a means to retrieve audit and accounting information, such as access logs and billing information.
  • As mentioned above, consider providing further endpoints for integration with other provider infrastructural services.
  • Offer a discovery mechanism so that potential clients can discover what service offerings are available. It is taken for granted that the service will be a storage service; however, a client will need to know what tunable parameters and service specialisations are available. This is an important interoperability feature.
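A sketch of the kind of object record implied by the list above: opaque data plus user metadata, grouping, quality-of-service attributes and retained versions. The field names are illustrative assumptions, not a FI-WARE specification.

 # Illustrative data model for a stored object; all fields are assumptions.
 from dataclasses import dataclass, field

 @dataclass
 class StoredObject:
     name: str
     data: bytes                                   # opaque payload
     container: str                                # logical grouping
     user_metadata: dict = field(default_factory=dict)
     qos: dict = field(default_factory=dict)       # e.g. {"replicas": 3, "geo": "EU"}
     versions: list = field(default_factory=list)  # previous copies kept on update

     def update(self, new_data: bytes):
         """Keep the old payload as a version, then replace it."""
         self.versions.append(self.data)
         self.data = new_data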

From a management perspective, the Object Storage GE will:

  • Provide system-wide policies. Though a user might specify certain quality-of-service attributes through an object's metadata, these might violate a system-wide, provider-set policy. The reconciliation of user and provider requirements should be considered and processed by the system. This aspect includes provider-imposed limitations such as per-user rate limiting.
  • Allow for the use of commodity hardware, and for new hardware to be easily added to the system at runtime to deal with a growing system. This growth entails that the system be architected so that it scales horizontally.
  • Integrate with different back-end storage systems.
  • Integrate with different authentication and authorisation systems. The system should have a pluggable mechanism to enable this.

CDMI API

The SNIA Cloud Data Management Interface (CDMI) [CDMI 11] is the interface used to create, retrieve, update, and delete (CRUD) data contents in the cloud. The CDMI standard uses RESTful principles in its interface design. Through this interface, the client will be able to discover the capabilities of the cloud storage offering and to manage containers. Containers are the storage units in which the data is placed. Data system metadata for containers may also be set through this interface.

The main characteristic of CDMI is that it is designed to move not only data but also (and most importantly) metadata from cloud to cloud. When managing large amounts of data with differing requirements, metadata is used to reflect those requirements in a way that lets the underlying data services differentiate their treatment of the data to meet them.

Management and administrative applications may also use this interface to deal with containers, domains, security access, and monitoring/billing information. Moreover, the storage functionality is accessible via legacy or proprietary protocols too, in order to allow the utilisation of legacy cloud storage systems.

This standard addresses attributes that are important from the point of view of cloud services, such as pay-as-you-go, the illusion of infinite capacity (through elasticity), and simplicity of use and management.

The common operations that CDMI manages are the following (a sketch of a few of these calls over HTTP follows the list):

  • Discover the Capabilities of a Cloud Storage Provider
  • Create a New Container
  • Create a Data Object in a Container
  • List the Contents of a Container
  • Read the Contents of a Data Object
  • Read Only the Value of a Data Object
  • Delete a Data Object
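A hedged sketch of a few of these operations over HTTP. The endpoint URL, container and object names are hypothetical; the header and body conventions follow the CDMI 1.0.x working draft cited in the references.

 # Sketch of CDMI calls using the third-party 'requests' library.
 import requests

 BASE = "https://storage.example.org"      # hypothetical CDMI endpoint
 HDRS = {"X-CDMI-Specification-Version": "1.0.1"}

 # Discover the capabilities of the cloud storage provider.
 caps = requests.get(BASE + "/cdmi_capabilities/",
                     headers=dict(HDRS, Accept="application/cdmi-capability"))

 # Create a new container.
 requests.put(BASE + "/photos/",
              headers=dict(HDRS, **{"Content-Type": "application/cdmi-container"}))

 # Create a data object (value plus metadata) in the container.
 requests.put(BASE + "/photos/holiday.txt",
              headers=dict(HDRS, **{"Content-Type": "application/cdmi-object"}),
              json={"mimetype": "text/plain",
                    "metadata": {"album": "holiday-2011"},
                    "value": "object payload"})

 # Read the contents of the data object, then delete it.
 obj = requests.get(BASE + "/photos/holiday.txt",
                    headers=dict(HDRS, Accept="application/cdmi-object"))
 requests.delete(BASE + "/photos/holiday.txt", headers=HDRS)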


Critical product attributes

  • Horizontally scaling service that allows for the dynamic addition of commodity hardware
  • Can integrate a number of different backend storage facilities
  • Implements an open, standard API for clients
  • Offers efficient management access
  • Can be integrated with other provider infrastructural services (TBD)
  • Offers storage services with client specified qualities of service
  • Enables provider policies to govern client requirements
  • Pluggable back-end authentication services
  • Monitoring, logging and statistics system for both consumer and provider

IaaS Cloud-edge Resource Management

Target usage

At first glance, computing and storing in the cloud opens up unlimited perspectives for the services offered to the end user: the cloud virtual platform, made of a mass of high-performance servers connected via many high-speed links, seems an inexhaustible resource in terms of computing and storage capabilities.

However, the link between the cloud and end consumers (as well as many SMEs) appears to be a weak point of the system: it may sometimes be unique (therefore a single point of failure) and it may offer relatively low bandwidth in some scenarios. Typical ADSL bandwidths are in the range of several Mbit/s, while a private LAN such as a home network offers at least 100 Mb/s and, more and more commonly, 1 Gb/s. One could argue that new home connections based on optical fibre technologies will increase this bandwidth to 100 Mb/s and beyond. However, we can reasonably consider that, in the constant movement of technology improvement, carrying a bit inside a home network will remain less costly and more effective than carrying a bit between the home network and the cloud. And even if the bandwidth goes up, the uniqueness of the link remains.

For these reasons, it may be helpful to use an intermediate entity, which we call the “cloud proxy”, located in the home system and equipped with storage and cloud hosting capabilities. The cloud proxy is the cloud's agent close to the user. It may host part of a cloud application, thereby handling communications between cloud applications and the end user, even when communication with the centralised data centers is down or the user is no longer active; it may provide intermediate storage facilities between the user and the cloud; etc.

The cloud proxy may first be a hardware device with two advantageous characteristics. First, it is always powered on, making a permanent connection with the cloud possible, independently of the presence of the end user. Secondly, it is connected to each device in the home network, making it possible for each end user, whatever their device, to benefit from the cloud proxy's presence. Hardware with these two characteristics typically corresponds to the home gateway. The home gateway (i.e. the cloud proxy) may be owned by the ISP (typically playing the role of FI-WARE Cloud Instance Provider, or privileged partner of the FI-WARE Cloud Instance Provider), which makes it available to the user.

The cloud proxy may also comprise a number of software components. Connecting each device in an appropriate manner (i.e., with the ad hoc protocols) may require computer skills if this operation is not automated. The software part of the cloud proxy would be owned by the FI-WARE Cloud Instance Provider (who might be the ISP or a privileged partner of the ISP, thereby guaranteeing a trusted execution environment). One purpose of the cloud proxy software may be to automate the connection to the diverse devices making up the home equipment. It may also provide the appropriate computing platform to host a program that the cloud may advantageously choose to download and execute locally on the cloud proxy. This computing platform may include middleware through which local applications and cloud applications can interact, with the purpose of offering the best service to the end user.

The picture below illustrates the concept of cloud proxy outlined here.

Image:Cloud_Edge_and_Cloud_Proxy.jpg
Cloud Edge and Cloud Proxy

Application developers shall be aware of the existence of a cloud proxy, since it may help them build applications/services that provide a better experience to end users. Cloud proxy functions in FI-WARE will be available to application developers or other GEs in FI-WARE through a set of APIs that still have to be defined. The IaaS Cloud-edge Resource Management GE comprises those functions that enable the allocation of virtual computing, storage and communication resources in cloud proxies. It provides an API that may be exposed as part of the Self-Service API provided by FI-WARE Cloud Instances. Given that it will also allow the deployment of Virtual Machines (VMs) comprising a standard technology stack, cloud proxies shall also be considered one element of the PaaS offered to application/service developers.

Several use cases are given here to illustrate the concept of cloud proxy presented above, and how application developers may take advantage of it.

From Cloud to User: download a Video On-Demand (VOD) catalogue

The user has subscribed to a VOD service. In addition to the simple delivery of movies, the VOD service also includes a browsing service with the viewing of trailers. The reaction speed of the browsing service is a key point of its attractiveness. Storing the VOD database locally, in particular the trailers (or at least part of them), helps improve the reaction speed, because access to the database is through the high-speed private LAN instead of the ADSL link. The VOD cloud application may download onto the cloud proxy part of its database together with an associated application that offers the user a browsing service. The exact way the VOD cloud application and the downloaded browsing application inter-operate has still to be specified.

The VOD application developer shall be aware that the cloud proxy has storage capabilities and is able to store part of the VOD database. Ways of determining or negotiating how much storage the cloud proxy offers to the VOD application have still to be specified. Knowing the amount of storage available on the cloud proxy, the VOD application may decide which part of its data shall be downloaded and where (on the cloud proxy). When the user requests a piece of data that has been downloaded to their cloud proxy, the application shall redirect the request to the corresponding local storage. The exact mechanism for this redirection has still to be specified.

From User to Cloud: upload personal pictures

The user has subscribed to a picture storing and sharing service (typically like Picasa). He wants to upload a picture album, i.e. a certain number of pictures. Due to the relatively low bandwidth offered by the ADSL link, the total time for uploading all the pictures to the cloud may be several hours, constraining the user to remain online for that time. Alternatively, the pictures to upload may be temporarily stored on the intermediate storage offered by the cloud proxy. The time for uploading the pictures from the user device to the cloud proxy may be relatively short (some minutes). The cloud proxy will then be in charge of uploading the pictures to the cloud as a background task, leaving the user free to leave the home network. The exact organisation of, and relationships between, the user device, the cloud proxy and the service in the cloud have still to be specified.

The service may be offered to the end user by cascading two applications: one running on the cloud proxy, which offers an interface to the end user and provides the intermediate storage; and a second one running on the cloud, which performs the final backup. The application running on the cloud proxy may be provided by the application/service provider, and the combination of the two applications may be seen by the user as one cloud application.

Peer-Assisted application: offload a distributed storage system

Storage services in the cloud may require substantial storage resources. To avoid a huge increase in the storage capacity needed in the cloud, the idea of offloading part of the stored data down to end users has been developed. Distributed storage systems (with more or less storage in the cloud) have been proposed: the user trades part of his own storage capacity for storage capacity in the distributed storage system. In other words, the distributed storage system organises the exchange of data between users, with a user hosting data belonging to other users while that user's data is in turn hosted by others. One important element of such a system, beyond the amount of storage a user makes available, is the availability of that storage. A user who is only present one hour per day, thus making his storage available to the system only one hour per day, contributes less than a user present 24 hours a day. However, being present 24 hours a day is a strong constraint for a user. It is not for a cloud proxy, which could make its storage available to the system on behalf of the user. And, for its part, the cloud proxy could make available to the user the storage space thus gained in the distributed storage system.


GE Description

The cloud proxy corresponds to a hardware device on which a number of virtual computing, storage and communication resources can be allocated to support the execution of application components or even full applications. The IaaS Cloud-edge Resource Management GE corresponds to software running on the cloud proxy that enables the deployment and lifecycle management of Virtual Machines comprising application components or full applications, plus the technology stack (operating system, middleware, common facilities) they require to run.

The IaaS Cloud-edge Resource Management GE comprises a VM management module that is in charge of starting, stopping and monitoring the execution of a VM when required. The VM to be deployed will be based on a VM image previously downloaded using functions also supported by the IaaS Cloud-edge Resource Management GE. Communication with the VM management and VM image download modules may be based on APIs designed for optimised use of the communications between the cloud proxy and cloud data centers. The exact specification of this API has still to be defined. Using this API, the IaaS Service Management GE should be capable of deploying service manifests (deployment descriptors) that distribute applications partly in centralised data centers and partly on a highly distributed network of cloud proxies.
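Since this API is explicitly still to be defined, the following is only a sketch of the operations the text implies: VM image download plus VM lifecycle management on the proxy. Every name in it is an assumption.

 # Hypothetical sketch of VM lifecycle management on a cloud proxy; the
 # real API of the IaaS Cloud-edge Resource Management GE is still open.
 class CloudProxyVMManager:
     def __init__(self):
         self.images = {}   # image id -> local path on the proxy
         self.vms = {}      # vm id -> state

     def download_image(self, image_id, source_url):
         """Fetch a VM image from the data center ahead of deployment
         (the actual transfer is elided in this stub)."""
         self.images[image_id] = "/var/lib/proxy/images/" + image_id

     def start_vm(self, vm_id, image_id):
         """Deploy and start a VM based on a previously downloaded image."""
         assert image_id in self.images, "image must be downloaded first"
         self.vms[vm_id] = "running"

     def stop_vm(self, vm_id):
         self.vms[vm_id] = "stopped"

     def monitor(self, vm_id):
         """Report the execution state of a VM to the management layer."""
         return {"vm": vm_id, "state": self.vms.get(vm_id, "unknown")}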

Note that, to some extent, the IaaS Cloud-edge Resource Management GE software that runs on cloud proxies plays a role similar to that of the IaaS DataCenter Resource Management GE, but works at the scale of a single device (the home gateway) or a group of devices (federated home gateways). That is why, to some extent, a cloud proxy may also be referred to as a “nano-datacenter”.

FI-WARE will not only define and develop the software linked to the IaaS Cloud-edge Resource Management GE, but also software linked to the middleware technologies and common facility libraries that will be used in VM images deployed on cloud proxies. In this way, cloud proxies become one type of Platform Container Node in the FI-WARE PaaS offering. Middleware technologies will ease communication between applications running on the cloud proxy and a) applications running on devices associated with the home environment, b) applications running in data centers, or c) applications running on other cloud proxies. Common facilities will ease access from applications to storage resources located at cloud proxies (and possibly shared across several cloud proxy devices) or to some cloud proxy device capabilities (camera, sensors, etc., if any). These middleware and common facility technologies will be defined and developed as part of the I2ND (Interface to Network and Devices) chapter.

Critical product attributes

  • Cloud proxy as an evolution of the home hub, able to federate the connected devices and expose functionalities to support a large variety of service bundles.
  • Capability to host applications and govern allocation of computational resources beyond centralized data centers.


Resource Monitoring

Target usage

It has been said that “if you can’t measure it, you can’t manage it”, and this maxim is as true in computer science as in industrial management. The ability to monitor the different GEs (via exposed metrics) and their interactions is an absolute requirement in order to support SLOs and SLAs, as well as to investigate causes of, and pin responsibility for, SLO infringements. The same goes for the infrastructure of the cloud, including hardware, OS and driver setup, host scheduling and resource-sharing decisions, etc.

Monitoring data is also indispensable for providing dynamic scaling of resource deployment by supervisory processes. Metrics provided by a monitoring system will enable a provider to understand usage of the system and to know when to scale a sub-system. Metrics should also be offered to users of cloud hosting, so that they may build their own self-management systems, and as a way to foster further trust in, and visibility into, their provisioned services/resources.

GE description

Although monitoring is a crosscutting concern for the GEs, the specific metrics are GE-specific. Each GE defines a set of metrics to be provided by it, some of them raw metrics and some computed from combinations of other data. Whether a metric is raw or computed is an implementation matter for a GE; the user need not know which is which. The figure below outlines a possible implementation of monitoring by a GE.


Image:Implementation_of_monitoring_in_a_GE.jpg
Implementation of monitoring in a GE

Monitoring configurations should include the following dimensions (a hypothetical configuration sketch follows the list):

  • Metrics to monitor
  • Metric configuration
  • Granularity of monitoring
  • Overhead targets
  • Distribution mode (push/pull) for the collected data
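A hypothetical monitoring configuration covering these dimensions might look as follows; all keys and values are illustrative assumptions.

 # Hypothetical monitoring configuration; every key and value is invented.
 monitoring_config = {
     "metrics": ["cpu_utilisation", "db_connections"],   # what to monitor
     "metric_config": {"db_connections": {"scope": "per-instance"}},
     "granularity_seconds": 60,          # sampling granularity
     "overhead_target_pct": 2,           # acceptable monitoring overhead
     "distribution": {
         "mode": "push",                 # push on threshold, pull otherwise
         "push_threshold": {"cpu_utilisation": 0.85},
     },
 }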

Data collection is very much dependent on the specific GE and metric.

The work of monitoring is not finished once the data is collected. The collected data may need to be post-processed before being provided to its users, and user-defined metrics should be offered; this calls for supporting services (e.g., CEP). A storage system is needed to store the metrics. Additionally, data retention rules may apply to some of the data, for business or legal purposes, so that it remains available for audits or failure analysis.

The collected data needs to be distributed to interested parties using a publish/subscribe messaging system supporting both push and pull protocols. Metrics should only be distributed in a push fashion when a user- or system-set threshold is reached. These thresholds should be simple, leaving further processing of the metric data to subscribers.
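A minimal sketch of such threshold-triggered push distribution: subscribers register a simple threshold and are notified only when it is crossed, leaving further processing to the subscriber.

 # Sketch of threshold-triggered push distribution of metrics.
 class MetricBus:
     def __init__(self):
         self.subscribers = []   # (metric, threshold, callback) triples

     def subscribe(self, metric, threshold, callback):
         self.subscribers.append((metric, threshold, callback))

     def publish(self, metric, value):
         # Push only when a subscriber's threshold is reached; any further
         # processing of the metric data is left to the subscriber.
         for m, threshold, callback in self.subscribers:
             if m == metric and value >= threshold:
                 callback(metric, value)

 bus = MetricBus()
 bus.subscribe("cpu_utilisation", 0.85,
               lambda m, v: print(f"ALERT: {m} reached {v}"))
 bus.publish("cpu_utilisation", 0.9)   # crosses the threshold: pushed
 bus.publish("cpu_utilisation", 0.5)   # below the threshold: not pushed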

Finally, since the GE providers are best placed to interpret a GE's monitoring data, a Monitoring and Metering GE may go one step further and provide additional analysis of the collected data, highlighting performance anti-patterns in the data or identifying high-level effects that may follow from the observed phenomena.

Question Marks

We list hereafter a number of questions that remain open. Further discussion on these questions will take place in the coming months.

Security aspects

Very much like monitoring, security is a crosscutting concern of all Generic Enablers and components within the FI-WARE Cloud Hosting chapter. Whereas monitoring mainly deals with aspects “of the moment”, security has additional dimensions to consider, namely pre- and post-execution of any action initiated by actors of the system. The common areas in which to consider aspects of security are:

  • Authentication (AuthN): This is one of the pre-execution dimensions. Users must first be authenticated to carry out various operations upon the resources and services that the Cloud Hosting chapter will offer. For efficient operation by a provider who may have multiple but related services, this authentication should be of the single-sign-on type.
  • Access/Authorization (AuthZ): Again, this is a pre-execution dimension. Although a user may be authenticated, they may not have the credentials to carry out certain actions.
  • Audit & Accounting: This dimension is most closely related to monitoring. Audit and accounting operations take place when decisions and actions are made throughout the lifecycle of a service, and they form a historical record of all operations performed upon services. By exposing as much auditing and monitoring information as possible to the client, the provider instils greater trust in its service.

The post-execution dimensions relate mainly to the decommissioning policies that a provider enforces. One example of such a policy might be that all previous machine images are securely wiped; another, that all audit and accounting logs must be kept for a specific period of time after decommissioning. As a result, policy GEs could be required to enable such policy enforcement.

Other pressing questions that need resolution are:

  • What Security GEs are suitable for integration within the Cloud Hosting chapter?
  • How should this integration occur? What APIs and transports are offered?
  • Are those GEs, or will they be, interchangeable with systems that may be specific and linked to the Cloud Hosting chapter?

Other topics still under discussion

The following is a list of some questions still to be addressed. They are listed here so that the reader knows they have not been ignored.

  • What value-add services beyond basic PaaS (and Object Storage) capability do we plan to provide? E.g., DB, messaging/queuing, etc.
  • What level of integration do we envision between the "Cloud Edge" GE and other GEs?
  • What is the exact division of labor and interfaces between the Cloud Edge Resource Management GE in this chapter, and the Cloud Edge GE in the I2ND chapter?
  • What capabilities of the I2ND GEs, particularly the NetIC or S3C GEs, will be leveraged by our service management layer?
  • What monitoring capabilities do we need to provide, end-to-end?
  • Should we prioritise horizontal scaling (the “cloud” way) over vertical scaling?
  • What is the exact scope of what we are going to provide in the area of Metering, Accounting and Billing, and what capabilities will we (or the Cloud Hosting users) be able to leverage from the Business Framework provided by the Apps/Services Ecosystem and Delivery chapter?
  • There could be a need (e.g. from monitoring) for a complex event processing system. Can we use the CEP GE from the Data/Context Management chapter (presumably yes)?
  • It seems that there will be a need for messaging-based communication. Is a single technology going to be used across all of FI-WARE?


Terms and definitions

This section comprises a summary of terms and definitions introduced in the previous sections. It is intended to establish a vocabulary that will help to carry out discussions internally and with third parties (e.g., Use Case projects in the EU FP7 Future Internet PPP).

  • Infrastructure as a Service (IaaS) -- a model of delivering general-purpose virtual machines (VMs) and associated resources (CPU, memory, disk space, network connectivity) on-demand, typically via a self-service interface and following a pay-per-use pricing model. The virtual machines can be directly accessed and used by the IaaS consumer (e.g., an application developer, an IT provider or a service provider), to easily deploy and manage arbitrary software stacks.
  • Platform as a Service (PaaS) -- an application delivery model in which the clients, typically application developers, follow a specific programming model to develop their applications and/or application components and then deploy them in hosted runtime environments. This model enables fast development and deployment of new applications and components.
  • Virtual Appliances (vApp, also referred to as "service") -- pre-built software solutions, comprised of one or more Virtual Machines that are packaged, updated, maintained and managed as a unit. Virtual appliances are typically packaged in the Open Virtualization Format (OVF), developed by the Distributed Management Task Force (DMTF) standardization body.
  • Key Performance Indicators (KPIs) -- quantifiable metrics reflecting the level of offered service with respect to specific non-functional requirements such as performance, availability, resiliency, etc. A KPI is usually computed as a function of one or more low-level metrics: analytical, quantitative measurements intended to quantify the state of a process, service or system. KPIs may relate either to long-term measures of the service level, where raw metrics are averaged and summarized over a long time scale to guide strategic decisions about service provisioning, or to short-term measures of the service level, triggering proactive optimization.
  • Service Elasticity is the capability of the hosting infrastructure to scale a service up and down on demand. There are two types of elasticity -- vertical (typically of a single VM), implying the ability to add or remove resources to a running VM instance, and horizontal (typically of a clustered multi-VM service), implying the ability to add or remove instances to/from an application cluster, on-demand. Elasticity can be triggered manually by the user, or via an Auto-Scaling framework, providing the capability to define and enforce automated elasticity policies based on application-specific KPIs.
  • Service Level Agreement (SLA) is a legally binding contract between a service provider and a service consumer specifying the terms and conditions of service provisioning and consumption. Specific SLA clauses, called Service Level Objectives (SLOs), define non-functional aspects of service provisioning such as performance, resiliency, high availability, security, maintenance, etc. The SLA also specifies the agreed-upon means for verifying SLA compliance, the customer compensation plan that should be put in effect in case of SLA non-compliance, and the temporal framework that defines the validity of the contract.
  • Cloud Proxy devices are located outside the Cloud. They may correspond to end-user devices (any device the user may use to interact with cloud applications, like a PC or a tablet, but also sensors, displays…) or to a more complex structure like a home gateway, i.e., a special device located in the home network. Cloud proxies provide the ability to host applications or use storage resources located closer to the end user, with the intent of providing an improved user experience.
  • CDMI -- the Cloud Data Management Interface, defined by the Storage Networking Industry Association (SNIA), specifies the functional interface that applications use to create, retrieve, update and delete (CRUD) data elements in the Cloud.
  • Cloud Service Management API -- a RESTful, resource-oriented API accessed via HTTP which uses XML-based representations for information interchange and allows deployment of OVF-based service manifests or manifest fragments (thus enabling incremental deployment). It supports extensions to the Open Virtualization Format (OVF) in order to support advanced Cloud capabilities, and is based on the vCloud specification (published by VMware and submitted to the DMTF for consideration).
  • OCCI is a protocol and API for the management of cloud service resources. It comprises a set of open, community-led specifications delivered through the Open Grid Forum. OCCI was originally initiated to create a remote management API for IaaS-model-based services. It has since evolved into a flexible API with a strong focus on integration, portability, interoperability and innovation, while still offering a high degree of extensibility.


References

[VMWare 09]

VMware. vCloud API Programming Guide, Version 0.8.0. Online resource, 2009. http://communities.vmware.com/static/vcloudapi/vCloud API Programming Guide v0.8.pdf

[TCloud 10]

Telefónica. TCloud API Specification, Version 0.9.0. Online resource, 2010. http://www.tid.es/files/doc/apis/TCloud API Spec v0.9.pdf

[OVF 08]

DMTF. Open Virtualization Format Specification. Specification DSP0243 v1.0.0d. Technical report, Distributed Management Task Force, Sep 2008. http://www.dmtf.org

[TCloudServer]

Tcloud-server implementation, http://claudia.morfeo-project.org/wiki/index.php/TCloud_Server

[ServiceManifest 10]

DMTF (Distributed Management Task Force) webpage. Online resource, 2010. http://www.dmtf.org. Service manifest definition based on DMTF's OVF.

[Cáceres et al. 10]

J. Cáceres, L. M. Vaquero, L. Rodero-Merino, A. Polo, and J. J. Hierro. Service Scalability over the Cloud. In B. Furht and A. Escalante, editors, Handbook of Cloud Computing, pages 357–377. Springer US, 2010. doi:10.1007/978-1-4419-6524-0_15.

[EC2SLA]

Amazon EC2 SLA

http://aws.amazon.com/ec2-sla/

[S3SLA]

Amazon S3 SLA

http://aws.amazon.com/s3-sla/

[GoGridSLA]

GoGrid SLA

http://www.gogrid.com/legal/sla.php

[RackspaceSLA]

Rackspace SLA

http://www.rackspace.com/cloud/legal/sla/

[GoogleAppsSLA]

Google Apps SLA

http://www.google.com/apps/intl/en/terms/sla.html

[AzureSLA]

Microsoft Azure SLA

http://www.microsoft.com/windowsazure/sla/

[Reservoir-Computer2011]

Benny Rochwerger, David Breitgand, A. Epstein, D. Hadas, I. Loy, Kenneth Nagin, J. Tordsson, C. Ragusa, M. Villari, Stuart Clayman, Eliezer Levy, A. Maraschini, Philippe Massonet, H. Muñoz, G. Tofetti: Reservoir - When One Cloud Is Not Enough. IEEE Computer 44(3): 44-51 (2011)

[Reservoir-Architecture2009]

Benny Rochwerger, David Breitgand, Eliezer Levy, Alex Galis, Kenneth Nagin, Ignacio Martín Llorente, Rubén S. Montero, Yaron Wolfsthal, Erik Elmroth, Juan A. Cáceres, Muli Ben-Yehuda, Wolfgang Emmerich, Fermín Galán: The Reservoir model and architecture for open federated cloud computing. IBM Journal of Research and Development 53(4): 4 (2009)

[SchadDJQ-VLDB10]

Jörg Schad, Jens Dittrich, Jorge-Arnulfo Quiané-Ruiz. Runtime Measurements in the Cloud: Observing, Analyzing, and Reducing Variance. In Proceedings of the VLDB Endowment, Vol. 3, pages 460–471, 2010.

[DejunPC-2011]

Jiang Dejun, Guillaume Pierre, Chi-Hung Chi. Resource Provisioning of Web Applications in Heterogeneous Clouds. In Proceedings of the 2nd USENIX Conference on Web Application Development, 2011.

[VMware-CloudArch1.0]

VMware. Architecting a vCloud. Technical Paper, Version 1.0.

[AppSpeed]

VMware. vCenter AppSpeed User's Guide, AppSpeed Server 1.5.

[CloudFormation]

Amazon CloudFormation. http://aws.amazon.com/cloudformation/

[BenYehuda-ICAC2009]

Muli Ben-Yehuda, David Breitgand, Michael Factor, Hillel Kolodner, Valentin Kravtsov, Dan Pelleg: NAP: a building block for remediating performance bottlenecks via black box network analysis. ICAC 2009: 179–188.

[Chapman 10]

C. Chapman, W. Emmerich, F. G. Márquez, S. Clayman, and A. Galis. Software architecture definition for on-demand cloud provisioning. In HPDC ’10: Proceedings of the 19th ACM International Symposium on High Performance Distributed Computing, pages 61–72, New York, NY, USA, 2010. ACM.

[CDMI 11]

SNIA. Cloud Data Management Interface, Version 1.0.1h (Working Draft), March 30, 2011. http://cdmi.sniacloud.com/

[REST]

"Representational State Transfer” - http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm

[RESTful Web 07]

Richardson, Leonard and Sam Ruby, RESTful Web Services, O'Reilly, 2007.

[OCCI_GCDM]

A. Edmonds, T. Metsch, and A. Papaspyrou, “Open Cloud Computing Interface in Data Management-related Setups,” Springer Grid and Cloud Database Management, pp. 1–27, Jul. 2011.

[Vaquero et al. 11]

L. M. Vaquero, J. Caceres and D. Morán, The Challenge of Service Level Scalability for the Cloud, International Journal of Cloud Applications and Computing, Volume 1, Number 1, 2011, pp 34-44
