Xen hypervisors on the hosts via SSH for virtual machine manipulation.
• OS Farm: OS Farm [18] is a service for generating and storing Xen VM images and Virtual Appliances. We use OS Farm as a tool for virtual machine template management.

This design philosophy brings scalability on the one hand and preserves the autonomy of the data centers on the other. The Cumulus frontend does not depend on a specific Local Virtualization Management System, and each data center inside the Cloud can define its own resource management policies, such as IP address leasing or virtual machine resource allocation.

Cumulus frontend: re-engineering of the Globus virtual workspace service

The Globus virtual workspace service contains two pieces of software:
• the workspace service frontend,
• the workspace control agent.
The workspace service frontend receives the virtual machine requirements and distributes them to the various backend servers. The workspace control agent is installed on each backend server and deploys the workspaces there.

In principle, the Globus virtual workspace service could be employed as a Cloud service frontend. However, we identified some limitations:
• Data centers in general run their own Local Virtualization Management Systems (LVMS), such as OpenNEbula or VMware Virtual Infrastructure, to manage their local infrastructures. The Globus virtual workspace service, however, demands that the workspace control agents be installed directly on the backend servers. This usage scenario lacks generality.
• The Globus virtual workspace service provides three network settings: the AcceptAndConfigure mode, the AllocateAndConfigure mode and the Advisory mode. In general, users do not care about the network configuration of their virtual machines; they only need the virtual machine's IP address in order to access it. We therefore suggest that network IP address leasing be associated with virtual machine resource allocation inside the Cloud: the network setting can be provided by the backends of a Cloud service and thus remain transparent to end users.

We re-engineered the Globus virtual workspace service to adapt it as the Cumulus frontend service. This comprised the following steps:
• Remove the control agent, so that the Globus virtual workspace service talks directly to the virtual machine hypervisors installed on the backend servers.
• Extend the Globus frontend service to work with various virtual machine hypervisors and LVMS, such as OpenNEbula, VMware Server and VMware Virtual Infrastructure (see the sketch later in this section).
• Support a new networking solution, the forward mode: users do not need to input any network configuration information; the backend servers allocate the IP address for the virtual machine and return it to the users.

OpenNEbula as Local Virtualization Management System

OpenNEbula is used to manage our distributed blade servers and provides the resources for virtual machine deployment. Currently OpenNEbula employs NIS (Network Information System) to manage a common user system and NFS (Network File System) for shared directory management. However, it is widely recognized that NIS has a major security flaw: it leaves the users' password file accessible to anyone on the network. To employ OpenNEbula in a more professional way, we therefore combined it with modern, secure infrastructure solutions such as LDAP [13] and the Oracle Cluster File System [30].
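To make the design described above more concrete, the following minimal Python sketch illustrates how a frontend that no longer relies on workspace control agents might delegate deployment to whatever Local Virtualization Management System a data center runs, and return the backend-allocated IP address to the user (the forward mode). All class, method and module names here are hypothetical illustrations under these assumptions, not the actual Cumulus code.

# Minimal, hypothetical sketch of the Cumulus frontend's backend abstraction.
# Names (LVMSBackend, OpenNebulaBackend, CumulusFrontend) are illustrative only.

from abc import ABC, abstractmethod


class DeployedVM:
    """Result of a deployment: the backend-allocated ('forward mode') IP address."""
    def __init__(self, vm_id: str, ip_address: str):
        self.vm_id = vm_id
        self.ip_address = ip_address


class LVMSBackend(ABC):
    """Interface the frontend expects from any Local Virtualization Management System."""

    @abstractmethod
    def deploy(self, image_ref: str, cpu: int, memory_mb: int) -> DeployedVM:
        """Start a VM from the given image and return its id and IP address."""


class OpenNebulaBackend(LVMSBackend):
    """One possible backend: a data center managed by OpenNEbula.

    How the VM is actually created (for example via OpenNEbula's own tools)
    is a site-specific detail and is only stubbed here.
    """

    def deploy(self, image_ref: str, cpu: int, memory_mb: int) -> DeployedVM:
        vm_id = self._create_vm(image_ref, cpu, memory_mb)  # site-specific deployment
        ip = self._wait_for_ip(vm_id)                       # IP lease handled by the backend
        return DeployedVM(vm_id, ip)

    def _create_vm(self, image_ref, cpu, memory_mb):
        raise NotImplementedError("invoke the local OpenNEbula installation here")

    def _wait_for_ip(self, vm_id):
        raise NotImplementedError("query the local OpenNEbula installation here")


class CumulusFrontend:
    """Receives VM requirements and forwards them to the configured backend."""

    def __init__(self, backend: LVMSBackend):
        self.backend = backend

    def create_workspace(self, image_ref: str, cpu: int = 1, memory_mb: int = 512) -> DeployedVM:
        # The user supplies no network configuration; the backend allocates the
        # IP address and the frontend simply forwards it back (forward mode).
        return self.backend.deploy(image_ref, cpu, memory_mb)

Because the frontend only depends on the small LVMSBackend interface, a data center could plug in a VMware Server or VMware Virtual Infrastructure backend in the same way, while keeping its own resource management and IP leasing policies.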
OS Farm for virtual machine image

We based our virtual machine image repository on the OS Farm service. OS Farm offers two interfaces to Cloud users:
• a Web interface where users can input the parameters for virtual machine image construction, and
• an HTTP service which can be accessed, for example, via wget:
wget …&filetype=.tar&group=core&group=base
We have built an OS Farm client and embedded it into the Cumulus frontend. When users require virtual machine images, the Cumulus frontend service invokes a wget command to generate the images for them. This implementation frees users from having to generate and submit virtual machine images manually.

Networking solution

We provide a new networking solution: the "forward" mode. Users do not have to specify any network requirements. OpenNEbula starts the Xen virtual machine images, allocates dynamic IP addresses for the virtual machines and then returns them to the users. In addition, the backend servers can implement more complex network management policies and still simply return the resulting IP addresses to the users. Because this networking solution is transparent to users, it is named the "forward" mode.

Access to the Cumulus service

The Cumulus service can be accessed in the following ways:
• via the Globus virtual workspace service client or the Nimbus cloudkit client,
• with Grid computing workbenches or existing Grid portals, such as gEclipse [10], the Open Grid Computing Environment [15] and GridShell [16].
The gEclipse project [9] is an international research effort which aims to build an integrated workbench framework to leverage the power of existing Grid infrastructures. The gEclipse project plans to extend itself to access our Cumulus Cloud service by the following steps:
1. create a VO for the Cumulus user,
2. provide a Grid credential,
3. browse the available virtual machine images,
4. launch the desired virtual machines,
5. access the virtual machine.
We also expect to integrate OS Farm into the gEclipse platform using its generic Grid connection concept. In the gEclipse project, a connection allows the user to link to a local or remote file system; to establish a connection, users need only provide the necessary information.

Typical use cases

In the following we present several typical use cases and patterns.
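A recurring step in these use cases is obtaining a virtual machine image from OS Farm. The exact OS Farm URL is not reproduced above, so the endpoint and the "name" parameter in the following sketch are assumptions; only the filetype and group parameters mirror the wget example given earlier. The sketch shows how a client, such as the OS Farm client embedded in the Cumulus frontend, might issue the same request programmatically instead of shelling out to wget.

# Hypothetical sketch of an OS Farm image request, mirroring the wget example above.
# OSFARM_URL and the 'name' parameter are assumed placeholders, not the real endpoint.

import urllib.parse
import urllib.request

OSFARM_URL = "http://osfarm.example.org/create"  # placeholder for the real OS Farm service URL


def fetch_image(name: str, dest_path: str) -> None:
    """Request a Xen VM image from OS Farm and store it locally."""
    params = [
        ("name", name),        # assumed parameter identifying the requested image
        ("filetype", ".tar"),  # from the wget example: &filetype=.tar
        ("group", "core"),     # from the wget example: &group=core
        ("group", "base"),     # from the wget example: &group=base
    ]
    url = OSFARM_URL + "?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url) as response, open(dest_path, "wb") as out:
        out.write(response.read())


if __name__ == "__main__":
    fetch_image("slc4_base", "/tmp/slc4_base.tar")  # image name chosen purely for illustration

In the Cumulus frontend the equivalent request is issued by invoking wget, which keeps the embedded OS Farm client extremely simple.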