Long before clouds were popular (remember when they weren't?), my colleagues Kate Keahey and Tim Freeman started work on their workspace service, a system for on-demand creation and management of virtual machines on remote computing systems. They now have an implementation that interfaces both to clusters running conventional schedulers and to Amazon EC2. It's distributed as part of the Globus software, or you can download it separately.
Kate and Tim have recently established a deployment of the workspace service in the Computation Institute at U.Chicago and Argonne. With a nod to the new cloud meme, they've named it Nimbus. They say:
The University of Chicago Science Cloud, codenamed "Nimbus", is a web service that delivers compute capacity in the cloud for scientific communities. The simple Nimbus client lets you obtain customized compute nodes (which we call "workspaces") quickly, easily, and in ways that can be fully automated, and gives you full control over them. Using the Nimbus cloud you can request the exact compute capability you currently need for your application and scale it up or down as your needs dictate.
Nimbus provides compute capability in the form of Xen virtual machines (VMs) that are deployed on physical nodes of the University of Chicago TeraPort cluster using the workspace service. We currently make 16 nodes of the TeraPort cluster available for cloud computing. Nimbus is available to members of the scientific community who want to run in the cloud. To obtain access you will need to provide a justification (a few sentences explaining your science project) and a valid grid credential (if you don't have a credential, email us; we can help). Based on the project, you will be given an allocation on the cloud. Send your requests, demands, and cries of anguish to [email protected] (for cries of anguish, MP3 format is acceptable).
In a typical session you will make a request to deploy a workspace based on a specified VM image. You can either use one of the VM images already available on the cloud (we provide a command that shows you what's already there) or upload your own VM image. On deployment, the image will be configured with an ssh public key you provide; that way, once the workspace is deployed, you will be able to ssh into it and configure it further, upload data, or run your applications. Have fun!
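To make that flow concrete, here is a minimal sketch of such a session, driven from Python. The client path and flags (--list, --run, --name, --hours) are assumptions patterned on the Nimbus cloud client of that era, not a verbatim transcript; consult the Nimbus documentation for the exact interface.

    #!/usr/bin/env python3
    # Minimal sketch of a typical Nimbus workspace session.
    # ASSUMPTIONS: the cloud client is installed locally as cloud-client.sh,
    # and the flags below are illustrative, not authoritative.
    import subprocess

    CLIENT = "./bin/cloud-client.sh"   # path to the cloud client (assumption)
    IMAGE = "my-vm-image"              # an image already on the cloud, or one you uploaded
    HOURS = "2"                        # how long to hold the workspace

    def client(*args):
        """Invoke the cloud client and return whatever it prints."""
        done = subprocess.run([CLIENT, *args], capture_output=True, text=True, check=True)
        return done.stdout

    # 1. See which VM images are already available on the cloud.
    print(client("--list"))

    # 2. Deploy a workspace from the chosen image. On deployment the image
    #    is configured with your ssh public key.
    print(client("--run", "--name", IMAGE, "--hours", HOURS))

    # 3. Once the client reports the workspace's address, ssh in to
    #    configure it further, upload data, or run your applications:
    #    ssh root@<workspace-address>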
So I'm having a tough time grasping what distinguishes Cloud Computing and why there is so much hype around it. What you just described above reminds me of something Larry Ellison was pitching in the mid-90s, except he called it "The Network Computer": http://en.wikipedia.org/wiki/Network_computer. Essentially a thin client running off of a hosted system, whether in your own organization or in a data centre somewhere on the web. All your applications sat in the data centre and used that data centre's compute power. The concept of Cloud Computing is the same as what Larry was pitching years ago, except it has become easier to do.
Why has it become easier to do? Is it that the expertise required has dropped and the tools and bandwidth are better now? What really is the difference between the database in our local data centre and one hosted miles away? A longer cable, perhaps?
The best definition I have found is a system that dynamically re-provisions resources based upon workload and can handle service peaks and troughs. Otherwise Cloud Computing is just hosted applications, something that's been around for a while.
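To make that definition concrete, here is a minimal sketch of the feedback loop such a system runs: watch a load metric and grow or shrink the pool of provisioned nodes as it crosses thresholds. The provision/release hooks are hypothetical stand-ins, not a real client library, and the load metric is stubbed so the sketch runs as-is.

    import random
    import time

    # Hypothetical hooks into a provisioning API (e.g. the workspace
    # service or EC2). These names are illustrative stand-ins only.
    def current_load():
        return random.random()           # stand-in for a real utilization metric (0.0-1.0)

    def provision_node():
        print("provisioning a new node")

    def release_node():
        print("releasing a node")

    HIGH, LOW = 0.80, 0.30               # utilization thresholds for scaling up/down
    nodes = 1                            # start with a single node

    for _ in range(10):                  # a real controller would loop forever
        load = current_load()
        if load > HIGH:                  # service peak: add capacity
            provision_node()
            nodes += 1
        elif load < LOW and nodes > 1:   # trough: give capacity back
            release_node()
            nodes -= 1
        print("load=%.2f nodes=%d" % (load, nodes))
        time.sleep(1)                    # re-evaluate periodically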
Possibly you could provide a better explanation than the one I just gave.
Wikipedia butchers cloud computing: http://en.wikipedia.org/wiki/Cloud_computing
Posted by: Chris Vaughan | April 17, 2008 at 07:03 AM