Computing in the cloud

21.12.2006

Let's start off with a look at what the system is, how it works, and a brief history of it. EC2 is not, in fact, the first service of this type that Amazon has launched; it's an outgrowth of an existing platform called Amazon Web Services. Back in March 2006, Amazon released its Simple Storage Service (S3), online metered storage that costs 15 cents per gigabyte per month of storage used, plus 20 cents per gigabyte of data transferred. It exposes standard REST and SOAP interfaces.
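To see how the metered model works in practice, here's a back-of-the-envelope sketch of a monthly S3 bill using the two rates quoted above (the function name and the 50GB/10GB figures are illustrative, not from Amazon):

```python
# Rates quoted in the article: $0.15 per GB-month stored,
# $0.20 per GB transferred.
STORAGE_RATE = 0.15
TRANSFER_RATE = 0.20

def monthly_s3_cost(gb_stored, gb_transferred):
    """Estimated monthly bill in dollars under the metered pricing."""
    return gb_stored * STORAGE_RATE + gb_transferred * TRANSFER_RATE

# Example: 50 GB kept in storage and 10 GB served during the month.
print(monthly_s3_cost(50, 10))  # -> 9.5 (i.e. $9.50)
```

The point of the model is that there's no fixed fee: a developer storing a few megabytes pays pennies, while heavy users scale their bill linearly with usage.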

In July 2006, Amazon followed with the Simple Queue Service (SQS), a scalable hosted queue that stores messages as they travel between computers. It's designed to let developers easily move data between distributed application components while ensuring that messages aren't lost.

It can be used to transfer messages even when individual components aren't currently available: once a component comes back online, the queued messages are delivered to it. Again, it's a metered model; costs are 10 cents per 1,000 messages sent, plus 20 cents per gigabyte of data transferred. Like S3, it uses REST and SOAP interfaces.
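The decoupling idea described above can be sketched with an in-process queue. To be clear, this is not the SQS API (which is a hosted service accessed over REST or SOAP); Python's standard-library queue is used here purely to show how messages wait safely until a consumer becomes available:

```python
import queue
import threading

q = queue.Queue()

# Producer side: enqueue messages while no consumer is running.
# In SQS terms, the sending component doesn't need the receiver
# to be available at send time.
for i in range(3):
    q.put(f"message-{i}")

received = []

def consumer():
    # Consumer side: drain whatever accumulated in the queue.
    while not q.empty():
        received.append(q.get())
        q.task_done()

# The consuming component "comes online" later and picks up the backlog.
t = threading.Thread(target=consumer)
t.start()
t.join()
print(received)  # -> ['message-0', 'message-1', 'message-2']
```

The queue absorbs the timing mismatch between components, which is exactly the guarantee SQS sells: messages sent while a receiver is down aren't lost, just deferred.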

In both instances, the technology wasn't developed from scratch. Instead, Amazon used its own internal infrastructure and technologies and made them available to developers.

EC2 continues in that tradition. Put in the simplest terms, Amazon rents out virtual servers, which it calls "instances," from its grid-based data centers. Each instance has the approximate power of a server with a 1.7GHz Xeon processor, 1.75GB of RAM, a 160GB hard drive, and a 250Mbit/sec Internet connection that can burst up to 1Gbit/sec.