What Amazon is not telling you about AWS

Photo credit: http://www.freeimages.com/photo/server-at-night-1199726

Amazon EC2 is a wonderful cloud service, as it provides elastic compute at a reasonable cost. However, this is accomplished by sharing physical servers and other compute resources among many different users. Hardware virtualization is not a new technology; IBM mainframes had it more than 50 years ago [1].

What’s new here is open access to it over the public internet and a pay-as-you-use pricing scheme, as earlier generations of data centers were confined to corporate or academic worlds. However, sharing any resource publicly, whether it is a highway or a computer, results in traffic jams. In a public cloud, this may mean unexpected delays for your compute jobs.

It is well known [2] that there are performance differences between private and public clouds, among which the following stand out:

  1. In a public cloud, users generally don’t know the exact physical configuration of the machines that their jobs are running on.
  2. Users in a public cloud often share their hardware with other users whose job profiles are unknown, which leads to noisy-neighbor issues [3].
  3. Users often don’t know which VM is right for their jobs, and thus end up either under- or over-provisioning their cloud servers. The former results in slow performance, while the latter wastes money.
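One practical way to surface the noisy-neighbor problem (item 2) is to time the same fixed CPU-bound job repeatedly on a VM and look at the spread of the timings: on dedicated hardware the runs cluster tightly, while heavy contention from co-located tenants tends to show up as jitter. The sketch below is a minimal, hedged illustration of that idea in plain Python; it is not an AWS tool, and the workload size and run count are arbitrary assumptions you would tune for a real machine.

```python
import statistics
import time


def cpu_workload(n=2_000_000):
    """Fixed CPU-bound job: sum of squares over a constant range."""
    return sum(i * i for i in range(n))


def measure_variability(runs=5):
    """Time the same workload several times and summarize the spread.

    Returns (mean_seconds, relative_jitter), where relative_jitter is
    (max - min) / mean across the runs. A large value on a cloud VM
    can hint at contention from other tenants, though it can also come
    from frequency scaling or other local effects.
    """
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        cpu_workload()
        timings.append(time.perf_counter() - start)
    mean = statistics.mean(timings)
    jitter = (max(timings) - min(timings)) / mean
    return mean, jitter


if __name__ == "__main__":
    mean, jitter = measure_variability()
    print(f"mean: {mean:.3f}s  relative jitter: {jitter:.1%}")
```

Running this at different times of day on the same instance type gives a rough, do-it-yourself picture of how consistent the VM's performance really is.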

So given these three problems, is there any solution, or must users settle for sub-optimal performance whenever they use a public cloud? We will explore answers in the next blog post.

References:

[1] https://en.wikipedia.org/wiki/Timeline_of_virtualization_development

[2] http://www.rackspace.com/blog/hk/2014/06/19/understanding-performance-differences-between-private-and-public-cloud/

[3] http://www.liquidweb.com/blog/index.php/why-aws-is-bad-for-small-organizations-and-users/