Elastic computing has changed the way I think about server-side web development. The idea that an application can scale itself automatically as more resources are needed is extremely powerful. Nouncer uses its own proprietary elastic technologies to allow every component to fork into multiple instances in order to accommodate the need for more computing power. This technology is at the core of what makes Nouncer a great platform for the volume and frequency of user interaction associated with micro-blogging.
A bit of an introduction…
A few months ago Amazon announced their new web service called EC2, which stands for Elastic Compute Cloud. The idea is simple but powerful: you use an API call to “create” a server and install your software on it. Everything works like a real server, and if you need more power, you call the API again and request another server. If you no longer need the extra power, you shut the extra servers down (with another API call). You pay only for the actual time you used each “created” server.
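That create-use-pay lifecycle can be sketched with a toy client. Everything here is hypothetical, including the `ElasticCloud` class and its method names; the real EC2 API works through signed web service calls and bills differently. This is only meant to illustrate the pay-as-you-go idea:

```python
import uuid

class ElasticCloud:
    """Toy stand-in for an elastic-computing service such as EC2.
    Tracks running instances and the cost they accrue per hour."""

    def __init__(self, rate_per_hour=0.10):
        self.rate = rate_per_hour
        self.running = {}      # instance id -> hours used so far
        self.billed = 0.0

    def run_instance(self):
        """'Create' a new server with an API call and return its id."""
        instance_id = uuid.uuid4().hex[:8]
        self.running[instance_id] = 0
        return instance_id

    def tick(self, hours=1):
        """Simulate the passage of time; every running server accrues cost."""
        for instance_id in self.running:
            self.running[instance_id] += hours
        self.billed += hours * self.rate * len(self.running)

    def terminate(self, instance_id):
        """Shut an extra server down; it stops accruing cost immediately."""
        del self.running[instance_id]

cloud = ElasticCloud(rate_per_hour=0.10)
web = cloud.run_instance()      # baseline server
cloud.tick(10)                  # ten quiet hours, one server billed
extra = cloud.run_instance()    # traffic spike: request another server
cloud.tick(2)                   # two busy hours, both servers billed
cloud.terminate(extra)          # spike over: release the extra server
cloud.tick(10)                  # back to paying for one server only
```

The point of the sketch is the last three comments: capacity is requested and released at runtime, and the bill follows actual usage instead of peak provisioning.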
Amazon did not invent the concept, but they did make it trivial to use, and with their reputation on the line, they are committed to making it a reliable and competitive platform. Elastic computing is the result of recent improvements in the area of virtualization, which is the execution of multiple operating system instances on a single piece of hardware.
Imagine your desktop at home running Windows at the same time it is running Linux. Desktop virtualization is done in the form of one operating system hosting another (something Mac users who run Windows inside OS X are very familiar with). Server virtualization is done by running a lightweight virtualization operating system (usually Linux-based) that provides no functionality beyond hosting other platforms. Virtualization has reached a certain maturity lately thanks to significant improvements in hardware, mostly built-in CPU support for sharing the same hardware among multiple operating systems.
Another related concept is clustering, in which multiple servers are connected together to act as one. Future services combining clustering with elastic resources could produce powerful results. Imagine being able to “upgrade” your website's virtual hardware with an API call instead of just allocating more of it. Of course, adding more CPUs this way only helps if your application can actually take advantage of them, which is not guaranteed.
Many developers today are moving away from multithreaded development due to its high cost of ownership (and because most developers are not good at it). But being able to simply make the server stronger, as opposed to running multiple instances of your application, will allow quicker development.
Amazon’s commoditization of elastic computing will have a significant impact on the way web services are developed. One obvious application is memory caching systems such as memcached. The idea behind them is that database access is slow, and at the same time, users on average tend to request the same data. Combine the two and you get a straightforward optimization in the form of a memory cache – a service sitting between the web server and the database.
On each user page request, the web server checks the cache to see if the data is there; if not, it goes to the database, grabs the data, and stores it in the cache before serving it back to the user. The next time someone asks for the same data, it is already in the cache, which is much faster than the database. The last piece of the cache puzzle is the size limit.
Since computers have limited memory, the cache must not grow beyond its allocated resources, and so it retains only the most frequently used data when it runs out of space. Memcached allows running multiple cache servers, each storing a different subset of the data, thus overcoming the memory limitation of a single installation.
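The read path described above is often called cache-aside, and it fits in a few lines. In this sketch the “database” and “cache” are plain dicts standing in for MySQL and memcached; the function and key names are illustrative, not part of any real client library:

```python
# Cache-aside read path: check the cache first, fall back to the
# database on a miss, and populate the cache for the next reader.

database = {"user:1": "alice", "user:2": "bob"}   # slow, authoritative store
cache = {}                                        # fast, volatile store
stats = {"hits": 0, "misses": 0}

def get(key):
    if key in cache:              # cache hit: skip the database entirely
        stats["hits"] += 1
        return cache[key]
    stats["misses"] += 1
    value = database[key]         # slow path: the real query goes here
    cache[key] = value            # store it so the next reader gets a hit
    return value

get("user:1")   # miss: fetched from the database, then cached
get("user:1")   # hit: served straight from memory
```

After the first request the data lives in the cache, so every subsequent reader of the same key never touches the database at all – which is exactly why the pattern pays off when many users request the same data.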
Memcached is a brilliant and simple solution used by the world’s largest websites. Combining it with services such as EC2 is the next logical step – dynamically adding memcached servers as you need them. But this is not enough to create a truly elastic web server framework. Databases can be scaled using various tools such as replication, and the example above demonstrates how the elastic architecture can be applied to the cache layer, but what still needs to be implemented is native elastic support within the web server itself to allow it to grow.
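How do multiple cache servers each store a different subset of the data? The client hashes each key to pick a server, so every web server independently agrees on where a key lives with no coordination. A minimal sketch of that idea, with made-up server addresses (real memcached clients typically use consistent hashing, which remaps far fewer keys when a server is added or removed than the simple modulo shown here):

```python
import hashlib

# Hypothetical pool of cache servers; with an elastic service you
# could grow this list at runtime by requesting more instances.
servers = ["cache1:11211", "cache2:11211", "cache3:11211"]

def server_for(key):
    """Map a key to one cache server by hashing the key.

    Simple modulo sharding: deterministic, so every client computes
    the same answer, and keys spread roughly evenly across servers.
    """
    digest = hashlib.md5(key.encode("utf-8")).digest()
    index = int.from_bytes(digest[:4], "big") % len(servers)
    return servers[index]

# All clients agree on which server owns which key.
assignments = {k: server_for(k) for k in ("user:1", "user:2", "timeline:9")}
```

The drawback of plain modulo sharding is that changing `len(servers)` reshuffles most keys, effectively flushing the cache; consistent hashing exists precisely to make the elastic case – adding and removing servers on the fly – cheap.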
There are many great commercial products out there that help achieve just that, such as BIG-IP, but they are expensive and force you to pay upfront for the ability to scale. That is not what elastic is all about – a pay-as-you-go approach to resources. There are open source tools, but they are not yet fully integrated with on-the-fly elastic resources.
I expect many great solutions to emerge from the open source community over the next few years, taking advantage of elastic services. I also expect EC2 clones to emerge from many of the major vendors. Web site owners today can pay for the exact storage and bandwidth they use, and with EC2, for the computing power they use. What all this boils down to is the point where software and hardware mature enough for site owners to simply pay on a per-user, per-hour basis for a particular configuration.