In my previous blog post I looked forward to VMworld 2014. Last week during VMworld I attended several sessions covering the topics I discussed in that post. There is a lot of detail I could go into, but for now I just want to focus on how containers and Project Fargo fit together. Or to put it in geek terms: Fargo + Docker = elasticity.

A quick refresher: Project Fargo is a technology which allows you to clone a VM in a running state. At VMworld I learned that it takes about 500 milliseconds to spin up a new VM, OS booted and everything. Some kind of guest customization is used to change the hostname, MAC address and IP. The "root VM" is in a frozen state: it is booted and may even have an application running, but its CPU is frozen and its RAM is read-only. All child VMs use a copy-on-write mechanism for their memory, so the root VM's RAM is never overwritten.
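
The semantics are much like a process fork, lifted to the hypervisor level. As a rough analogy (this is not Fargo itself, just a minimal Python sketch of copy-on-write cloning using os.fork()): the child starts as a logical copy of the parent, memory pages stay shared until one side writes to them, and only the modified pages get a private copy.

```python
import os

# Parent "root" state: built once, then shared read-only with every child,
# analogous to the frozen root VM's RAM.
big_state = bytearray(100 * 1024 * 1024)  # 100 MB

pid = os.fork()  # near-instant: no memory is actually copied yet (copy-on-write)
if pid == 0:
    # Child: sees the parent's memory, but a write triggers a private page copy,
    # just like a forked VM diverging from the frozen root VM.
    big_state[0] = 1  # only the touched page is duplicated, not all 100 MB
    os._exit(0)
else:
    os.waitpid(pid, 0)
```

That is why forking is so cheap: the cost is proportional to what the child changes, not to the size of the root VM.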

Knowing all this, one can easily see a lot of problems for any application that might be running in the root VM. Whenever a clone comes online, the MAC address, IP and hostname will change, which will probably make the application crash. So when I said in my last blog post that it makes perfect sense to use containers in forked VMs, I should actually have said that there is no sense in running non-containerized applications inside a forked VM. Most applications will stop working if the underlying machine identity completely changes. Using containers fixes this: after all, the application in the container is blissfully unaware of what's going on with the actual operating system that runs the container.
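
To make the failure mode concrete, here is a minimal sketch (hypothetical application code, purely for illustration) of the pattern that breaks: a service that resolves and caches its own identity at startup. After a fork, guest customization has already changed the hostname and IP, but the stale values live on inside the running process.

```python
import socket

# Resolved once at startup -- typical of apps that register themselves
# with a load balancer, a cluster, or a license server.
CACHED_HOSTNAME = socket.gethostname()
CACHED_IP = socket.gethostbyname(CACHED_HOSTNAME)

def register_with_cluster():
    # In a forked child VM this still advertises the root VM's identity,
    # so peers end up contacting the wrong (or a frozen) machine.
    return {"host": CACHED_HOSTNAME, "ip": CACHED_IP}
```

A containerized process sidesteps this: the hostname and network identity it sees belong to the container, not to the VM that was just re-customized underneath it.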

For me this is a perfect example of two technologies becoming available at the right time and place, enabling a whole new set of possibilities that wouldn't exist with just one of them. One of the things that Fargo + containers makes possible is a truly elastic datacenter.

There has been a lot of talk about elasticity, auto scaling and scale-out over the last couple of years. The problem has always been that you needed to provision a VM before you could start a new instance of your application, so auto scaling took way too much time to scale out in real time. The only auto scaling I have seen was really more like scheduled scaling: either software predicts the load, or someone manually enters a schedule based on planned campaigns, for example. Automated real-time scale-out is very rare at the moment, and I haven't even touched on scale-down yet. But what if it takes just 500 ms to spin up a new machine, and then maybe another couple of milliseconds to spin up a container inside that machine? You could spin up a new machine for every request that comes in, or whenever the load on the existing instances reaches a certain limit, with only a very short delay in answering the request. And the best part might be that once there are no more requests to handle, you can remove the VM. No disk space wasted, no RAM and CPU cycles spent on idle machines. You run the bare minimum to keep your application running and spin up more machines whenever the load requires it. That is the real elasticity we have been talking about for years!
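
A minimal sketch of what such a control loop could look like. The fork_vm(), destroy_vm() and current_load() helpers are hypothetical placeholders for whatever API Fargo eventually exposes; the point is how trivial the scaling logic becomes once a clone costs only ~500 ms:

```python
import time

SCALE_UP_LOAD = 0.8    # fork a new clone above this average load
SCALE_DOWN_LOAD = 0.2  # retire a clone below this average load
MIN_INSTANCES = 1

def autoscale(root_vm, fork_vm, destroy_vm, current_load):
    """Naive elastic control loop.

    fork_vm(root_vm)   -> new running clone (hypothetical; ~500 ms with Fargo)
    destroy_vm(vm)     -> removes an idle clone (hypothetical)
    current_load(vms)  -> average load across instances, 0.0 .. 1.0 (hypothetical)
    """
    instances = [fork_vm(root_vm)]
    while True:
        load = current_load(instances)
        if load > SCALE_UP_LOAD:
            instances.append(fork_vm(root_vm))   # scale out in well under a second
        elif load < SCALE_DOWN_LOAD and len(instances) > MIN_INSTANCES:
            destroy_vm(instances.pop())          # scale down: no idle VMs burning RAM and CPU
        time.sleep(1)  # a real implementation would react to events, not poll
```

With provisioning this cheap, the interesting engineering question shifts from "how do we predict load?" to simply "what thresholds do we react to?".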

I think that once Fargo is released in a future ESX version we will see a whole lot of new solutions we haven't even thought about yet. I also think the number of very small Linux distributions will increase. Currently there is CoreOS, which is built solely to run clustered Docker applications. CoreOS may be the first, but I predict it won't be the last. The same goes for schedulers and resource managers like Kubernetes and Mesos: they will mature into very usable products, and a whole lot of new solutions targeting different markets and use cases will appear as well. To summarize: virtualization was just the beginning. Cool times ahead when you're working in IT.

Christiaan Roeleveld, Virtualization Consultant
