
The sorry state of server virtualization? Really @Gigaom?

Recently Gigaom published an article called The sorry state of server utilization and the impending post-hypervisor era. While I hate sending traffic to such a blatantly hit-targeted article, I need to finally call out the writers at Gigaom, a site I used to really enjoy reading… and one I will officially stop visiting as a matter of habit starting now.  At least one of their writers is clearly anti-VMware, and many of her articles are misinformed at best, to put it nicely (to be fair, she has had a couple that were conversation worthy).  This article suggests that all server virtualization vendors have failed.  And why?  Because of low CPU utilization.

As most of my readers would agree, I think, the bottleneck in modern x86 server virtualization is not the CPU; it's more likely to be storage (depending, of course, on your workload).  To say that server virtualization is a failure strictly by pointing to low CPU utilization rates is either a bold acknowledgement that this person should not be writing about virtualization, or a clear play to get hits and stir up controversy… which, unfortunately, I am playing right into… DAMN.
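The CPU-versus-storage question above can actually be checked on a host: Linux exposes cumulative per-CPU time counters, including "iowait", the time cores spend idle while waiting on disk I/O. A minimal sketch of reading that breakdown (field layout per the proc(5) man page; the sample counters below are made up for illustration):

```python
# Sketch: distinguish a CPU-bound host from a storage-bound one by
# comparing time spent doing CPU work vs. waiting on I/O ("iowait").
# On a real Linux host you would read the aggregate "cpu" line from
# /proc/stat; the sample line below uses made-up counters.

def cpu_breakdown(stat_line):
    """Parse an aggregate 'cpu' line from /proc/stat into percentages."""
    fields = [int(x) for x in stat_line.split()[1:]]
    user, nice, system, idle, iowait = fields[:5]
    total = sum(fields)
    return {
        "busy_pct": 100 * (user + nice + system) / total,
        "iowait_pct": 100 * iowait / total,
        "idle_pct": 100 * idle / total,
    }

# Low busy_pct combined with high iowait_pct suggests the storage
# subsystem, not the CPU, is the limiting factor.
sample = "cpu 1000 50 500 80000 9000 0 0 0 0 0"
print(cpu_breakdown(sample))
```

On a host like the hypothetical one above (under 2% busy, nearly 10% iowait), low CPU utilization says nothing about virtualization failing; the box is simply waiting on its disks.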

The main point of the article, in another fairly clear anti-big-vendor angle, is that Linux containers will be the new breed of server virtualization.  If you are wondering what containers are, head over to Scott Lowe’s blog for a great introductory post.  Yes, if you are a Linux shop (and not everyone should be), containers are a great piece of technology, and coupling them with the enterprise management features from VMware, Citrix, or Microsoft is a great way for IT departments to respond to demand.  But… a container, like its traditional virtual server relatives before it, still needs to read and write data somewhere, so the bottleneck of the entire stack will continue to be storage.

Flash-based arrays and the various server-side caching vendors such as Infinio and PernixData will start to improve CPU utilization by enabling a higher density of VMs on a single physical host.  In addition, CPU is generally not the main expense when it comes to x86 servers and virtual infrastructure; it’s, you guessed it, storage!
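To make the density point concrete, here is a back-of-envelope sketch. All numbers are hypothetical, purely for illustration: if storage I/O is what caps VMs per host, easing that ceiling with caching directly cuts the number of physical hosts (and CPUs) you have to buy.

```python
# Hypothetical consolidation math: storage I/O, not CPU, sets the
# VMs-per-host ceiling, so easing it reduces host count.

def hosts_needed(total_vms, vms_per_host):
    """Physical hosts required at a given consolidation ratio."""
    return -(-total_vms // vms_per_host)  # ceiling division

TOTAL_VMS = 400  # made-up fleet size

# Storage-bound at 20 VMs/host vs. a caching layer allowing 35 VMs/host.
print(hosts_needed(TOTAL_VMS, 20))  # 20 hosts
print(hosts_needed(TOTAL_VMS, 35))  # 12 hosts
```

The CPUs in both scenarios are identical; only the storage ceiling changed, which is exactly why low CPU utilization is a symptom of the I/O bottleneck rather than a verdict on virtualization.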

Jonathan Frappier

Wanna be VMware geek since 3.5, VCAP5-DCD, VCP5-DCV, vExpert '13, also interested in social enterprise apps, all around IT guy, dad, husband.

  • Pingback: Are Containers our future?

  • http://www.cornfedsystems.com/ Frank W. Miller

    You make some very correct and insightful statements, but I’ll use them to draw other conclusions. First, you are right about the storage bandwidth bottleneck; however, this is also Computer Science 101. Second, you’re correct that CPU utilization has little to do with whether virtualization has been a success, or even with what type of virtualization you are doing: hypervisor, sandboxes, containers, or whatever. So, my conclusions are based on these correct statements.

    First, virtualization is absolutely indispensable. It has little to do with performance and everything to do with operational convenience. Virtualization has enabled a type of data center operation that was not available before, and there’s no turning back.

    Second, this argument more than any I’ve seen recently points out the glaring problem with Intel CPUs in the data center. They are way overkill for what they are being used for. It would be better to match the CPU capability to the I/O bandwidths for a given server and anticipated workload. However, Intel continues to push power-hungry furnaces for this task. The result is that folks like Facebook are building datacenters on the Arctic Circle to keep them cool. All that said, my company is building ARM-based servers, so I’m obviously biased, but the arguments I’m making here are based on numbers as much as on my desire to solve what I think is this fundamental hardware mismatch.

    • http://www.virtxpert.com/ Jonathan Frappier

      I do agree modern x86 processors are much more powerful than they need to be. I am curious to see how ARM-based servers play out for the average organization. Companies like Facebook, Google, or Apple can afford the switch, but what about the average company with a few hundred servers that relies on x86-based operating systems and applications? Microsoft continuing to support ARM, as they are doing for Windows 8, and extending that to their enterprise applications would go a long way towards that, I think.

  • http://blog.databigbang.com/ Sebastian Wain

    (crossposting part of my comment on the GigaOM article)

    I think the author needs to separate between different use cases of server virtualization. For example, the main use in my company is to run a lot of Windows variations for QA; not only have we never underutilised our servers, we buy more powerful servers every 1.5 years.

    • http://www.virtxpert.com/ Jonathan Frappier

      Use case is always critical, and it’s why we should start every project with requirements!