
Holding the Fort against Virtualization – Targeted CPU Power

Imported from http://consultingblogs.emc.com/ published Apr 22 2011
After writing about the impact of CPUs in at least two previous blogs (Multi-Core Gymnastics in a Cloud World and Terabyte…well Everything Tera Actually! (Part 1 – the Servers)) about a year ago, I wanted to post an update on the theme.
It still surprises me to meet customers who wonder whether virtualization can really help their IT environment. They take the leapfrogging of CPUs in their stride, and miss the point that these leapfrogs are targeted at them in particular. Those “workloads” that could never be virtualized are filed into the “can’t – won’t virtualize” file and then, well, filed!
The rationale that underpinned that decision is not revisited as part of a long-term strategy to increase datacenter efficiency or reduce costs – until finance comes knocking on the door asking for wholesale cuts across the board.
Intel recently released the Xeon E7 processor. Just when you thought the Nehalem, with its 8 cores, was finally reaching those high-end workloads – think databases and big data computational applications – Intel has upped the ante.
The E7 is squarely targeted at those last bastions holding out against virtualization. With 10 cores per processor package, a simple four-processor server of 4 rack units becomes a roaring 40-core super server addressing 2TB of RAM. Major brands have moved to adopt this wickedly powerful processor – Cisco with its UCS B440 M2 High-Performance Blade Server, or the rack-mounted HP ProLiant DL980 with 80 processing cores and 2TB of RAM. Make no mistake, these are truly serious processing servers for enterprise workloads.
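For scale, here is the raw arithmetic of one of these 4-socket boxes as a quick sketch; the Hyper-Threading figure is my own assumption about the E7 line, not something from the original post:

```python
# Back-of-the-envelope capacity of a 4-socket Xeon E7 server,
# using the figures quoted above.
SOCKETS = 4
CORES_PER_SOCKET = 10
THREADS_PER_CORE = 2    # assumption: Hyper-Threading enabled
RAM_TB = 2

cores = SOCKETS * CORES_PER_SOCKET
threads = cores * THREADS_PER_CORE
print(f"{cores} cores, {threads} hardware threads, {RAM_TB} TB RAM in 4U")
# -> 40 cores, 80 hardware threads, 2 TB RAM in 4U
```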
This is a significant change in the evolution of the x86 platform:
  • 2006 - dual-core processors at 65nm
  • 2009 - 4 cores on 65-45nm
  • 2010 - 6-8 cores on 32nm
  • 2011 - 10 cores on 32nm
Notice anything here? I see a concentrated effort by the entire x86 industry to finally bring out the full potential of that architecture to support the mass wave of virtualization sweeping across the IT landscape. There is a lot of talk about compliance, security and what-have-you, but the fact still remains that until very recently not all workloads could be comfortably virtualized.
With the E7, we move towards that magic 99% virtualization rate, with the remaining 1% reserved for tricky extreme-load systems that require some level of re-architecture – replatforming – to fit onto VMware.
By the way, Intel is not the only game in town. AMD is also making first-rate processors: the current AMD Opteron 6100 (“Magny-Cours”) offers 12 cores, and the one everyone is waiting for, the “Interlagos” with 16 cores, is coming this year. This just underlines how serious the industry is about getting “everything” virtualized.
What Does This All Mean for Virtualization and the Cloud?
Perhaps this does not sound like much, but measured against what “old x86 servers” could do, it really is remarkable. I recall from my own architecture days with Microsoft, designing messaging systems for 60,000+ concurrent users. What a horrendous task that was. What came out were rack loads of servers, cables, KVM switches, network switches, and all that labeling work ;-).
With Exchange 5.5 (that is going back a bit), we would have at least 60 mailbox servers for the 60,000 users – 1,000 users sitting safely on a single server with 4 CPUs and 128MB of RAM. I could probably get 20+ of those mailbox servers running on a single quad-E7 system running VMware ESXi as the hypervisor. That means I could collapse perhaps 10 of those old racks of servers and cables into a single 4U server!
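As a sanity check on that claim, here is the consolidation arithmetic as a minimal sketch; the per-VM sizing and the vCPU overcommit ratio are illustrative assumptions of mine, not measured figures:

```python
# Rough consolidation estimate for the Exchange 5.5 scenario above.
# Host figures from the post: 4-socket E7, 40 cores, 2 TB RAM.
HOST_CORES = 40
HOST_RAM_GB = 2048

VM_VCPUS = 4           # one legacy 4-CPU mailbox server per VM
VM_RAM_GB = 8          # assumed footprint per virtualized mailbox server
VCPU_OVERCOMMIT = 2.0  # assumed: 2 vCPUs scheduled per physical core

by_cpu = int(HOST_CORES * VCPU_OVERCOMMIT // VM_VCPUS)
by_ram = HOST_RAM_GB // VM_RAM_GB
vms_per_host = min(by_cpu, by_ram)

print(f"CPU-bound: {by_cpu} VMs, RAM-bound: {by_ram} VMs")
print(f"=> roughly {vms_per_host} legacy mailbox servers per 4U host")
# CPU-bound: 20 VMs, RAM-bound: 256 VMs
# => roughly 20 per host, in line with the 20+ estimate above
```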
This is a sobering thought. With the current generation of common commercial software running in most datacenters, this degree of consolidation is entirely possible. Intel and AMD are taking the x86 markets by storm. IT decision makers should examine the macro effect of these moves on the industry:
  • RISC systems are being attacked by large scale-out x86 systems
  • High-end availability features that Intel and HP reserved for the Itanium are creeping into the x86 lines
  • Applications tied to proprietary operating systems on RISC-Unix and mainframes are being made available on Linux/Windows, where they run well on x86 systems – even virtualized
  • Hypervisor vendors are tuning and refining their ability to handle high-end workloads safely and still retain virtualization features of high availability and mobility
  • Consolidation is no longer limited to physical-to-virtual (P2V) but extends to virtual-to-virtual (V2V) on ever more capable hardware (what I referred to as the continuous workload consolidation paradigm)
As consolidation ratios have reached such high potential on the x86 platform, the powers that be have brought high-end reliability features into the x86 environment. Datacenters with critical business loads, think ERP and databases, could not really have imagined moving to the lowly x86 platform, and certainly not in virtualized form.
That has just changed with a thunderclap. These systems compete well at all levels, and their pricing is vastly different from the prices set by the RISC/mainframe industry over the decades.
We are seeing equal improvements in our ability to exploit scale-out topologies, such as with Oracle RAC or EMC Greenplum with its massively parallel processing data warehouse database. Coding languages are also going the multi-threaded, scale-out route – so even that last 1% of workloads could eventually be virtualized.
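As a toy illustration of that scale-out coding style (a generic sketch of the partition-then-aggregate pattern, not tied to Oracle RAC or Greenplum), the same idea looks like this on a single multi-core host:

```python
# Minimal scale-out sketch: partition work across all available cores.
# MPP databases apply the same partition-then-aggregate idea across hosts.
from multiprocessing import Pool, cpu_count

def partial_sum(chunk):
    """Process one partition of the data (here: a simple sum)."""
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    workers = cpu_count()
    # Split the data into one strided chunk per worker.
    chunks = [data[i::workers] for i in range(workers)]
    with Pool(workers) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(f"{workers} workers computed total = {total}")
```

Adding cores (or hosts, in the distributed case) adds throughput without changing the program's structure, which is exactly what makes these workloads so amenable to big virtualized x86 boxes.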
The x86 processors are not just for servers, you know! We are seeing this commodity chip placed in all kinds of enterprise systems. EMC is using Intel technology heavily in its own storage arrays, providing fantastic performance, reliability, and price efficiency. The need for FPGA or PowerPC chips to power storage arrays has just dropped further.
Don’t get me wrong, the non-x86 chips are great, chock-full of features. However, those features are steadily being migrated into the x86 family. I really do envisage that all the features of the Itanium will make their way into the x86 – and the Itanium was one hell of a workhorse, able to compete with the mighty processor families out there: SPARC, PowerPC, mainframe RISC, etc. It would not surprise me to see the Itanium come back as a new generation of x86 under a different name in a couple of years.
Why is this important for the CIO?
Seeing the transformative technologies coming onto the market, CIOs are increasingly exposed to the “you could do this” from the market and the “but we shouldn’t do that” from slow internal IT organizations that are not well adapted to handle change.
I have never seen a time where the CIO so needed to apply thoughtful strategies: driving through market efficiencies such as massive consolidation through virtualization, while simultaneously managing the mind-set change this demands of internal IT.
Internal IT needs to do some serious soul-searching. It can’t simply stick its collective head in the sand, or “try out” technologies while the whole technology field has moved on a generation.
I have indicated previously that the CIO has to create the visionary bridge between where the IT organization currently is, and where it “needs” to be to service the business, remain relevant and drive change rather than simply following and dragging its heels.
Where virtualization is concerned, I personally feel that it is necessary to get as many workloads as possible virtualized.
The so-called strategic decisions regarding servers and hypervisor platforms are important as enablers, but the goal should be maximum abstraction of the workload from the underlying hardware. This is worth the cost and the pain. Once you are there, you are truly free to exercise sourcing and bargaining power over the suppliers of the physical elements making up your IT landscape.
However, many organizations are still stuck on which server is best, which hypervisor to take, what about Amazon, what about Microsoft? Well, what about them? Does it not make more sense to rationalize and virtualize your environment to the maximum, so that you can move on to these higher-level abstractions?
Whilst that is underway, the IT industry will have found solutions in other areas, such as compliance and data locality/security, that you can step straight into.
CIOs should seriously consider getting outside help to move the organization rapidly to virtualization on commodity hardware, i.e. x86. Be aware that this platform can sustain almost all the workloads found in typical datacenters.
Don’t let large data numbers daunt you. Don’t let internal IT railroad you into doing the same old expensive slow IT as in the past. Don’t get sidetracked. You have friends in the organization – the CFO/CTO/CEO.
CFOs are notorious for instituting widespread change backed up by hard economics. CTOs can stimulate and create the demand patterns that can only be serviced through elastic virtualized environments – Private Clouds. They can balance the hard economic cost cutting with the need to have flexible on-demand pay-as-you-use IT. The CEO wants to ensure shareholder return, and effectively have a successful firm for all stakeholders concerned.
Make the move to the “100% Virtualized Environment”. Push your vendors to ensure their solutions run virtualized. Push vendors to provide licensing that fits the pay-as-you-use consumption model. Remember, there is choice out there. Even for those notorious stacks such as ERP, database, and messaging, push for flexible licensing – otherwise list the alternatives that are waiting for your business if they do nothing!

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC.