ITIL & the Private Cloud

Terabyte…well Everything Tera Actually! (Part 1 – the Servers)

Imported from http://consultingblogs.emc.com/, published September 8, 2010.

Following the VMworld 2010 event in San Francisco and the overwhelming industry and customer enthusiasm for all things Cloud, I thought I'd highlight a couple of innovations in the IT industry that will continue to underpin this revolution. I'll start with servers and cover other areas in additional posts.

What’s Been Happening with Servers? – The Empire Strikes Back

Back in March 2010, with the launch of the Intel Xeon 5600, Dell launched a range of servers that would 'enable much greater efficiency in virtualization and unprecedented levels of consolidation'. Fast forward to June 2010, and HP announced a complete refresh of its best-selling ProLiant range – 'the empire strikes back'.

This alone is interesting, as server vendors are creating designs that exploit the feature-rich Intel Xeon 7500 (8-core) and AMD Magny-Cours (12-core) processors. Take the HP announcement in particular, with the HP BL680 G7 blade. This server is interesting because it addresses the weak spot of server virtualization: RAM. Typically, organizations embarking on virtualization tend to be biased towards CPU-heavy systems (hey, chips are cheap – let's get more of them!).

However, once virtualization and consolidation are underway in earnest, it is the memory, or rather the lack of it, that forces lower consolidation ratios per server than the hypervisor software could actually support.

Looking at the HP BL680 G7 (not yet shipping), we see that four of these blades fit in the venerable HP c7000 blade chassis, and each blade can take up to 1TB of RAM – yes, you read that correctly: 4TB of RAM and 64 cores per chassis.

As HP so aptly describes it, 'three times more virtual machines per blade and 73% less hardware per 1,000 virtual machines', with features such as:

 

  • G7 ProLiants – a data center consolidation ratio of 91:1.
  • Better energy efficiency.
  • HP's Virtual Connect FlexFabric virtualization and automation software.
  • The HP Virtual Connect FlexFabric 10Gb/24-port module – any Fibre Channel, Ethernet or iSCSI network through one device instead of many interconnects.

For those who think this is a little small for their tastes, there is also the HP DL980 G7 with:

 

  • Intel Xeon 7500 processors (2.26GHz, 8-core Xeon X7560)
  • 8 sockets
  • 128 memory slots (2TB of RAM)

There are also many AMD variants being launched in this line, with equally remarkable specifications.

In the blog posts The Journey to the Cloud – The Need for Speed & the Private Cloud Platform and Multi-Core Gymnastics in a Cloud World, I described the notion that speed is a competitive advantage in the Cloud, and that this advantage should be exploited by every organization serious about consolidation.

I took the Acadia VBlock infrastructure as an example of this advanced integration of all the infrastructure elements that make up a Cloud. The VBlock was able to use a blade with 384GB of RAM – and that was at the beginning of 2010. This environment squarely targeted the issues of scaling virtualization to the needs of the enterprise: 10GigE and FCoE technologies kept the number of cable interconnects to a minimum while still offering high availability and bandwidth aplenty.

Fast forward to September 2010, and the whole server industry is in on the act, which validates the approach Cisco took with its UCS blades. These latest announcements from HP take the crown in what is currently possible, but I expect others to announce similar or even more powerful configurations in smaller form factors by the close of 2010.

VBlock is of course much more than simply servers, and through Q4/2010 we will see the latecomers to the party validate other parts of the VBlock architecture. By that time VBlock will have moved on, with simple updates of key components in the Cloud infrastructure stack envisaged by Cisco, EMC, Intel and VMware at the end of 2009. This forward-looking viewpoint from industry leaders is absolutely critical: organizations know they can partner with this multi-vendor stack, suffer less lock-in, and call a single support number for the entire shebang.

Why is this important for the virtualization industry and the CIO?

Firstly, to understand the significance of this server evolution, we need to understand what VMware vSphere ESX 4.1 is actually specified to do in terms of its configuration 'maximums':

 

Per host:

  • Virtual CPUs per ESX host: 512
  • Virtual machines (VMs) per host: 320 (160 VMs when the High Availability feature is on)
  • Logical processors per host: 64
  • Max RAM per host: 1TB

Per VM:

  • Max virtual CPUs per VM: 8
  • Max RAM per VM: 255GB
  • Max disk size per VM: 2TB
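
To make these maximums concrete, here is a minimal Python sketch that checks a host design against them. The limits are transcribed from the list above; the example VM profile (100 VMs of 2 vCPUs and 8GB each) is purely illustrative, not a recommendation.

```python
# vSphere ESX 4.1 configuration maximums, transcribed from the list above.
ESX41_HOST_MAX = {
    "vcpus": 512,        # virtual CPUs per host
    "vms": 320,          # VMs per host (160 with High Availability on)
    "logical_cpus": 64,  # logical processors per host
    "ram_gb": 1024,      # 1TB of RAM per host
}

def fits_esx41(vms, vcpus_per_vm, ram_gb_per_vm, ha_enabled=False):
    """Return True if a proposed VM load stays within the host maximums."""
    vm_limit = 160 if ha_enabled else ESX41_HOST_MAX["vms"]
    return (vms <= vm_limit
            and vms * vcpus_per_vm <= ESX41_HOST_MAX["vcpus"]
            and vms * ram_gb_per_vm <= ESX41_HOST_MAX["ram_gb"])

# Illustrative only: 100 VMs of 2 vCPUs / 8GB each on a 1TB host, HA enabled.
print(fits_esx41(vms=100, vcpus_per_vm=2, ram_gb_per_vm=8, ha_enabled=True))  # True
```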

 

Typically, in the ESX 3.x days we were consolidating around 10-15 physical machines onto one ESX host, for a consolidation ratio of up to 15:1. However, consolidating heavy I/O and database loads was not really encouraged, so consolidation was only ever partial. With the move to ESX 4.0, we started to see enterprise workloads with enterprise I/O levels being migrated to the VMware virtualization platform with ease.

This bucked the trend in the industry, as most other hypervisor vendors warned that heavy loads should still be run on dedicated physical servers. The enterprise software vendors (Microsoft, Oracle and others) likewise recommended dedicated servers for all but the smallest and most trivial workloads.

VMware continued to help enterprises worldwide consolidate their workloads, with SAP, Oracle, Microsoft and others being very effectively virtualized without loss of functionality or performance! Consolidation ratios of 15:1, 20:1, 30:1 and beyond became the norm. Some of those same vendors then started to tout their own virtualization stacks.

Those are big numbers – 30 physical machines to one ESX server. That implies that for every 1,000 physical x86 servers, we would need just 34 ESX hosts. As 2U servers, at 19 per 42U rack, those 1,000 physical machines need 53 racks (excluding variance in server height, cabling and so on). We could now squeeze all of that into 2 racks. That is truly amazing.
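
As a back-of-the-envelope check of that rack arithmetic, here is a minimal Python sketch; the 2U form factor and 19-servers-per-42U-rack packing come straight from the paragraph above.

```python
import math

def racks_needed(servers, u_per_server=2, usable_u_per_rack=38):
    """A 42U rack with 38U usable holds 19 x 2U servers."""
    per_rack = usable_u_per_rack // u_per_server
    return math.ceil(servers / per_rack)

physical = 1000
print(racks_needed(physical))        # 53 racks for the physical estate

hosts = math.ceil(physical / 30)     # 30:1 consolidation
print(hosts, racks_needed(hosts))    # 34 ESX hosts, fitting in 2 racks
```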

There are still naysayers out there among the IT administrator and designer crowd, but as they say in The X-Files, 'the truth is out there!'

Fast forward to 2010, and we have the flagship ESX 4.1, with many new features. Putting those ESX 4.1 parameters into perspective, we should conservatively be able to get 100 VMs per ESX host. That would mean that for those 1,000 physical servers we would now need just 10 ESX hosts. That fits in a single rack! If this were a blade system, probably only about half the rack would be needed – leaving space for some networking elements. Add another rack alongside for storage, and it starts to look like a VBlock.
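
Running the same back-of-the-envelope arithmetic with the conservative ESX 4.1 density assumed above (100 VMs per host, 2U per host):

```python
import math

physical, vms_per_host = 1000, 100
hosts = math.ceil(physical / vms_per_host)   # 10 ESX hosts
rack_units = hosts * 2                       # 2U each -> 20U in total
print(hosts, rack_units)                     # about half a 42U rack
```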

Most organizations are still at around a 20-25:1 consolidation ratio. The reason is that CPU, traditionally thought to be the bottleneck, is no longer an issue on the ESX host server; memory, storage and network I/O throughput are. In practice, escaping the memory limitations of an ESX host meant provisioning another host, which kept the number of VMs hosted per ESX host low.

However, looking at the top-end limits of ESX 4.1, it is clear that server configurations have lacked the memory density needed for high-end virtualization. Finally, in Q4/2010, the constraints on the server hardware side are being lifted, and VMware vSphere can show what is really achievable with its software capabilities.

This affects the projects organizations have underway in 2010-2011. How they are architected, the so-called constraints that IT administrators and designers impose, and the recommendations that software vendors make should all be re-evaluated very carefully against these macro dynamics in the virtualization industry. The rules are set to change!

Just to put this into perspective, take Microsoft Exchange – the high-end messaging workload companies can't imagine virtualizing – as an example.

 

In 1999, using Exchange 5.5, servers with four Intel Pentium CPUs and 4GB of RAM would typically be provisioned per 1,000 users. For an enterprise with 30,000 users, that meant at least 30 mailbox servers (plus many routing and gateway servers on top). Disaster recovery in high-end configurations required duplicating the environment – so 60 mailbox servers, needing 240 CPUs and 240GB of RAM in total.

Most of those processors ran at around 20-30% utilization. That suggests a 30,000-user messaging environment, in its entirety, would fit on a single modern server (such as the HP machines noted above).
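
Worked through as a short Python sketch (all figures taken from the paragraph above):

```python
users = 30_000
users_per_server = 1_000        # sizing rule: 4 CPUs and 4GB RAM per 1,000 users
cpus_per_server = 4
ram_gb_per_server = 4

mailbox_servers = users // users_per_server   # 30 mailbox servers
with_dr = mailbox_servers * 2                 # duplicated for disaster recovery
print(with_dr,                                # 60 servers
      with_dr * cpus_per_server,              # 240 CPUs
      with_dr * ram_gb_per_server)            # 240GB RAM, mostly 20-30% utilized
```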

Exchange 2010, the latest and greatest of the Exchange messaging servers (yes, I am also an Exchange architect – and I like the product!), suggests around 500 users per core, with 8GB of RAM needed for multi-role servers (using x64 processors and Windows 2008 R2).

For the same 30,000 users, we would need just 60 cores – 8x eight-core processors. All of that would fit onto one, or to be really safe, three modern servers running VMware vSphere ESX 4.1.
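
And the Exchange 2010 equivalent, using the 500-users-per-core guideline quoted above (the eight-core socket size matches the Xeon 7500-class parts discussed earlier):

```python
import math

users, users_per_core = 30_000, 500
cores = math.ceil(users / users_per_core)   # 60 cores
sockets = math.ceil(cores / 8)              # 8x eight-core processors
print(cores, sockets)                       # 60 cores across 8 sockets
```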

That is one heck of a reduction.

This evolution of the underlying infrastructure elements of the Cloud enables organizations to realize very serious savings all round. Power, space, cooling, asset management, CAPEX, OPEX – you name it, slice it and dice it as you please – this is money in the bank!

CIOs and CFOs owe it to themselves to encourage, support and challenge their IT organizations to look into these new server offerings. Yes, they are not cheap – I mean seriously, when was it ever cheap to buy a server with 1TB of RAM? However, the consolidation possibilities are enormous.

CIOs should seriously consider what is actually out there in the market. If the private cloud is the choice that has been adopted, for many sensible reasons, then it is necessary to ensure that the infrastructure underpinning the business, its IT projects and its innovation in general is strong enough to support this deep consolidation.

It is not necessary to build out separate islands of computing to support these kinds of workloads. The savings are out there. The competitive advantages for your organization are out there. There is no reason to hold back any more. Make the leap to the 100% virtualized datacenter, and from there to the Private Cloud!

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC.
