
December 2011

Time to Consider Scale Up in Virtualized Environments?

The recent announcements from AMD of its 16-core Opteron 6200 CPU, and from Intel with its 10-core Xeon E7, indicate a resurgence of the scale-up mentality. Indeed, the virtualization bandwagon is partially responsible for fueling this rise.

While on the one hand every virtualization vendor is touting server consolidation and datacenter efficiency using simple scale-out models based on x86 technology, it is also apparent that to achieve full efficiency, the density of virtual machines per physical host (the VM-to-hypervisor-host ratio) needs to increase.

Factor in licensing and transformation costs, and it becomes increasingly difficult to create business cases that genuinely justify an organization investing in new hardware to get the best out of virtualization, and ultimately the cloud - unless that very high density can be achieved.

Add to this the ever-increasing core counts still in the pipeline from Intel and AMD, and challengers to the throne such as ARM with its 64-bit ARMv8 architecture. Dell, HP and indeed Google are tinkering with, or have produced, server designs that use ARM technology. The instant draw here is ARM's much better power and thermal profile compared with Intel and AMD. With figures such as 2W per core being bandied around and up to 128 cores in a package, ARMv8 System-on-a-Chip (SoC) designs are nothing to be sneezed at.

That potentially opens up server designs with a huge number of cores within a single server chassis - an eight-socket ARM design might pack 1000+ cores into a 5U enclosure at around 2kW! Scale-up with power efficiency. Granted, we can't compare ARM directly to high-end x86 processors, but for strategic thinkers this should be on the radar.
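As a quick sanity check of those headline numbers, the figures quoted above (up to 128 cores per package, roughly 2W per core, eight sockets) can be multiplied out directly:

```python
# Back-of-envelope check of the ARM scale-up figures quoted above.
# Assumptions taken from the text: 128 cores per SoC package,
# ~2 W per core, an eight-socket chassis.
cores_per_socket = 128
watts_per_core = 2
sockets = 8

total_cores = sockets * cores_per_socket      # 1024 cores, i.e. "1000+"
total_power_w = total_cores * watts_per_core  # 2048 W, i.e. ~2 kW

print(f"{total_cores} cores at ~{total_power_w / 1000:.1f} kW")
```

The per-core wattage is a vendor ballpark figure rather than a measured whole-system number (it ignores memory, fabric and cooling), but it shows the quoted 1000+ cores at ~2kW is internally consistent.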


Why is this important for the CIO?

Many of the clients I meet talk constantly about scale-out, and it is indeed a valid model for datacenters and applications, with key advantages. However, much of this enthusiasm is also driven by mass-market marketing.

I believe that CIOs need to look beyond the mainstream hype. Scale-up has some very serious advantages, particularly in handling certain intensive workloads coherently, such as databases, analytics and indeed big data.

The old rules regarding scale-up are gradually being rewritten - by lower licensing costs, commodity components, low-power CPU designs, and faster networks such as InfiniBand and 40/100GbE. Combine this with strong Linux adoption in the enterprise space and we have an old concept going through a rejuvenation cycle.

The Corporate Cloud Strategy will certainly benefit from a multi-pronged approach ensuring that compute power is where it is needed for the appropriate workloads. Indeed, from a sourcing side, many non-Intel processors have multiple foundries that can deliver the product - reducing reliance on a single vendor/foundry approach.

Those same scale-up systems will potentially be able to run far greater numbers of applications that are themselves already encapsulated (such as Java-based applications running in their own JVM). Indeed, as core counts increase, these same trusty workhorses could benefit from a continuous workload-consolidation approach - ever more VMs/applications running on the same server footprint - with a simple processor change.
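The consolidation effect of a processor change alone can be sketched with a rough density estimate. The figures below (a 2-socket chassis, 2 vCPUs per VM, an 8-core part swapped for a 16-core part) are illustrative assumptions, not vendor sizing guidance:

```python
# Hypothetical illustration of workload-consolidation headroom from a
# CPU swap alone: same chassis, same footprint, higher core count.
def vms_per_host(sockets, cores_per_socket, vcpus_per_vm, overcommit=1.0):
    """Rough VM-density estimate; ratios here are illustrative only."""
    return int(sockets * cores_per_socket * overcommit // vcpus_per_vm)

# Same 2-socket server, CPU swapped from 8-core to 16-core parts:
before = vms_per_host(sockets=2, cores_per_socket=8, vcpus_per_vm=2)   # 8 VMs
after = vms_per_host(sockets=2, cores_per_socket=16, vcpus_per_vm=2)   # 16 VMs
```

In practice memory and I/O usually become the binding constraint before CPU does, but the core-count-driven upside on the same footprint is the point being made above.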

My last series of posts have been focused on making sure good old common sense, targeted at application-value delivery, is not forgotten. Older rationale should not just be discarded, but combined effectively with newer paradigms, such as Cloud, for dramatic effect in datacenter and application transformation programmes.

Intel would have you believe that x86 has successfully removed the need for any processor architecture other than its own, even citing as evidence older, defunct legacy processors such as Alpha - hold on, did Intel not buy all the Alpha intellectual property in 2001? Well, that is one way of getting rid of competition, although that same Alpha IP cropped up in later Intel designs.

New disruptive platforms such as ARM (128 cores, 2W per core, SoC designs), or indeed rejuvenated platforms such as SPARC T4 (8 threads per core, 8 cores), all offering virtualization of operating systems and/or applications, should definitely be on the enterprise infrastructure architecture agenda.



The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC.

Competition Heating up for VMware - Can One Still Charge for a Hypervisor?

Hypervisors are cropping up in all areas of the market, each with its own pros and cons. The cumulative effect on the virtualization market is important for strategists to note. Even VMware's new mobile virtualization platform is being directly attacked by Xen/KVM on ARM.

VMware is constantly touted as being somewhere between two and four years ahead of the market. I don't agree with that point of view - time and competitor movements in the wider market are not linear in their behaviour.

Indeed, even the stronghold of the Intel-VMware platform is gradually being encroached upon by ARM and AMD, which are building virtualization extensions in silicon similar to Intel's - with the added advantage that open source hypervisors are gaining traction in a development community potentially many times the size of VMware's.

It is a sobering thought that VMware itself started out in academia, and that the hypervisor has its roots in Linux. Academia is now firmly behind open source hypervisors. Once the likes of Red Hat, Citrix and Oracle start weighing in with their development expertise - and, in Oracle's case, with the added advantage of tuned hardware systems - it will be interesting to see whether VMware remains the only game in town.


Why is this important for the CIO?

CIOs balancing the long-term view against short-term tactical needs must understand that, when looking at becoming cloud capable, VMware is not the only solution. The idea of "good enough" should be a strong motivator in product and solution selection.

Indeed, the CIO and team would be well advised to verify whether the savings they expect will really be delivered by a near-commodity hypervisor carrying substantial license costs, given the organisational need to be cost efficient and to tap into the marketing value of the cloud.

Interestingly, in a more holistic sense, the fact that open source hypervisors are continuing their trend of becoming available on every imaginable hardware platform, including mobile, is in itself a strategic factor. New challengers to Intel and AMD are cropping up, and indeed platforms that had faded into the background over 2009/2010 are surging ahead in 2011-2012 for high-end enterprise workloads - as mentioned in the blog "A Resurgent SPARC Platform for Enterprise Cloud Workloads".

The Corporate Cloud Strategy will certainly benefit from this type of thinking. It will highlight potential alternatives. Depending on the time horizon that the strategy is valid for, "good enough" may well be enough to realize the competitive advantage that is being targeted.

Certainly learning to adapt your organization for the realities of the cloud world requires time. Innovation built upon such enabling platforms requires not just a focus on the infrastructure but the application development environment and ultimately the software services that are consumed.

Remember: it is the applications that deliver advantage. The quicker they are brought to market, on platforms that allow cost efficiency and agility, the better for the organization concerned. This in turn is leading to a stronger focus on appliances and engineered systems for enterprise virtualization... but that's for another blog, I think.


