The recent announcements of AMD's 16-core Opteron 6200 CPU and Intel's 10-core Xeon E7 indicate a resurgence of the scale-up mentality, and the virtualization bandwagon is partly responsible for fueling it.
On the one hand, every virtualization vendor is touting server consolidation and datacenter efficiency using simple scale-out models based on x86 technology; on the other, it is apparent that full efficiency requires a higher density of virtual machines per physical host (the VM:host ratio).
Factor in licensing and transformation costs, and it becomes increasingly difficult to build a business case that justifies investing in new hardware to get the best out of virtualization, and ultimately the cloud - unless that very high density can be achieved.
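The density argument can be illustrated with a rough cost model. A minimal sketch, using entirely hypothetical figures (none of these numbers come from any vendor's pricing):

```python
# Back-of-envelope cost per VM as density rises.
# All inputs are hypothetical illustrative figures, not real pricing.

def cost_per_vm(hardware_cost, licence_cost, transformation_cost, vm_density):
    """Total acquisition cost of one host divided by the VMs it runs."""
    total_host_cost = hardware_cost + licence_cost + transformation_cost
    return total_host_cost / vm_density

# The same $50k host investment, amortised over different VM densities:
for density in (10, 30, 60):
    print(density, round(cost_per_vm(30_000, 15_000, 5_000, density)))
```

Tripling or sextupling the VM:host ratio divides the per-VM cost accordingly, which is exactly why the business case hinges on density.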
Add to this the ever-increasing core counts still in the pipeline from Intel and AMD, and challengers to the throne such as ARM with its 64-bit ARMv8 architecture. Dell, HP and even Google are tinkering with, or have already produced, server designs that use ARM technology. The immediate draw is ARM's much better power and thermal profile compared to Intel and AMD. With figures such as 2W per core being bandied around, and up to 128 cores in a package, ARMv8 System-on-a-Chip (SoC) designs are nothing to be sneezed at.
That potentially opens up server designs with a huge number of cores in a single chassis - an eight-socket ARM design might pack 1,000+ cores into a 5U enclosure at roughly 2kW. Scale-up with power efficiency. Granted, ARM cannot be compared directly with high-end x86 processors, but strategic thinkers should have this on their radar.
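The arithmetic behind those figures is straightforward - a quick sketch using only the numbers quoted above (2W per core, 128 cores per package, and an assumed eight-socket chassis):

```python
# Core-count and power arithmetic from the quoted ARM figures.
cores_per_package = 128   # "up to 128 cores in a package" for ARMv8 SoCs
watts_per_core = 2        # the "2W per core" figure being bandied around
sockets = 8               # hypothetical eight-socket server design

total_cores = sockets * cores_per_package          # 1024 cores
cpu_power_watts = total_cores * watts_per_core     # 2048 W, i.e. ~2 kW

print(total_cores, cpu_power_watts)  # 1024 2048
```

Even allowing generous headroom for memory, storage and fans, the CPU power budget alone is an order of magnitude below a comparably core-dense x86 configuration.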
Why is this important for the CIO?
I believe that CIOs need to look beyond the mainstream hype. Scale-up has some very serious advantages, particularly in handling certain intensive workloads coherently, such as databases, analytics and big data.
The old rules regarding scale-up are gradually being rewritten - by lower licensing costs, commodity components, low-power CPU designs, and faster networks such as InfiniBand and 40/100GbE. Combine this with strong Linux adoption in the enterprise space and we have an old concept going through a rejuvenation cycle.
The corporate cloud strategy will certainly benefit from a multi-pronged approach, ensuring that compute power is where it is needed for the appropriate workloads. From a sourcing perspective, many non-Intel processors are manufactured by multiple foundries - reducing the reliance on a single vendor or foundry.
Those same scale-up systems will potentially be able to run far greater numbers of applications that are already encapsulated (such as Java applications running in their own JVMs). As core counts increase, these trusty workhorses could benefit from a continuous workload-consolidation approach - ever more VMs and applications running on the same server footprint - with a simple processor change.
My last series of posts has focused on making sure that good old common sense, targeted at application-value delivery, is not forgotten. Older rationales should not simply be discarded, but combined effectively with newer paradigms, such as cloud, for dramatic effect in datacenter and application transformation programmes.
Intel would have you believe that x86 has successfully removed the need for any processor architecture other than its own, even citing as evidence older, now-defunct processors such as Alpha - but hold on, did Intel not acquire all of the Alpha intellectual property in 2001? That is certainly one way of getting rid of competition, although that same Alpha IP cropped up in later Intel designs.
New disruptive platforms such as ARM (128 cores, 2W per core, SoC designs), and rejuvenated platforms such as SPARC T4 (8 threads per core, 8 cores), all offering virtualization of operating systems and/or applications, should definitely be on the enterprise infrastructure architecture agenda.