Virtualization Coming of Age
Business Continuity - a path to Private Cloud & VDI

What comes after Petascale Clouds… Exascale Clouds!

Imported from http://consultingblogs.emc.com/, published December 3, 2010

While the world is swept up in the tsunami that is Cloud, managing petabytes of information and petaflops of processing capability, there are areas where this is simply not enough. As a result, pioneering work is taking place to extend the boundaries of current computing and application development to power our way to 10¹⁸ - exascale!


What’s Happening Out There?

Some areas and news that are indicative of the shape of things to come:


  1. Exascale.org is working on pushing the software capabilities needed to drive this immense construct forward.
  2. Breakthroughs such as IBM's embedding of optical communications in the CPUs themselves to replace etched pathways, opening the way for full 3D chip design. Think a stacked cube of chips that works as one, with lots and lots of cores!
  3. Isilon scale-up/scale-out NAS, with a single unified namespace masking the complexity of RAID, volumes, etc., provides practically unlimited storage space! It is being used extensively in exascale operations ranging from DNA sequencing and geospatial work to the making of such stellar Hollywood hits as Avatar. These are big projects, folks!
  4. Server form factors are also changing; witness the SGI “beyond the blade” design using their “stick” concept. There is a great blog on this from Mark Barrenechea, President & CEO of SGI, called “The Prism and the Pendulum”. The SGI STIX is a far denser packaging of components than mainstream blade offerings, providing insanely dense configurations.

That may well mean that in the future we will be talking about cubes, not CPUs, that can talk to each other at the speed of light! If a current Intel Westmere-EP, with 6 cores and 12 threads per CPU, is stacked say 10 high (they are thin, after all!), that would provide 60 cores/120 threads in a single socket. Build that out to an 8-socket system, and we get 480 cores/960 threads in a single server chassis. See where this is going?
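For the arithmetic-minded, here is a quick back-of-the-envelope sketch of that maths in Python. The stack height and socket count are illustrative assumptions for the thought experiment, not product specs:

```python
# Back-of-the-envelope core maths for the stacked-die thought experiment.
# The stack height and socket count are illustrative assumptions.

CORES_PER_DIE = 6         # Intel Westmere-EP: 6 cores per CPU
THREADS_PER_CORE = 2      # Hyper-Threading: 12 threads per 6-core CPU
STACK_HEIGHT = 10         # hypothetical: 10 dies stacked in one socket
SOCKETS = 8               # hypothetical 8-socket server chassis

cores_per_socket = CORES_PER_DIE * STACK_HEIGHT            # 60
threads_per_socket = cores_per_socket * THREADS_PER_CORE   # 120
total_cores = cores_per_socket * SOCKETS                   # 480
total_threads = threads_per_socket * SOCKETS               # 960

print(f"Per socket: {cores_per_socket} cores / {threads_per_socket} threads")
print(f"Per chassis: {total_cores} cores / {total_threads} threads")
```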

Marry that with Petascale storage systems like Isilon (10 PBytes in a single system) and we are seeing the emergence of Cloud-in-a-Box! The very nature of the physical fabrics of Clouds could drastically change. In one fell swoop, new Cloud providers could ride into the market with more compute capacity than the whole of Google, and sell CPU cycles at basement prices!

With such breakthroughs, the future of cloud infrastructure per se looks very rosy indeed. Most, if not all, reservations about the scalability and reliability of Cloud infrastructures built using the hypervisor of your choice will literally be swept aside. Off the top of my head, I can't think of anything that would really stop you virtualizing workloads, aside from restrictions in vendor support or archaic clustering limitations.

What does this mean for Corporate IT & CIOs?

While most are looking inside the virtualization industry to see what new stuff is cropping up, we should still keep a focus on the wider stage of technology in general. Evolutions in complementary areas, such as chip design, or in unrelated industries, such as movie making, will create the acceptance of and demand for exa-dense and exascale computing and storage resources. Changes in memory technologies, particularly large-scale RAM-like units with the properties of persistent storage, will fundamentally change the limits that spinning disk currently imposes on us.

These wider changes have wave-form effects on the virtualization industry, as previous limits in physical hardware are swept aside. Consumer demand for ever more portable computing that can do everything a PC can do, without constraints, will also fuel the need for Clouds that can literally spike to billions of users, and then shrink back down to practically nothing. It simply will not be feasible to do all of that yourself, in-house.

This leads to the drive towards hybrid Clouds, where a standing capacity is managed in a Private Cloud (on premise), and mega demand spikes are accommodated by Public Cloud providers for short periods of time.

Traditional capacity planning approaches and monitoring of spikes will simply not be accurate or fast enough. The software applications themselves need modifying to be aware of a surge in demand and request new resource instances, and the infrastructure needs to understand that it can provide only, say, 20% of those, with the remaining 80% provisioned in the Public Cloud for a short period of time.
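As a minimal sketch of that burst logic, assuming hypothetical capacities and provisioning hooks (a real implementation would sit on the APIs of the Private and Public Cloud platforms in question):

```python
# Minimal cloud-bursting sketch. Capacities, names and the provisioning
# functions are hypothetical placeholders, not a real cloud API.

PRIVATE_CAPACITY = 200  # instances the Private Cloud can still supply

def provision_private(n: int) -> None:
    print(f"provisioning {n} instances in the Private Cloud")

def provision_public(n: int) -> None:
    print(f"bursting {n} instances to the Public Cloud (short-term)")

def handle_demand_spike(requested: int) -> None:
    """Serve what we can locally; burst the overflow to the Public Cloud."""
    local = min(requested, PRIVATE_CAPACITY)
    overflow = requested - local
    if local:
        provision_private(local)
    if overflow:
        provision_public(overflow)

# A spike where the Private Cloud covers only ~20% of the demand;
# the remaining ~80% is provisioned in the Public Cloud.
handle_demand_spike(1000)
```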

That in turn requires the “spinning up” of resources instantly by the Public Cloud provider, within the context of the organization requesting resources. As I mentioned in an earlier blog entitled “The Journey to the Cloud – The Need for Speed & the Private Cloud Platform”, speed is definitely an enduring competitive advantage.

CIOs should be looking for deep virtualization implementations. The market is proven. Backroom tinkering or very tentative testing initiatives should give way to bold virtualization endeavours. Virtualize every x86 workload to start with, bearing in mind current restrictions related to real-time-sensitive operations. Don't accept from the business that things cannot be shut down. Don't be afraid to ask for outside help. That can really get things moving.

The business, after all, has tasked you with providing a dramatically better service at dramatically lower costs. Voila! Ladies and Gentlemen, I present the solution – it’s called virtualization@scale.

Small virtualization clusters of, say, 6-8 nodes should be seriously questioned, particularly if multiple physical servers are still sitting around running their own workloads. Look to the maximum configurations of your chosen hypervisor platform, and build to those specifications; the quick comparison sketched below shows why.
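For illustration, assume a platform maximum of 32 hosts per cluster (check your hypervisor's published configuration maximums for the real figure; the estate size here is also a made-up example):

```python
import math

# Illustrative cluster-sizing arithmetic; all figures are assumptions.
TOTAL_HOSTS = 96      # hypothetical estate to virtualize
SMALL_CLUSTER = 8     # the timid 6-8 node cluster questioned above
MAX_CLUSTER = 32      # assumed platform maximum per cluster

small = math.ceil(TOTAL_HOSTS / SMALL_CLUSTER)  # 12 clusters to manage
big = math.ceil(TOTAL_HOSTS / MAX_CLUSTER)      # 3 clusters to manage

print(f"{small} small clusters vs {big} clusters built to the maximum")
```

Fewer, larger clusters mean fewer management domains and bigger resource pools for the scheduler to balance workloads across.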

You can trust those specifications (+/-10%!). EMC, for example, invests in some seriously large validation facilities. These are used to test typical configurations for all kinds of industries and needs, and all the engineers' tweaking is documented and released as architecture blueprints. You can literally use those as-is. There is no need to redo that testing in-house. IT has an atypical desire to conduct “we need to test this internally” exercises even when the rest of the world has already done the testing. This is literally costing you money!

If, on the other hand, IT were able to characterize and document its workloads better, those characterizations could literally be handed to a vendor for testing, without the upfront investment and loss of time.

Workload and function characterisation is a serious advantage, in that spending can be limited to the functions needed - think lean. Further, as new paradigms or delivery platforms come into play, these same specifications determine how rapidly your organisation can gain early-adopter advantage!
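As a minimal sketch of what such a characterisation might look like in practice - the fields and values here are purely illustrative assumptions, not a standard schema:

```python
from dataclasses import dataclass

# An illustrative workload characterisation record. Fields and values are
# hypothetical; a real profile would be derived from monitoring data.
@dataclass
class WorkloadProfile:
    name: str
    vcpus: int               # peak vCPUs required
    ram_gb: int              # peak RAM in GB
    storage_gb: int          # allocated storage capacity
    iops_peak: int           # peak storage IOPS observed
    latency_sensitive: bool  # real-time constraints limit placement
    burstable: bool          # may overflow demand go to the Public Cloud?

# Example: a profile that could be handed to a vendor for validation
# against their published architecture blueprints.
web_tier = WorkloadProfile(
    name="web-frontend", vcpus=8, ram_gb=32, storage_gb=200,
    iops_peak=5000, latency_sensitive=False, burstable=True,
)
print(web_tier)
```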

The Cloud represents the ability of the CIO to treat IT as a business enabler. To do that, the Cloud needs to be embedded into all elements of the business. That entails that the organization running the Cloud be business-savvy, to spawn innovation, and cost-conscious, to reduce costs dramatically. Multi-disciplinary teams that cut across traditional IT silos, such as networking/storage/server, need to be grouped together for maximum benefit. Don't forget the developers! They need to make sure that their applications can be implicitly dynamic and scale on demand, assuming the infrastructure is willing - that infrastructure is the Cloud, and it is ready for prime-time use!

The CIO should be fostering adoption of a Cloud Strategy for the organization, to ensure end-to-end transformation. Ensuring cost avoidance and transforming IT spend to sponsor the development of revenue-generating applications should be part of this strategy.

The CIO can foster early growth of business sponsors throughout the organization, promoting and defining use cases for the business to leverage the Cloud's value. Agility should be thought of holistically, ensuring everything can be adjusted and automated on the fly.

Don't let your organization miss out on these relatively easy ways of radically enhancing competitiveness, while newer startups embrace 100% virtualization with the Cloud and effectively “steal” market share, erode your brand value (especially among price-savvy consumers in the prevailing economy) and weaken your market positioning.

CIOs should be fully sponsoring the change to full virtualization at every level, and support embedding of the Cloud in the fabric of the business and its IT needs. Complete CIO executive sponsorship is the critical piece that drives successful virtualization and transformation to Private Cloud!

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC.
