
June 2010

Multi-Core Gymnastics in a Cloud World

Imported; originally published June 27, 2010.
I would like to round up some really important recent announcements in the world of x86 processors and networking, among others, regarding their influence on the nature of the Cloud.
Cool stuff, and very cloud-relevant when examining density. Looking at some of the blogs I posted recently on the subjects of scalability, the IT renaissance, new paradigms and their shift in our IT perceptions, and the nature of competitive advantage based on speed:
The enablers that allow us to spring into the Cloud world are essentially there. We are moving beyond any processing capability we had in the virtualization world before.
One vendor after another in the fields of processors, storage, networking, datacenter designers, cabling and standards groups are providing the necessary energy to spring into a vastly different world of IT. So the technology part is more or less on track to deliver the Cloud in all its different flavors.
The practical application of this power can be seen in the links below. The sum-of-parts and balanced-system design paradigms are critical determinants of competitive advantage!
Multi-core flexing is certainly a key enabling driver for moving to the private cloud. More importantly, however, it is a key driver for the transition from Private to Public Clouds and the associated Cloud Service Providers.
Think of the following in the future, when we have thousands of cores on a single board and the ability to virtualize a hundred thousand virtual machines on a single physical machine:
  1. Who can afford these extremely dense configurations in the future?
  2. Dense configurations are needed for the economies of scale to drive down cost for the service consumer
  3. Density is a direct factor in which application tiers can be successfully transitioned to the cloud – think 100TB databases with hundreds of driving database instances!
  4. Does it make sense to have your own IT datacenter in the future when the Cloud Service Providers (CSPs) have access to the massive industrial Cloud factory plants – and can add more as needed quickly?
  5. Is this the precursor of moving from today’s mega-datacenters (with 10K+ servers) to Hyper-Datacenters (millions of processing cores)?
  6. Are we really that far away from the Planetary Cloud: The Rise of the Billion-Node Machine?
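The economies-of-scale argument in point 2 can be made concrete with a back-of-the-envelope calculation. The figures below are purely illustrative assumptions, not real datacenter costs:

```python
# Back-of-the-envelope economies of scale for a dense cloud configuration.
# All figures are hypothetical assumptions for illustration only.

FIXED_COST_PER_YEAR = 5_000_000   # facility, power, platform engineering (assumed)
MARGINAL_COST_PER_VM = 20         # incremental yearly cost per virtual machine (assumed)

def cost_per_vm(vms: int) -> float:
    """Yearly cost per VM: fixed costs amortized over the VM count."""
    return FIXED_COST_PER_YEAR / vms + MARGINAL_COST_PER_VM

# The denser the configuration, the cheaper each unit of service becomes.
for vms in (1_000, 10_000, 100_000):
    print(f"{vms:>7,} VMs -> {cost_per_vm(vms):>9,.2f} per VM per year")
```

Only operators running at very large scale can push the unit cost low enough to be attractive to service consumers – which is exactly the dynamic that favors the Cloud Service Providers.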
Why are these deliberations important for the Cxx decision makers? Well, if IT is morphing to IT-aaS, then one needs to carefully think about the current IT value chain that is being employed in organizations of all kinds. Does it make sense to invest in the current mode of thinking about IT, or should there be a massive investment in cloud paradigms for your organization? The first step, and a very important one it is too, is the move to the Private Cloud.
There is a substantial level of learning that is required from the IT departments all the way to the CIO/CTO of the organization. New ways of providing services, new ways of thinking about your supply chain for IT, new demand vectors in the organization, new ways of thinking of competitive advantage. Perhaps there are other means to utilize IT ‘size and experience’ as a cloud provider yourself within the industry concerned. The very definition of a ‘market’ and ‘market segmentation’ is open to interpretation.
This learning is essential for your organization to determine the best way of using the assets you currently have, and to prepare for the jump to Public/Hybrid Clouds and a sensible divestiture of your IT capital assets. Which service models will you then employ? Which IT management techniques will be used? Generally, being aware of the IT resources that you as CIO/CTO have at your disposal to solve the challenges the business expects you to deliver on will be critical knowledge!
Project managers, business transformation leaders, developers and creative artists, product innovators and new product development shops need to think about things on a hitherto unprecedented scale. Forget the old mantra of 'oh, we can't do that – it would be too expensive, too complex, the technology does not exist' and move to 'everything is possible – let's see how we can do that'. All the old barriers to innovation (technologically speaking) are crumbling.
Thinking about the Cloud is very important at the Cxx level right now. This is more than simply marketing, product pushing, or the next hype cycle. The Cloud provides an abstraction model upon which a CIO and the Business in general can have a meaningful dialogue for future deployment of resources (perhaps as far out as the next 10-15 years even).
There is a lot of CIO attention on the ‘how’ to get to the Cloud and indeed with ‘which’ technologies. However, technology is looking after itself very nicely at the moment, thank you. The area where I see few clients expressing intentions is in ‘the Cloud is there now – what do we do with it?’
This mindset shift, and the reframing of existing challenges, will be the key to energizing the ability of the CIO/CTO to challenge the Business for new ideas (instead of the other way round, as is currently the case).
What would the Business do if all the current constraints, annoyances and IT limitations were removed?
Business is 100% free to innovate how and when it wants at the pace it wants to. IT is not just the cost center, or simply the enabler for the Business, it is the business! IT is ready to reconfigure assets on the fly (the Cloud’s elastic scalability characteristic) to put those plans into action immediately! IT is ready and open for Business!


The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC.

The Journey to the Cloud – Cloud Platform Design Patterns

Imported; originally published June 24, 2010.


As I am typing out of the EMC offices in Berlin, I can still see before my eyes the fantastic goal that Germany scored against Ghana last night. Wow, what a game!

A second thought that comes into my mind is the tremendous wave of information that the football (soccer) World Cup has spawned throughout the globe. It is exactly this type of dynamic that Public Cloud environments were designed for. The ability to have an infrastructure that dynamically flexes from zero to a billion hits a second and probably more in every possible media format that one can imagine!
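That "flex from zero to a billion hits" behavior boils down to choosing capacity from observed demand. A minimal sketch, using assumed per-node throughput and utilization figures (real autoscalers and their APIs differ):

```python
# Toy elastic scaling: derive the node count from the observed request rate.
# CAPACITY_PER_NODE and TARGET_UTILIZATION are assumed illustrative values.

CAPACITY_PER_NODE = 50_000    # requests/sec one node can serve (assumed)
TARGET_UTILIZATION = 0.7      # run nodes at 70% to leave headroom for spikes

def nodes_needed(requests_per_sec: int) -> int:
    """Scale out as demand grows, and back to zero when it disappears."""
    if requests_per_sec <= 0:
        return 0
    effective_capacity = round(CAPACITY_PER_NODE * TARGET_UTILIZATION)
    # Ceiling division: always provision enough nodes for the load.
    return -(-requests_per_sec // effective_capacity)

for load in (0, 10_000, 1_000_000, 1_000_000_000):
    print(f"{load:>13,} req/s -> {nodes_needed(load):>6,} nodes")
```

The point of the Public Cloud is that this flexing happens across a shared pool, so no single consumer has to own the peak capacity.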

In many of the discussions I have with clients regarding cloud adoption, there is still the perception that clouds are new and therefore represent a fundamental risk to the operating environment of the firm. Issues of security clearly rank among the top concerns, as does the idea of multi-tenancy. Indeed, another question that we hear is: why EMC for the Cloud? Well, let me answer the various questions as I see them.

Looking back on environments that were designed for massive usage with an on-demand model in mind, it occurred to me that there are other design patterns for the Cloud that have been around in infant form for some time. Banks and credit card companies operate large-scale, universally accepted networks in the form of ATMs, merchants, retailers, card issuers and so on. This is probably one of the largest-scale Private Clouds that comes to mind.

On the Public Cloud side, the entire Internet itself could be viewed this way. TV networks themselves could be considered a Public Cloud of sorts, although some of the atypical properties associated with clouds could arguably be said to be missing.

Looking at hybrid clouds, airline reservation systems accessible through publicly available web interfaces are a prime example. Through this 'Cloud', the enormous resources in the actual data centers belonging to enterprises such as American Airlines, Lufthansa and others are made available, although they are actually private resources held closely by those very firms.

This clearly indicates that Cloud adoption has worked, in one fashion or another, for all the above organizations on mind-boggling scales. There should be no question that the Cloud exists and is evolving. Equally, senior management should be in no doubt that the Cloud must be on the corporate strategic agenda.

It is there, it is evolving, and those firms actively moving to adoption and usage are also the innovators of new business models sweeping away the debris of archaic 'IT infrastructure thoughts' in their respective industries.

They are very flexible, they are very fast, and by the time you realize they are at your doorstep it may well be too late to do anything about their business disrupting models. Consumers of all sorts are eager for these changes – they see tremendous savings to be made at the check-out counter and ultimately more money and value left in their own wallets/purses. That has got to be a good reason for adoption!

This does not exclude the obvious Cloud providers that come to mind – the Amazons, Googles, Microsofts and Terremarks of this world. They clearly have a very strong role to play in the areas of choice and fit-for-use for your organization.

There was another example that came to mind – a precursor to the current cloud concepts that are penetrating corporate thinking due to the enormous potential value that could be extracted: EMC's very own Centera platform!

The Centera product originally came from a small innovative firm called FilePool, founded in the late 90s by Jan Van Riel and Paul Carpentier and based in the Brussels, Belgium (Mechelen) area. EMC acquired FilePool for its ground-breaking Content Addressable Storage (CAS) technology in April 2001.

The founders were reunited in March 2008 in another company called Caringo, where they continue CAS technology innovation. I actually had the pleasure of visiting the Mechelen facilities after EMC acquired the firm. A great development spirit, with some really forward-looking thought leaders happily chatting about CAS and what they were going to do next – oh, and did I mention there were so many Centera systems in one room!

Centera is built around the idea of simple storage and networking nodes pooled by a fantastic operating system called CentraStar. The system, built up rack after rack, operates as one large unit, with open Centera APIs allowing data to be stored from a huge array of platforms – and of course the web.
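The CAS principle at Centera's heart – addressing fixed content by a cryptographic digest of the content itself rather than by a file path – can be sketched in a few lines of Python. This is a toy in-memory model for illustration only; it is not the actual Centera/CentraStar API or SDK:

```python
import hashlib

class ContentAddressedStore:
    """Toy in-memory content-addressed store (hypothetical;
    not the real Centera SDK)."""

    def __init__(self):
        self._objects = {}

    def put(self, data: bytes) -> str:
        # The address is derived from the content itself, so
        # identical content is stored only once (deduplication).
        address = hashlib.sha256(data).hexdigest()
        self._objects.setdefault(address, data)
        return address

    def get(self, address: str) -> bytes:
        data = self._objects[address]
        # Integrity check: the content must still hash to its address.
        if hashlib.sha256(data).hexdigest() != address:
            raise ValueError("content does not match its address")
        return data

store = ContentAddressedStore()
addr = store.put(b"fixed content, e.g. a scanned invoice")
assert store.get(addr) == b"fixed content, e.g. a scanned invoice"
# Storing the same bytes again yields the same address (dedupe).
assert store.put(b"fixed content, e.g. a scanned invoice") == addr
```

Because the address is derived from the content, identical objects deduplicate automatically and any tampering is detectable – properties that make CAS a natural fit for compliant, fixed-content archiving.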

In the latest iteration with the release of the EMC Centera® Virtual Archive we start to see the Centera ability to manage Petabytes of information scaled to new heights, with policy driven archives being tightly managed to offer secure mass storage of fixed content. Looking at some of the features (I quote):




  • Multi-petabyte capacity – Scale easily up to four clusters to meet expanding growth requirements.
  • Federation of multiple clusters – Add capacity without service disruption.
  • Federation managed as one archive – Gain management efficiencies.
  • Application interaction with single virtual archive – Deliver seamless access to information.
  • Multi-data center support – Overcome space, distance, and technology limitations.


One can see the Cloud elements regarding multi-tenancy, federation, massive scalability and flexibility in configuration from very small to the largest federated archive platform for fixed content.

Clearly the modern interpretation of the Cloud is a more sophisticated construct, shaped by forms of information and consumption behavior that dwarf those of the past. The challenges surrounding information management are similarly complex and multi-faceted. However, the Cloud paradigms of choice, efficiency and control through vastly simplified interfaces are designed to tackle ever-increasing scale in a more information-efficient fashion.

EMC, never standing still, has taken its cloud storage ideas further through the Atmos offerings, particularly in the policy-based arena, the pay-as-you-go consumption model and multi-tenancy capability. Infinite scale is also definitely nice to have.

So for senior management, the ideas surrounding the Cloud and its appropriateness for their business should not be a show stopper – there are well established design patterns out there. Clearly the ideas of the Cloud continue to evolve as industrious humans everywhere force evolution through use.

The challenges of managing the cloud are well established, as are many of the solution elements and frameworks to facilitate a smooth transition to the associated new infrastructure paradigms. The ecosystem surrounding Cloud offerings is also evolving, with many players working together to be able to drive value through the Cloud proposition.

EMC Consulting is leading the charge in getting management to understand and envision a future state for their organization using such constructs, so that they remain competitively relevant. We are working hard with other industry leaders to help convey the capabilities inherent in the new paradigms, and indeed to use this to shape the products and solutions of the future.

EMC has a rich tradition of virtualization in various forms, cloud technologies built right into the core infrastructure products, and a portfolio of information management solutions that is second to none in the market. That's why EMC is so well placed to approach the Cloud from the information perspective to be able to ensure that there are solutions to actual problems both now, and those arising in the growing information universe of the future.

However, the current palette of products, solutions and integrated offerings already gives organizations the facility to rapidly step onto the Cloud and be instrumental in shaping future business models. They are powerful, scalable and cost effective right from day-1.




The Journey to the Cloud – The Need for Speed & the Private Cloud Platform

Imported; originally published June 15, 2010.


While cruising on the German autobahn/motorway at 280 km/h over the weekend, arriving home at record speed from a wedding, I was reminded how the ability to travel rapidly in Germany, traffic permitting, is a strategic advantage. The original fast road network was built to convey troops and vehicles at best speed to any point of strategic significance during World War II.

The roads were designed to require low maintenance through specially reinforced concrete, to handle vehicles moving at speed by banking the curves, and still to preserve the countryside where possible by not cutting straight through the hills, forests and mountains.

As I am writing this, I am waiting to board a plane that has been delayed due to French air traffic controllers going on strike. The looming delay is in stark contrast to the ease of travelling at speed. Real world analogies are interesting to note particularly as they are mirrored in the IT and virtual infrastructure worlds.

These points are of particular relevance when one is designing a Cloud infrastructure platform. The inherent elements, their integration and their operation can result either in serious delays or in delivering the competitive advantage of speed. With competitive advantage of critical significance to organizations around the world, the implication is the 'Need for Speed'.

In the blog ‘The Journey to the Cloud - Dual Vendor Strategy & VBlock Integrated Private Cloud Platform’, I discussed the traditional idea of following a dual vendor strategy in infrastructure acquisition, and how that plays out in highly integrated Cloud infrastructure platforms such as the Cisco/EMC/VMware VBlock.

A further element of the Cloud platform is the ability to be able to operate at tremendous speeds bearing in mind the very high concurrent workloads running. This requires one of at least two approaches to create such an infrastructure:


  1. Build out your own Cloud platform infrastructure using commodity hardware and develop your own processes – and hope for the needed speed under peak and average loads
    • This typically involves network switches, SAN switches, servers/blades, cabling, storage and a variety of management tools.
    • Disaster recovery and business continuance features are mandatory with such concurrent workloads.
  2. Use a multi-vendor integrated Cloud platform infrastructure and align processes suiting your organizational needs – let the vendors with their greater focused resources guarantee the speeds under all dynamic conditions within top-end parameters.
    • An example of this is the Cisco/EMC/VMware VBlock construct.

The build-your-own approach sounds feasible to most organizations, as they employ very gifted technical administrators, designers and architects. However, the effort and potential disruption to the organization from such a substantial undertaking can be immense. One further element that is typically missing is an appreciation of scale – what happens when there are large-scale spikes in load, and can the processes also scale as needed? These are not simple questions to answer.

Further the ‘build your own’ approach requires substantial effort to be able to create an end-2-end resilient infrastructure encapsulating disaster recovery concepts and flex-on-demand features to cope with load or demand spikes. All elements have to be verified on compatibility matrices and that includes firmware, drivers, virtualization platform and networking elements.

This is a lot of work we are talking about, folks. You may end up with a good-enough-for-now infrastructure, but scaling issues are already being designed into such a patchwork infrastructure simply through the approach. Knowing and working with these gifted people, I have no doubt that they would eventually succeed. But that process takes considerable time and commitment from the organization.

Thinking about this from a business perspective: long timescales are involved, time-2-value is far from short, and after significant corporate investment the final result is not clear in terms of deliverables. There is no one to ask questions of should the core engineering team be broken up, reassigned, or 'outsourced' over time. Support channels are not clear. When there is a problem, will the patchwork of vendors bounce it around between them? New workloads coming onto the platform may require rework in the form of qualification, redesign, and major component/platform replacement if not fit-for-purpose. These are very serious issues, with the potential for internal stakeholders to shun the 'built internally' virtualization platform.

This also causes ripple effects in an organization asked to respond structurally to this massive engineering effort, perhaps without the resources available. Indeed, the IT organization itself may have the same resource concerns. The key question here is whether business-as-usual (BAU) will be affected. These effects are not to be underestimated.

Keep in mind that the final objective was not to have a massive engineering effort, but the capability to decouple workloads from their physical infrastructure and be able to rapidly provision new workloads in a highly effective manner whilst eliminating/repurposing wasted resources. The strategic aim was to have agility and flexibility with a keen cost efficiency focus.

At many customer sites I observe a reluctance to use such an unproven platform. The business in particular, if it does not have high trust in the IT organization, will respond by delaying deployment of new virtual machines, preferring instead a physical server. Existing physical machines and their workloads will suffer delays in being migrated to the virtual infrastructure (P2V – Physical-2-Virtual conversion).

Over time this results in double cost structures being endured: one for the old physical environment and one for the new virtual infrastructure environment that is not being used to capacity. This is not good news for the business. Indeed, critical market opportunities may be lost through this stalling of the transformation to the virtual infrastructure. And we haven't even discussed security or data protection (backup and restore) yet.

Until the release of the multi-vendor integrated Cloud platform, there were only a few partially integrated solutions in the market. The VBlock, as the first integrated Cloud infrastructure platform of its kind, has some significant advantages. Bearing in mind that we are talking about workloads running in a container of some sort, such as a virtual machine, there is already a complete decoupling of the workload from the physical infrastructure. Taking the same scenario as above, which is based on actual experience of many virtual infrastructure deployments in previous years, we can see how that plays out with the VBlock.

The VBlock is engineered from the ground up to be able to run thousands of workloads concurrently with all the associated networking and storage demands covered. Further, inherent in its design are all the disaster recovery, business continuance and management tool framework from day 1 of its operation. Scale and performance are already there. The ability to cope with spikes is covered through the integration of high-end components in specific reference configurations (known as models).

As the VBlock is built with reference architectures in mind, all software and hardware is guaranteed and supported through a single support channel. There is nothing for the organization to do. On an ongoing basis, new firmware, software (such as VMware vSphere itself) and hardware continue to be qualified and tested on a 24 hour basis. This ensures backward and forward compatibility. Again, the organization does not have to do this. From a strategy point of view, the organization has managed in this case to ‘acquire’ the entire engineering resources of Cisco, EMC and VMware to ensure that their Cloud platform remains up and running.

New infrastructure management concepts, networking technologies, storage and data backup options are validated and then made available on the VBlock platform. Again, the organization does not need to think about these activities in the form of the VBlock construct.

Enough about technology (I could really go further on that as needed). Regarding the business, the true cloud driver, there are significant financial, tactical and strategic advantages. In a short list these are outlined below:


1) Financial Advantages
  a. No huge engineering project needs to be launched – the solution can be bought off the shelf
  b. No need to recruit highly specialized engineering/management resources
  c. Only a short-term engagement with specialized consultants to define processes
2) Tactical Advantages
  a. No need to strip teams to staff the engineering effort
  b. Business as usual is not negatively affected
  c. The platform is rapidly in place, so physical server consolidation can proceed at the pace the organization can maintain
3) Strategic Advantages
  a. No delays in running concurrent workloads = Customer Satisfaction
  b. Reuse concept strongly embedded in the infrastructure = a strategic platform built for multiple usage scenarios, supporting growth of the business
  c. Agility and flexibility inherent in end-2-end form
  d. Disaster recovery and business continuance engineered into the platform = Compliance and Governance concerns addressed (some, not all – this is a big area)

The need for speed in all its facets translates to enduring competitive advantage. Organizations should seriously evaluate their approaches to Cloud infrastructure build-out and the structuring of internal engineering efforts. A focused engagement with consultants and the VBlock construct will bring an organization a lot further for a lot less cost. EMC Consulting can of course engage in this envisioning process taking a cradle-2-grave approach. We are not the only ones. There are other qualified vendors capable of delivering VBlock infrastructures. Keep in mind that it is really the processes and the organizational alignment that brings things to life!



The Journey to the Cloud - Dual Vendor Strategy & VBlock Integrated Private Cloud Platform

Imported; originally published June 07, 2010.


Several previous blogs elaborated on typical themes encountered with clients on their own journey to the cloud. The last weeks have provided the chance to discuss the VBlock infrastructure as a fully integrated platform upon which to build a client's private cloud strategic vision. Clients have a very clear appreciation of how they can utilize the private cloud environment and add value downstream to their own clients.

However, a recurring theme in these discussions is how to deal with the organization's dual vendor strategy. This is a perfectly valid question, as the dual vendor strategy has underpinned the acquisition and supplier landscape strategy of these organizations for many years. Indeed, budgetary assumptions for CAPEX and OPEX are potentially forecast on the assumption that there will be at least two vendors in the picture, possibly performing the same role for different sets of data/applications/clients, thereby effectively de-risking the acquisition.

This seems to make sense in a perfectly equitable world. Indeed it also seems sensible from a stakeholder and risk management approach. However, the basic premise of an organization is to be frugal with resources such that by adding value to the raw inputs through processing within the organization, greater value can be derived for the key stakeholders of the organization.

This would imply that pursuing a dual vendor strategy in a carte-blanche manner would potentially blind an organization to strategic platforms/partners. Strategic in the sense they support and allow fulfillment of an organization’s mission and business objectives; not simply strategic in the sense that they are the selected partner of the concern through IT selection criteria.

This is an important distinction to make. The consultant plays a critical role in enriching the experience beyond any pure technical play, portraying how a given solution can enhance the ability of the organization to meet its objectives in a manner that is equitable to its key stakeholders. The consultant effectively brings the strategic side of the equation to life.

Into this area comes the integrated offering of Cisco, EMC and VMware, through the Virtual Computing Environment Coalition initiative in the form of the VBlock reference architectures. In the meantime, there are also other VBlock-like integrated offerings coming to market with other leading-edge components e.g. Cisco, NetApp and VMware.

Make no mistake: the VBlock can be, and is, a fully integrated offering that allows an organization to build out the infrastructure elements of its Private Cloud strategy. The Private Cloud itself is an integrated concept whereby no single vendor, currently, can provide all possible components. Integration is therefore the key word. This integration is most strongly shown in the joint service center for streamlined support – with a no-passing-the-buck attitude.

Integration of leading 'best-of-breed' components in a tested reference architecture format allows the customer to have this environment literally 'dropped in', letting most datacenter assets and virtual desktop environment assets be densely consolidated in a datacenter environment, with the economies of scale cascaded to all virtualized assets. This includes virtual servers, the network, desktops, virtualized applications, virtualized storage, security and indeed the virtual datacenter.

In discussions with clients there is a common misapprehension that the VBlock is an EMC-only construct. This is patently wrong. EMC has, by partnering with leading vendors, created an ecosystem that is tested and proven 'fit-for-purpose' to de-risk design and implementation for customers. This construct was assembled based on customer feedback regarding the difficulties they had in assembling their own virtual, and ultimately private cloud, infrastructure, combined with the tremendous disruption to their existing IT operations.

The VBlock is an integrated offering that fully respects companies' multi-vendor acquisition and partnering policies. The VBlock does not belong to EMC, nor indeed to Cisco or VMware. It is an integrated, tested and blueprinted architecture. VCE fully expects other integrated offerings to be announced in time with elements from completely different vendors. This is not excluded – indeed, VCE was created to encourage and foster this type of joint operation to make things simpler for clients.

In this light, simply dismissing the VBlock as a 'single vendor play' is not just plain wrong; in the current intensely competitive market environment, the IT organization may be doing a huge disservice to the business that has entrusted the role of creating and sustaining its IT competitive advantage to the CIO/CTO.

Looking under the covers of the integrated offering, one sees the need to clear away the current 'thought & strategy' cobwebs around building out IT capabilities: the need to move away from 'IT religion' to what works best for organizational contingencies, based on facts and ultimately put to the test in the form of a Proof of Concept.

For our clients at EMC Consulting, once we have gotten away from the idea of a single vendor and look at the solution in terms of the unique challenges being faced by the firm, we can start the envisioning phase of the journey to the private cloud. This examines the competitive needs of the organization through the eyes of a CIO/CTO and senior management to see how the cloud context can be established. The cultural transition for traditional IT groups to the fully decoupled virtualized Cloud environment is a critical success factor that needs to be put openly on the table.

In essence,


  • The current IT strategy for multi-vendor management can be respected
  • The private cloud still brings tremendous game-changing functionality
  • Vendor lock-in is a complete contradiction of the Private Cloud concept and the VBlock integration
  • Using the most appropriate private cloud infrastructure, commensurate with the requirements of the organization, resonates with the VBlock integration concept and the idea of multiple VBlock models (there is no one-size-fits-all)
  • Organizations should not simply block/reject a proven, field-tested integrated solution in light of a carte-blanche 'dual vendor' strategy – keep an open mind to help yourselves

EMC Consulting is there to help you make that journey a successful one. The journey requires a joint process with the client to envision the desired ‘future state’. Leading market players take this customer input very seriously to ensure the next generation integrated offerings provide the required functions and give lots more choice for clients in their specific implementations without the risk of going-it-alone.


The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC.