While cruising on the German autobahn over the weekend at 280 km/h, arriving home from a wedding in record time, I was reminded how the ability to travel rapidly in Germany, traffic permitting, is a strategic advantage. The original fast road network was built to convey troops and vehicles at best speed to any point of strategic significance during World War II.
The roads were designed to require low maintenance through specially reinforced concrete, to handle vehicles moving at speed on curves by banking the road, and still to preserve the countryside where possible by not cutting straight through hills, forests and mountains.
As I write this, I am waiting to board a plane delayed by French air traffic controllers going on strike. The looming delay is in stark contrast to the ease of travelling at speed. Such real-world analogies are interesting to note, particularly as they are mirrored in the IT and virtual infrastructure worlds.
These points are of particular relevance when designing a Cloud infrastructure platform. Its inherent elements, their integration and their operation can either cause serious delays or deliver the competitive advantage of speed. Because competitive advantage is of critical significance to organizations around the world, there is a genuine ‘Need for Speed’.
In the blog ‘The Journey to the Cloud - Dual Vendor Strategy & VBlock Integrated Private Cloud Platform’, I discussed the traditional idea of following a dual vendor strategy in infrastructure acquisition, and how that plays out in highly integrated Cloud infrastructure platforms such as the Cisco/EMC/VMware VBlock.
A further element of the Cloud platform is the ability to operate at tremendous speed, bearing in mind the very high number of concurrent workloads running. Creating such an infrastructure requires one of (at least) two approaches:
1) Build out your own Cloud platform infrastructure using commodity hardware and develop your own processes – and hope for the needed speed under peak and average loads.
a. Typical components: network switches, SAN switches, servers/blades, cabling, storage and a variety of management tools.
b. Disaster recovery and business continuance features are mandatory with such concurrent workloads.
2) Use a multi-vendor integrated Cloud platform infrastructure and align processes to suit your organizational needs – let the vendors, with their greater focused resources, guarantee the speeds under all dynamic conditions within top-end parameters.
a. An example of this is the Cisco/EMC/VMware VBlock construct.
To most organizations the two approaches can sound equally achievable, as they employ very gifted technical administrators, designers and architects. However, the effort and potential disruption to the organization from such a substantial undertaking can be immense. One element that is typically missing is an appreciation of scale – what happens when there are large-scale spikes in load, and can the processes scale as needed? These are not simple questions to answer.
Further, the ‘build your own’ approach requires substantial effort to create an end-2-end resilient infrastructure encapsulating disaster recovery concepts and flex-on-demand features to cope with load or demand spikes. Every element has to be verified against compatibility matrices, including firmware, drivers, the virtualization platform and networking elements.
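To make that qualification burden concrete, here is a minimal, purely illustrative Python sketch of checking an inventory against a compatibility matrix. The component names, firmware versions and matrix entries are all hypothetical, not drawn from any real vendor matrix.

```python
# Illustrative only: component names, firmware versions and matrix
# entries below are hypothetical, not from any real vendor matrix.

# Supported combinations: (component, firmware) -> set of platform
# versions it has been qualified against.
COMPAT_MATRIX = {
    ("san_switch", "6.2.1"): {"vSphere 4.0", "vSphere 4.1"},
    ("blade_bios", "1.4"): {"vSphere 4.1"},
    ("nic_driver", "2.0.3"): {"vSphere 4.0"},
}

def unsupported_components(inventory, platform):
    """Return (component, firmware) pairs not qualified for the platform."""
    issues = []
    for component, firmware in inventory:
        qualified = COMPAT_MATRIX.get((component, firmware), set())
        if platform not in qualified:
            issues.append((component, firmware))
    return issues

inventory = [("san_switch", "6.2.1"), ("blade_bios", "1.4"), ("nic_driver", "2.0.3")]
print(unsupported_components(inventory, "vSphere 4.1"))
# -> [('nic_driver', '2.0.3')]: one component still needs requalification
```

Even this toy version hints at the real problem: every firmware update, driver release or platform upgrade re-opens the matrix, and the ‘build your own’ team owns that churn forever.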
This is a lot of work we are talking about, folks. You may end up with a good-enough-for-now infrastructure, but scaling issues are inherently designed into such a patchwork infrastructure by the approach itself. Knowing and working with these gifted people, I have no doubt that they would eventually succeed. But that process takes considerable time and commitment from the organization.
Thinking about this from a business perspective, the timescales involved are long, the time-2-value is far from short, and after significant corporate investment the final result is still unclear in terms of deliverables. There is no one left to ask should the core engineering team be broken up, reassigned or ‘outsourced’ over time. Support channels are not clear: when there is a problem, will the patchwork of vendors bounce it around between them? New workloads coming onto the platform may require rework in the form of qualification, redesign, or major component/platform replacement if it is not fit-for-purpose. These are serious issues, with the potential for internal stakeholders to shun the ‘built internally’ virtualization platform.
There are also ripple effects in an organization asked to respond structurally to this massive engineering effort when it may not have the resources available; indeed, the IT organization itself may face the same resource constraints. The key question is whether business-as-usual (BAU) will be negatively affected. These effects are not to be underestimated.
Keep in mind that the final objective was never a massive engineering effort, but the capability to decouple workloads from their physical infrastructure and to rapidly provision new workloads in a highly effective manner, whilst eliminating or repurposing wasted resources. The strategic aim was agility and flexibility with a keen focus on cost efficiency.
At many customer sites I observe reluctance to use such an unproven platform. The business in particular, if it does not have high trust in the IT organization, will respond by delaying the deployment of new virtual machines, preferring physical servers instead. Existing physical machines and their workloads will suffer delays in being migrated to the virtual infrastructure (P2V – Physical-2-Virtual conversion).
Over time this results in double cost structures being endured: one for the old physical environment and one for the new virtual infrastructure that is not being used to capacity. This is not good news for the business. Indeed, critical market opportunities may be lost through this stalling of the transformation to the virtual infrastructure. And we haven’t even discussed security or data protection (backup and restore) yet.
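The double cost structure is easy to quantify with some simple arithmetic. The figures below are invented purely for illustration; real numbers vary widely by organization.

```python
# Hypothetical monthly figures to illustrate the double cost structure.
physical_monthly_cost = 100_000   # legacy physical estate still running
virtual_monthly_cost = 60_000     # new virtual platform, under-utilized
migration_months_planned = 6      # planned P2V migration window
migration_months_stalled = 18     # stalled migration window

def overlap_cost(months):
    """Total spend while both environments run in parallel."""
    return months * (physical_monthly_cost + virtual_monthly_cost)

extra = overlap_cost(migration_months_stalled) - overlap_cost(migration_months_planned)
print(f"Extra spend from stalling: {extra:,}")
# 12 extra months of double running = 1,920,000 in this invented scenario
```

The point is not the invented numbers but the shape of the curve: every month the P2V programme stalls, the organization pays for both environments in full.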
Until the release of the multi-vendor integrated Cloud platform, there were only a few partially integrated solutions in the market. The VBlock, as the first integrated Cloud infrastructure platform of its kind, has some significant advantages. Bearing in mind that we are talking about workloads running in a container of some sort, such as a virtual machine, there is already a complete decoupling of the workload from the physical infrastructure. Taking the same scenario as above, which is based on actual experience of many virtual infrastructure deployments in previous years, we can see how it plays out with the VBlock.
The VBlock is engineered from the ground up to run thousands of workloads concurrently, with all the associated networking and storage demands covered. Further, the disaster recovery, business continuance and management tool frameworks are inherent in its design from day 1 of operation. Scale and performance are already there. The ability to cope with spikes is covered through the integration of high-end components in specific reference configurations (known as models).
As the VBlock is built on reference architectures, all software and hardware is guaranteed and supported through a single support channel. There is nothing for the organization to do. On an ongoing basis, new firmware, software (such as VMware vSphere itself) and hardware continue to be qualified and tested around the clock, ensuring backward and forward compatibility. Again, the organization does not have to do this. From a strategy point of view, the organization has in effect ‘acquired’ the entire engineering resources of Cisco, EMC and VMware to keep its Cloud platform up and running.
New infrastructure management concepts, networking technologies, storage and data backup options are validated and then made available on the VBlock platform. Again, with the VBlock construct the organization does not need to concern itself with these activities.
Enough about technology (I could go much further on that as needed). Regarding the business – the true Cloud driver – there are significant financial, tactical and strategic advantages, outlined in the short list below:
1) Financial Advantages
a. No huge engineering project needs to be launched when the solution can be bought off the shelf
b. No need for highly specialized engineer/management resources to be recruited
c. Short term engagement with specialized consultants to define processes
2) Tactical Advantages
a. No need to strip teams to staff the required engineering effort
b. Business as usual is not negatively affected
c. The platform is rapidly in place, so physical server consolidation can be undertaken at a pace the organization can maintain
3) Strategic Advantages
a. No delays in running concurrent workloads = Customer Satisfaction
b. Reuse concept strongly embedded in the infrastructure = a strategic platform built for multiple usage scenarios that supports growing the business
c. Agility and flexibility inherent in end-2-end form
d. Disaster recovery and business continuance engineered into the platform = addresses Compliance and Governance concerns (some, not all, of them – this is a big area)
The need for speed, in all its facets, translates into enduring competitive advantage. Organizations should seriously evaluate their approaches to Cloud infrastructure build-out and the structuring of internal engineering efforts. A focused engagement with consultants and the VBlock construct will take an organization a lot further for a lot less cost. EMC Consulting can of course engage in this envisioning process, taking a cradle-2-grave approach. We are not the only ones; there are other qualified vendors capable of delivering VBlock infrastructures. Keep in mind that it is really the processes and the organizational alignment that bring things to life!