In an earlier post on CPU power I talked about the emerging power of the x86 platform and its ability to handle even the most demanding workloads in a fully virtualized environment.
Recently, AMD announced that its long-awaited 16-core (real cores, no hyper-threading) chip, codenamed "Interlagos", is finally available as the Opteron 6200. This is significant for the industry, not just in the speeds-and-feeds area, but also in genuinely stepping up to the plate on enterprise virtualization and Cloud needs in particular.
The ability to continuously add computing power within ever-tighter thermal envelopes using the same processor socket, and to wrap that in enough intelligence to deal with service-provider issues such as energy management, is critical. It follows the established pattern of upgrading the underlying hardware for a doubling in capability.
The fact that the major virtualization players already support the AMD Opteron 6200 out of the box is great. It means this forward-planning exercise can be done up front when sizing the computing base for your cloud.
For new entrants - those that have so far held out on full-blown cloud infrastructures - this very capable compute platform provides a means of entering the market rapidly and with power. Increasing core counts are a market mechanism for reducing barriers to entry - hence potentially more competitors and potentially more choice!
What Does This All Mean for Virtualization and the Cloud?
Virtualization provides the mechanism for decoupling the workload from the physical platform, allowing you as a potential service provider to slot in new infrastructure platforms and pieces. AMD hit the nail on the head here by keeping the same socket as in previous generations. This provides a significant cost advantage when upgrading capacity.
Upgrading a Cloud server farm in this way provides the ability to virtualize more intensive workloads that were not previously candidates. Further, with a higher number of workloads per physical server, fewer hosts are required. This does not necessarily mean the old capacity is simply ditched; it can perhaps be offered at a lower price point until its costs are recouped. Once at that point, those servers could be replaced with the next generation of compute!
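As a back-of-the-envelope sketch of that "fewer hosts" argument - every figure below is my own illustrative assumption, not a vendor benchmark - the host-count saving from a higher consolidation ratio works out like this:

```python
# Hypothetical consolidation estimate. All figures are assumed for
# illustration only - not AMD benchmarks or real capacity data.

def hosts_needed(total_vms: int, vms_per_host: int) -> int:
    """Physical hosts required, rounding up any partial host."""
    return -(-total_vms // vms_per_host)  # ceiling division

total_vms = 400           # workloads to place (assumed)
old_vms_per_host = 10     # consolidation ratio on the old hosts (assumed)
new_vms_per_host = 20     # ratio after a core-count-doubling upgrade (assumed)

old_hosts = hosts_needed(total_vms, old_vms_per_host)
new_hosts = hosts_needed(total_vms, new_vms_per_host)

print(f"Hosts before upgrade: {old_hosts}")
print(f"Hosts after upgrade:  {new_hosts}")
print(f"Hosts freed for resale or retirement: {old_hosts - new_hosts}")
```

With these assumed numbers the same 400 workloads fit on half the hosts, which is exactly the pool of machines that can be sold off or repriced until costs are recouped.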
This provides a perfect mechanism for implementing your own tick-tock upgrade cycle, in Intel parlance. One part of the server park does not require rip-and-replace: simply upgrade the processor in the existing socket (tick). The other part can then benefit from the newer technologies that do require rip-and-replace (tock).
This has a massive underlying impact on financial management and hardware upgrade cycles. In most organisations server replacement cycles are measured in years. In the traditional model of IT - without chargeback, and with significant CAPEX or long lease cycles - it is simply too expensive to replace servers on a shorter cycle to capture the benefits of newer technology. The result is that a generation gets skipped, and the eventual change is larger and needs greater effort than a step-wise upgrade would have.
Processors with increasing core counts will be cropping up in all kinds of products - for example, storage servers needing more compute power to provide deduplication and encryption goodness in the same physical footprint. Enterprise array features delivered in software, in compute-intensive forms, are another disruptive pattern emerging in the marketplace: Pillar Axiom (now Oracle) used AMD to simply double its performance with a chip upgrade. That is why cores still matter.
However, for cloud service providers, this tick-tock cycle allows much shorter replacement cycles - say six months. Cloud providers need to be really good at calculating the business case in terms of the energy saved, the effort needed for the change, and downstream pricing models, to stay competitive and relevant in a market that is rapidly reaching commodity-based pricing.
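A minimal sketch of that business-case calculation follows. Every figure in it - upgrade cost, labour, power draw, energy price, per-VM revenue - is a hypothetical placeholder I have chosen for illustration, not real AMD or market data:

```python
# Hypothetical business case for a same-socket CPU upgrade across a small farm.
# Every figure here is an illustrative assumption, not real vendor or market data.

hosts = 20
upgrade_cost_per_host = 1500.0    # new CPUs, same socket (assumed, USD)
labour_cost_per_host = 200.0      # engineer time per swap (assumed, USD)

old_power_kw = 0.40               # average draw per host before (assumed)
new_power_kw = 0.30               # draw after, tighter thermal envelope (assumed)
energy_price = 0.12               # USD per kWh (assumed)
hours_per_month = 730             # average hours in a month

extra_vms = 200                   # added sellable capacity from doubled cores (assumed)
revenue_per_vm_month = 10.0       # price per VM per month (assumed, USD)

total_upgrade_cost = hosts * (upgrade_cost_per_host + labour_cost_per_host)
monthly_energy_saving = (hosts * (old_power_kw - new_power_kw)
                         * hours_per_month * energy_price)
monthly_extra_revenue = extra_vms * revenue_per_vm_month
payback_months = total_upgrade_cost / (monthly_energy_saving + monthly_extra_revenue)

print(f"Total upgrade cost:    ${total_upgrade_cost:,.0f}")
print(f"Monthly energy saving: ${monthly_energy_saving:,.2f}")
print(f"Monthly extra revenue: ${monthly_extra_revenue:,.2f}")
print(f"Payback period:        {payback_months:.1f} months")
```

Notably, with these assumed numbers the energy saving alone would take years to pay back; it is the extra sellable capacity that carries the case - which is exactly why downstream pricing models matter as much as the electricity bill.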
Listening to clients and peers, there is a belief that it does not really matter which x86 processor is used. I would counter that, in scale operations, it is certainly relevant which processor is used. The technical specs are key indicators of underlying capability, and they are certainly needed when planning forward-looking pricing and disruptive market maneuvers to capture market share.
Why is this important for the CIO?
When examining the infrastructure you plan to use to support your organisation's Cloud aspirations, take into account that there are multiple platforms (non-x86 as well) and vendors available to meet the needs of the organisation. It is imperative that the CIO not get sucked into the hype of the Cloud. Clear, logical thinking about what your organization needs is required.
This should be factored into the overall Corporate Cloud Strategy. Be ready to take advantage of tick-tock cycles. Use the habits of Cloud providers to underpin your own asset-capability management. Further, understand how to engineer your organization so that it can rapidly absorb new technology; that requires structures and processes that make adoption painless.
A holistic approach enables the organization to support its Unix, Linux, and Windows workloads using the appropriate technology mix. Cloud is often mistaken for mass standardization onto one platform, usually x86. This should not be the case - differentiation based on standardized processes and exceptional infrastructure capability allows a price/capability competitive advantage to be achieved.
Remember the reasoning behind server virtualization in the first place - consolidation and efficiency. That reasoning should also underpin your own Corporate Cloud Strategy (CCS).