
November 2011

Why Cores STILL Matter - Continuous Workload Consolidation

In an earlier post on targeted CPU power, I talked about the emerging power of the x86 platform and its ability to handle even the most demanding workloads in a fully virtualized environment.

Recently, AMD announced that its long-awaited 16-core chip (real cores - no hyper-threading), codenamed "Interlagos", is finally available as the Opteron 6200. This is significant for the industry, not just in the speeds-and-feeds department, but also in really stepping up to the plate on enterprise virtualization and Cloud needs in particular.

The ability to continuously add computing power within ever-tighter thermal envelopes using the same processor socket, and to wrap that in enough intelligence to deal with service provider issues such as energy management, is critical. This fits squarely with the idea of upgrading the underlying hardware in place to double capability.

The fact that the major virtualization players already support the AMD Opteron 6200 out of the box is great. It means the chip can be factored into the forward planning exercise when laying out the computing base for your cloud.

For new entrants - those that held out on full-blown cloud infrastructures - this very capable compute platform provides a means of entering the market rapidly and with power. Increasing core counts act as a market mechanism that lowers barriers to entry - hence potentially more competitors and more choice!

 

What does this all Mean for Virtualization and the Cloud?

A while back, in the blog "Virtualization as a 'Continuous Workload Consolidation' Paradigm", I indicated that virtualization should be considered in terms of the ability to continuously consolidate. This effectively means that starting virtualization efforts, even at Cloud scale, is great, but the journey does not stop there. The true value of virtualization is the ability to exploit major technology changes as they disruptively enter the market.

Virtualization provides the mechanism for decoupling the workload from the physical platform, allowing you as a potential service provider to slot in new infrastructure platforms/pieces. AMD hit the nail on the head here by keeping the same socket as in previous generations, which provides a significant cost advantage when upgrading capacity.

Upgrading a Cloud server farm in this way makes it possible to virtualize more intensive workloads that were not previously candidates. Further, with more workloads per physical server, fewer hosts are required. This does not necessarily mean the old capacity is simply ditched; it could instead be sold off at a lower price point until its cost is recouped, at which point those servers could be replaced with the next generation of compute!
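
To make that concrete, here is a back-of-the-envelope sketch - all figures are illustrative assumptions, not vendor benchmarks - of how a socket upgrade from 12-core to 16-core parts shrinks the host count for a fixed VM estate:

```python
import math

# Back-of-the-envelope consolidation estimate.
# All inputs are illustrative assumptions, not vendor benchmarks.
workloads = 1200      # VMs to be hosted across the farm
vms_per_core = 1.5    # assumed average consolidation density

def hosts_needed(cores_per_host: int) -> int:
    """Whole servers required at a given core count per host."""
    return math.ceil(workloads / (cores_per_host * vms_per_core))

old = hosts_needed(4 * 12)  # four 12-core sockets per host (previous generation)
new = hosts_needed(4 * 16)  # the same sockets refitted with 16-core Opteron 6200s
print(f"hosts before: {old}, after: {new}")  # 17 -> 13 in this toy case
```

In this toy case, the freed hosts are exactly the capacity that could be sold off at a lower price point.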

This provides a perfect mechanism for implementing your own tick-tock upgrade cycle, to borrow Intel parlance. One part of the server park does not require rip-and-replace - simply upgrade the processor (tick). The other part can then benefit from the newer technologies that do require rip-and-replace (tock).

This has a massive underlying impact on financial management and hardware upgrade cycles. In most organisations, server replacement cycles are measured in years. In the traditional model of IT - typically without chargeback, and with significant CAPEX or long lease cycles - it is simply too expensive to replace servers more frequently to capture the benefits of newer technology. The result is skipping a generation and then implementing an effectively larger change that needs greater effort than a step-wise approach.

Processors with increasing core counts will crop up in all kinds of products - for example, storage servers needing more compute power to provide deduplication and encryption goodness in the same physical footprint. Enterprise array features delivered in software, in compute-intensive forms, are another disruptive pattern emerging in the marketplace; for example, Pillar Axiom (now Oracle) used AMD to simply double its performance with a chip upgrade. That is why cores still matter.

For cloud service providers, however, this tick-tock cycle allows much shorter replacement cycles - say, 6 months. Cloud providers need to be really good at calculating the business case - the energy saved, the effort needed for the change, and the downstream pricing models - to stay competitive and relevant in a market that is rapidly reaching commodity-based pricing.
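
A minimal sketch of such a business case calculation, with every figure a placeholder assumption rather than real pricing:

```python
# Toy payback model for a processor refresh in a cloud server farm.
# Every figure below is a placeholder assumption, not real pricing.
hosts_retired = 10        # servers freed through higher consolidation
watts_per_host = 750      # average draw per retired host, incl. cooling
kwh_price = 0.12          # assumed flat energy tariff per kWh
extra_vms = 200           # additional sellable capacity after the upgrade
vm_price_month = 25.0     # assumed monthly price per VM
upgrade_cost = 60_000.0   # CPUs plus the effort needed for the change

hours_per_month = 24 * 30
energy_saving = hosts_retired * watts_per_host / 1000 * hours_per_month * kwh_price
new_revenue = extra_vms * vm_price_month

payback_months = upgrade_cost / (energy_saving + new_revenue)
print(f"monthly gain: {energy_saving + new_revenue:.0f}, payback: {payback_months:.1f} months")
```

If the payback lands well inside the replacement window, the refresh supports the pricing model; if not, the cycle is too aggressive.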

Listening to clients and peers, there is a belief that it does not really matter which x86 processor is used. I would counter that in scale operations it is certainly relevant. The technical specs are key indicators of underlying capability, and they are certainly needed when planning forward-looking pricing and disruptive market maneuvers to capture market share.

 

Why is this important for the CIO?

The CIO needs to keep a long-term view in mind. This includes the ability to influence the bargaining power of suppliers/vendors to the organization. A competitive product that provides an alternative to the Intel platforms is critical for both choice and price.

When examining the infrastructure you are planning to support the organisation's Cloud aspirations, take into account that there are multiple platforms (non-x86 too) and vendors that can support its needs. It is imperative that the CIO not get sucked into Cloud hype. Clear, logical thinking about what your organization needs is required.

Those needs are best expressed by speaking with the business. Techies tend to focus on speeds and feeds; however, the functionality the business requires to win and retain market share, to lead with customers, and to be an industry front-runner is where the CIO will get meaningful input.

This should be factored into the overall Corporate Cloud Strategy. Be ready to take advantage of tick-tock cycles. Use the habits of Cloud providers to underpin your own asset-capability management. Further, understand how to engineer your organization so that it can rapidly absorb new technology; this requires structure and processes to make the change painless.

A holistic approach enables the organization to support its Unix, Linux, and Windows workloads using the appropriate technology mix. Cloud is often mistaken for a mass standardization onto one platform, usually x86. This should not be the case - differentiation based on standardized processes and exceptional infrastructure capability allows a price/capability competitive advantage to be achieved.

Remember the reasoning behind server virtualization in the first place - consolidation and efficiency. That reasoning should also underpin your own Corporate Cloud Strategy (CCS).

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC.

A Resurgent SPARC platform for Enterprise Cloud Workloads

Fujitsu has just announced that it has taken the crown in supercomputer performance, breaking past the 10-petaflop barrier. That is over 10 quadrillion floating-point operations a second. Seriously fast.

Just when we thought that Intel/AMD and x86 would take over the world (wink), this beauty came along. For those interested in the speeds and feeds of the Kei supercomputer: 22,032 four-socket blade servers in 864 server racks, with a total of 705,024 cores!
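
Those figures are easy to sanity-check, since each blade carries four 8-core SPARC64 VIIIfx processors:

```python
# Sanity check on the published figures for the Kei supercomputer
blades = 22_032
sockets_per_blade = 4
cores_per_socket = 8                 # SPARC64 VIIIfx

sockets = blades * sockets_per_blade
cores = sockets * cores_per_socket
print(f"{sockets:,} processors, {cores:,} cores")  # 88,128 processors, 705,024 cores

# 10 petaflops written out: ten quadrillion floating-point ops per second
print(f"{10 * 10**15:,} flops")
```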

This is a supercomputer running specific workload profiles. However, at the scale of the infrastructure involved, this single construct is effectively the size of multiple large-scale Internet Cloud providers.

Traditional Cloud providers may well find themselves with a new competitor: the HPC supercomputer crowd. Supercomputers are expensive to run, but they have all the connectivity and datacenter facilities one needs.

Clearly this is a departure from the x86 hypervisor stacks currently ruling the virtualization roost - VMware, Citrix with Xen, Red Hat with KVM, Oracle VM with Xen (plus their Virtual Iron acquisition - one of the largest Xen-based Cloud providers). Now we also have Solaris back in the game with its own take on virtualization - Solaris Containers. All of this is probably more focused on enterprise workloads - think BIG DATA, think ERP/Supply Chain/CRM!

 

What does this all Mean for Virtualization and the Cloud?

Currently, most thinking on Clouds centers around the marketing of the largest players in the market. Think Amazon and Google for public clouds, and then the extensive range of private cloud providers built on underlying x86 hypervisor technologies.

Many of the reasons for this scale-out strategy with virtualization were centered on getting higher utilization from hardware, as well as gaining additional agility and resiliency features.

High-end mainframes and high-end Unix systems have had resiliency baked in for ages. However, this came at a price!

The Solaris/SPARC union, particularly within large supercomputer shops, provides an additional player in the market for enterprise workloads that still need scale-up and scale-out in parallel. This is clearly not for running a Windows SQL Server box or a Linux-based web server.

However, massive web environments can easily be hosted on such a construct. Large, intensive ERP systems could benefit, providing near-real-time information and event-response capabilities. One could easily imagine a supercomputer shop providing the raw horsepower.

As an example, the recent floods in Thailand are causing a huge headache for disk drive shipments worldwide. Linking an ERP system with big data analytics on the risk to supply chains - based on weather forecast information as well as actual current events - might have allowed a realignment of deliveries from other factories. Simulating weather and its effect on the risk patterns affecting supply can certainly be performed in such a supercomputer environment.
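
As a toy illustration of the idea - factory names, capacities, and risk weights below are all invented for the example - a simple realignment rule might look like this:

```python
# Toy supply-chain realignment driven by a flood-risk score per factory.
# Factories, capacities, and risk values are invented for illustration;
# in reality the risk score would come from weather-simulation output.
factories = {
    "Bangkok": {"capacity": 40, "flood_risk": 0.9},
    "Penang":  {"capacity": 35, "flood_risk": 0.1},
    "Suzhou":  {"capacity": 25, "flood_risk": 0.2},
}
RISK_THRESHOLD = 0.5

def realign(plants):
    """Shift planned deliveries away from high-risk plants, pro rata."""
    at_risk = {n for n, p in plants.items() if p["flood_risk"] > RISK_THRESHOLD}
    displaced = sum(plants[n]["capacity"] for n in at_risk)
    safe_total = sum(p["capacity"] for n, p in plants.items() if n not in at_risk)
    return {
        n: 0 if n in at_risk
        else p["capacity"] + displaced * p["capacity"] / safe_total
        for n, p in plants.items()
    }

print(realign(factories))  # Bangkok drops to 0; Penang and Suzhou absorb its 40
```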

 

Why is this important for the CIO?

When thinking about the overall Corporate Cloud Strategy, bear in mind that one size does not fit all. x86 virtualization is not the only game in town. A holistic approach based on the workloads the organization currently has, their business criticality and their ability to shape/move/transform revenue is the key to the strategy.

An eclectic mix of technologies will still provide a level of efficiency to the organization that a simple infrastructure-as-a-service strategy cannot hope to reach.

Simply sitting in a Cloud may not be enough for the business. Usable Cloud capacity when needed is the key - that provides real agility. Being able to outsource tasks of this magnitude and then bring the precious results in-house is the real competitive advantage.

Personally, I am not sure that enterprises are quite ready to source all their ICT needs from a public Cloud provider just yet. Data security issues, challenges of jurisdiction, and data privacy concerns will see to that.

That being the case, it will still be necessary for CIOs/CTOs to create the IT fabric needed for business IT agility and to maintain the 'stickiness' of IT-driven competitive advantage.

Keep a clear mind on the ultimate goals of a Cloud Strategy. Cost efficiency is important, but driving revenue and product innovation are even more critical. A multi-pronged Cloud strategy with a "fit-for-purpose" approach to infrastructure elements will pay in the long run.

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC.

Cloud Security Maneuvers - Governments taking Proactive Role

In a previous blog entitled VMworld 2011 - Practice Makes Perfect (Security), I discussed the notion of preparing actively for attack in cyberspace through readiness measures and mock maneuvers.

This is now happening at the level of nations. With Cyber Atlantic 2011, ENISA shows how large groups/blocs of nations are working not only on increasing their capabilities, but on practicing in concert to see how global threats can be prevented or isolated in cyberspace.

This is at least as intensive as a NATO exercise: languages, cultures, varying capabilities, synchronization of Command & Control, as well as reporting and management at national levels, all need to be coordinated.

APTs (Advanced Persistent Threats) are the target of this exercise. This is a current and relevant threat class for which credible measures are urgently needed. APTs can be used by organized crime or state-sponsored attackers to circumvent even the most secure installations - typically nuclear/military. It is critical that measures and controls are in place at a national level.

Hopefully they will also cover the very sensitive areas of reporting to the press, informing organizations that are being targeted or potentially targeted, and practical measures that everyday folk like you and me can implement quickly and easily. Remember: security starts with people!

 

What does this all Mean for Virtualization and the Cloud?

Clouds span organizations, nations, borders, and cultures. We need to think in equal, if not greater, terms about security. Security in one area does not guarantee the security of the entire cloud or the communities it serves.

There is, of course, a fine line between personal privacy rules - in place for very good reasons of personal liberty and democratic thinking - and the protection of assets in the Cloud from malicious attacks or plain theft of intellectual property.

Governments should not be excluded either. It is equally important that an individual's privacy rights are maintained without the threat of Big Brother from other states or indeed your own government. This is an area where every individual needs to stay vigilant. Controls within government also need to be available to the individual should there be patent infringement - surveillance without a court order authorizing it. Even such an order needs to be double-checked!

This does, of course, also strengthen the case for private clouds, or at least closed community clouds. These provide another buffer perimeter against attack and ensure the ability to fence networks off from unwanted outside intruders.

This involves security by design. Measures to isolate Cloud elements as needed, and proactive, event-triggered security responses, will entail ever smarter tools! The ability to process massive data and web logs in near real-time will power the heart of Automated Cloud Security Response & Tracking.
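
As a hedged sketch of what such event-triggered tooling might look like - the log format, threshold, and isolate() stub below are all invented for illustration:

```python
# Sketch of an event-triggered security response: watch a web log
# stream and fence off a source once failed logins pass a threshold.
# Log format, threshold, and the isolate() stub are all illustrative.
import collections
import re
import sys

THRESHOLD = 100   # failed logins from one source before acting
FAILED_LOGIN = re.compile(r'^(\S+) .* "POST /login[^"]*" 401 ')

def isolate(source_ip: str) -> None:
    """Stub: a real system would call a firewall/SDN API here."""
    print(f"ALERT: isolating {source_ip}", file=sys.stderr)

counts = collections.Counter()
for line in sys.stdin:            # e.g. tail -f access.log | python watch.py
    match = FAILED_LOGIN.match(line)
    if match:
        ip = match.group(1)
        counts[ip] += 1
        if counts[ip] == THRESHOLD:
            isolate(ip)
```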

 

Why is this important for the CIO?

Competitive advantage may not be the only reason for charting a hybrid course for your clouds. Fit-for-function micro-cloud capabilities (e.g. focused only on providing Database-aaS or Middleware-aaS) will ensure best-in-class features, and will ensure there is an island of Cloud capability with the required security measures within the overall Corporate Cloud Strategy.

General-purpose cloud constructs running standard workloads on x86 platforms will also have their own level of security. This may well involve a different defense strategy than that protecting key structured and unstructured data repositories.

The fact that nation states are working collaboratively on cybersecurity provides an ideal opportunity for CIOs to link into that capability. National cyberdefense will have access to the latest, greatest, wildest threats by linking into vendor response systems (RSA, Symantec, Trend Micro, Qualys, etc.), which gather data from the users of their respective solutions.

Further, the ability to liaise directly with the heads of global organizations to provide briefing information, along with joint public response measures with the media, will enable a "soft landing" effect on global equity markets otherwise prone to fear of a widespread cyber attack. I do feel that government should also provide a level of funding for corporate cyber security to ease the burden. Time will tell on this one!

One-size-fits-all clouds can be dangerous in a world where one needs to design for systems failing or being exposed to insidious attack. Although silos in IT are not the preferred approach, the idea of clearly fenced-off Cloud areas - focused on the type of data they operate on and its business impact analysis ratings - should be seriously on the CIO's agenda.

Cost savings may well need to be re-channelled to address security concerns. Work with the CSO/CISO to secure funding for protecting business assets. Work with government to gain access to greater resources and possibly funding.

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC.

Size Matters - Micro Clouds and Engineered Systems

In a number of past blogs I have charted the growing capabilities of infrastructure components that most people take for granted. The reason for doing this is to keep highlighting that old design rules may well need to step aside to make room for the new.

Although Intel and AMD continue to release roadmaps for processors with virtualization baked into silicon, the entire market is moving towards scale-out models to populate its Cloud infrastructures. Customers are voting with their wallets, and proprietary systems are gradually being pushed out.

Looking at the Dell site the other day, I saw the new Dell PowerEdge R815 equipped with AMD processors. It sports 48 cores within a 2U footprint. This is really incredible - and 8 more cores than Intel currently offers (wink). Dell goes further, pointing to whitepapers that compare it against multiple 2U units from HP (the DL380 G7) and claiming more capability at lower operating costs.

These types of messages are currently sweeping the industry. They would indicate that scale-up as a strategy is on the rise again - after all, it is easier to manage a single physical server than two, right?

If scale-up is on the rise again, then other non-x86-64 platforms are also coming back to the foreground. With IBM POWER7 (8 cores per processor and 4 threads per core) and Oracle SPARC T4 (8 cores per processor and 8 threads per core), these are serious work engines that can take on very demanding tasks.
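
The hardware-thread arithmetic shows what these engines bring per socket, compared with the 16-core Opteron discussed earlier:

```python
# Hardware threads per socket, from the per-chip figures cited above
platforms = {
    "AMD Opteron 6200": (16, 1),  # 16 real cores, no SMT
    "IBM POWER7":       (8, 4),   # 8 cores x 4 threads (SMT4)
    "Oracle SPARC T4":  (8, 8),   # 8 cores x 8 threads
}

for name, (cores, threads) in platforms.items():
    print(f"{name}: {cores * threads} hardware threads per socket")
# Opteron 6200: 16, POWER7: 32, SPARC T4: 64
```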

Clearly, size does matter. Certain workloads - large-scale databases, multi-threaded web applications - can take full advantage of these systems. Even hypervisors run on these platforms, just not the one most people think of day to day. This is great news: there is again choice, and the scale-out strategy has brought prices down for scale-up systems!

 

What does this all Mean for Virtualization and the Cloud?

Current thinking on clouds is based around x86-64 virtualization and a drive by the industry to standardize all workloads onto these platforms from Intel, AMD, and increasingly also ARM. This is partly due to capability, and to a whole load of high-end reliability features migrating downstream to commodity processors. It also creates a worldwide dependency on these chips: any supply chain disruption can drastically affect world supply levels.

That being said, scale-up systems also provide tremendous inherent capabilities. VMware and Co. highlight this with strategies for over-provisioning resources within a single physical server. They are limited, however, by what currently fits into a single physical x86-64 server, as well as by the hypervisor's maximum specifications.
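
A toy over-provisioning calculation makes that ceiling visible (the ratio and sizes are illustrative assumptions, not actual hypervisor maxima):

```python
# Toy CPU over-provisioning estimate for one x86-64 host.
# The ratio and sizes are illustrative, not real hypervisor limits.
physical_cores = 48       # e.g. a 4-socket, 12-core server
overcommit_ratio = 4.0    # vCPUs scheduled per physical core
vcpus_per_vm = 2

max_vms = int(physical_cores * overcommit_ratio / vcpus_per_vm)
print(f"~{max_vms} two-vCPU VMs per host at {overcommit_ratio}:1 overcommit")
# However high the ratio, the ceiling is set by the single physical box.
```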

Scale-up suggests a means of creating islands of capability within the overall computing landscape that can be served with cloud technologies. VMware introduced Database-aaS concepts for scale-out, but a large-scale mainframe running DB2, or an Oracle Exadata system, could equally well run those workloads and provide DBaaS features through Cloud enablement.

These micro-clouds form finely tuned engineered systems for specific workloads, typically database or data-intensive loads. This contrasts with general-purpose cloud platforms running Intel/AMD x86-64 processors and mainstream hypervisors from the likes of VMware, Citrix, and Microsoft (Hyper-V).

The engineering in these systems tends to go further into the workload software stack than a merely integrated system, up to and including specialized versions of the database software that squeeze the most out of the hardware.

This can be compared to the difference between a Ferrari and a VW Golf. Both are cars, but wow, what a difference in capability. Where the metaphor falls down is that in the IT world the OPEX can still be similar (on a per-capability basis of comparison) - definitely not the case for cars and their maintenance costs!

 

Why is this important for the CIO?

Following the herd in all things can only yield limited competitive advantage. There is differentiation in "how" things are run. Long term, once everyone has consolidated, levels of performance for specific tasks will level out across the field. Barriers to entry are lowered, as is the buying power of the customer.

Use of micro-cloud capabilities will deliver differentiation. They have tremendous processing power. Further, they allow a level of consolidation within a particular platform stack that can't really be achieved through simple server consolidation alone.

Mind you, constructs like the VCE Vblock will take you a long way toward efficiency, given the level of converged infrastructure in those units.

Thinking out of the box will definitely pay off in the long term. There are alternatives to explore that encapsulate the IT-block concepts but are focused and tuned for a more limited, yet business-critical, workload consolidation than mainstream hypervisors. Oracle's Exa systems represent one of these approaches.

A great blog describing this vision can be found on the Oracle blog site, giving their view of these engineered constructs - some great insight into another view of the cloud as it pertains to the database world.

There will no doubt be other platforms and engineered systems focused on specific workloads that enhance your overall business capability.

Sustained competitive advantage through differentiation requires the ability to think differently from the crowd. Old values and norms that lead to greater agility and flexibility in business can still be applied to IT. No one size fits all in the Cloud world. Dare to innovate and create your own product mix to support the business.

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC.