
May 2010

The Journey to the Cloud – The Physical Construct (IT’s the Small Things in Life that Really Matter)

Imported from http://consultingblogs.emc.com/ published May 30 2010

 

Earlier this week, on my way to visit a client on the matter of VMware View and the virtues of the virtual desktop as a managed service, I was afforded the opportunity to board a Lufthansa flight. This was not the first time by any means, but I thought I would pay particular attention this time. After flying on various airlines all around the world, I wanted to know what made the travelling experience unique on the plane itself. If all the planes are of the same type (Airbus or Boeing) with a similar layout, then what is the secret sauce?

Observing the actions of the in-flight crew, I noticed a couple of things:

 

  • Clear branding, with uniforms and matching colors throughout the plane, rather than unclear, messy attire
  • Always looking sovereign, no matter the level of turbulence, rather than haggard
  • Performing actions with care, such as quietly closing the overhead lockers rather than slamming them
  • When pouring drinks for the guests, taking care to hold the glass at the bottom at all times to keep the drinking edge clean
  • The ability to cope with all manner of demands, from small kids being ill to adults wanting another drink

This got me thinking (and yes, my mother warned me not to do that too often). When we are talking about themes such as virtual desktops, it is the finer points of the solution that actually matter: in this context, the ability to run tens of thousands of virtual desktops concurrently and deliver them, through a variety of devices, anywhere a suitable network is present. The client naturally asked many questions about the handling of such a wide variety of desktops, and how our ‘in-flight crew’ could possibly deal with this when they, the client, were struggling with their new ‘airplanes and crews’.

Firstly, the airplane on which we were basing this discussion is called a vBlock (The New Kid on the Block, vBlock Infrastructure Solutions, Virtual Compute Environment – an insider’s take), the integrated offering from Cisco, EMC and VMware. Continuing with the airplane metaphor: like any other ‘plane’, it has the standard components – server blades, networking, storage and virtualization software. However, through the integration efforts of these world-class companies comes something that allows new levels of service to be offered – and indeed one of the first physical constructs of the Cloud.

The vBlock has many ‘small things’ integrated, allowing it to service thousands of virtual desktops and virtual servers with the branding that makes it a sovereign ‘in-flight crew’ member and indeed airplane fleet member. For example:

 

  • The server blade (Cisco UCS B250 M2 Extended Memory Blade Server) supports the latest 8-core Nehalem processors, providing massive amounts of thrust for the virtual machines sitting inside the vBlock plane – so effortlessly that one scarcely notices take-off and landing
  • Each blade supports up to 384GB of DDR3 RAM. That is like the Airbus A380 – massive capacity to ship thousands of virtual machines while being energy efficient at the same time (Green IT); see the back-of-the-envelope sketch after this list. I quote:
    • ‘The big news for operators is that the A380 is earning hard dollars at the same time. Introducing this next-generation jetliner is saving customers millions in operating costs annually while creating thousands of extra seats on long-haul routes. With the lowest cost per seat and the lowest emissions per passenger of any large aircraft, the A380 provides a competitive edge.’
    • That could read: ‘The big news for datacenter operators is that the vBlock is earning hard dollars at the same time. Introducing this next-generation private cloud datacenter-solution-in-a-box is saving customers millions in operating costs annually while creating capacity for thousands of extra virtual machines. With the lowest cost per virtual machine and the lowest emissions per virtual infrastructure of any large private cloud offering, the vBlock provides a competitive edge.’
  • 40 Gbps of I/O per blade allows the virtual machine passengers to play football in the aisles – or indeed to run all datacenter workloads with ease
  • vBlocks come in three models of differing capacity, similar to the Airbus A320, A340 and A380 – vBlock Models 0 / 1 / 2, with all the optional extras fitted for maximum comfort, performance and the range to operate without interruption for years!
  • Best of all – if there is an issue you don’t need to call ‘Ghostbusters’. There is a single call number for support on any element within the integrated solution. No passing of the buck, no vendor blaming – just some good ol’ fashioned engineering and support excellence. Just like with Airbus (and Boeing too!)
  • Oh, did I mention all the other stuff, like consolidated cabling, FCoE to support Fibre Channel and Ethernet over the same wires, and full support for EFD (Enterprise Flash Disks) with some serious performance (here’s an article I read some time back, but I liked it – don’t miss the amazing vendor flash dance). There is so much more…
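Since we are talking capacity, here is a minimal back-of-the-envelope sketch (in Python) of how many virtual desktops a single such blade might carry. The per-VM memory footprint and the VMs-per-core ratio are purely illustrative assumptions of mine, not vBlock specifications – plug in your own workload profile:

    # Back-of-the-envelope VM capacity for one blade. The per-VM figures
    # are illustrative assumptions, NOT vBlock specifications.
    BLADE_RAM_GB = 384       # Cisco UCS B250 M2 extended-memory blade
    BLADE_CORES = 2 * 8      # two sockets assumed, 8 cores each as quoted above

    VM_RAM_GB = 2            # assumed average virtual desktop footprint
    VMS_PER_CORE = 8         # assumed desktop consolidation ratio

    ram_bound = BLADE_RAM_GB // VM_RAM_GB   # capacity if RAM is the limit
    cpu_bound = BLADE_CORES * VMS_PER_CORE  # capacity if CPU is the limit

    print(f"RAM-bound: {ram_bound} VMs, CPU-bound: {cpu_bound} VMs per blade")
    print(f"Plan for the lower of the two: {min(ram_bound, cpu_bound)} VMs per blade")

Whichever resource runs out first sets the real limit; with these assumptions it is the CPU, at 128 desktops per blade.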

Anyway, it is difficult for established IT shops to see a new integration offering for what it is. There tends to be an ‘oh, there are some servers, network, storage and software… yawn’ reaction, mainly from IT administrators.

I can understand that as they are looking at hardware all day. But seriously, try the vBlock – you will be at least as surprised as I was. We have other customers that were similarly surprised, sometimes by a factor of 5+ in terms of performance and ease of handling.

EMC Consulting is helping customers openly evaluate this remarkable platform to get that 'Lufthansa quality back into flying' (or IT operations, in this case). This is part of our commitment to accompany the Journey to the Cloud – this time as a physical construct, not simply an idea. Through our engagement we bring this platform to life by aligning it with processes and organizational strategy.

Keep in mind that most of the consulting folks I work with come from backgrounds in application development, datacenter management, process engineering, IT administration and architecture. We are keenly aware of these roles and indeed try to reach out every day to our former colleagues to create awareness of the state of the art (no – it frequently comes from companies other than EMC!)

IT should be seen as an enhancer for the business, a value multiplier, and therefore it is appropriate that new technologies be evaluated with an open mind. You never know – you may have the chance to be the IT hero and save your company literally millions (of any currency)!

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC.


Planetary Cloud: The Rise of the Billion-Node Machine

Imported from http://consultingblogs.emc.com/ published May 22 2010

 

In a previous blog (Virtualization as ‘Continuous Workload Consolidation’ Paradigm), I described a paradigm change, whilst on the journey to the private cloud, in how one examines the very substance of virtualization: not simply a one-off activity, but a fundamental restructuring of IT operations toward a continuous improvement activity, facilitated by the ever-increasing capabilities of component infrastructure elements, e.g. the processor, the network and so on.

The processor is of course one of those big facilitators. I was speaking to some of my ‘virtualization and Cloud‘ colleagues the other day in Berlin (nice place to visit, folks) regarding just how big an influence processors are. Some of these guys are born philosophers, and well, we wanted to explore the outer boundaries of where the current multi-core processors would, or could, take us.

We know already about some of the gigantic leaps in the abilities of processors (see blog link above) from Intel / AMD and many others in the market – but where can this go? EMC Consulting, as trusted advisors for organizations large and small, maintains a technology watch on this, and we discuss the cumulative effects of technology on the core business of these firms in the form of envisioning workshops. Indeed, we do this as a continuous activity for EMC itself.

Whilst in discussion over some good cold German beers (well, we do need to keep the thoughts lubricated as well), I mentioned something I had read some time ago regarding RAM Clouds (The Case for RAMClouds: Scalable High-Performance Storage Entirely in DRAM). This got us going on the subject of scale, and as one colleague said, quoting Richard Branson of Virgin fame, ‘maybe size matters after all’.

We are reaching a crossroads in the industry as more processing cores are made available in smaller formats. Indeed, Intel already dubs its 48-core wonder the ‘single-chip cloud computer’. Let’s take this further. As core counts increase, we start to see rack loads of traditional computers represented on a single chip. In the traditional uni-core (not unicorn!) world, the 48-core chip is effectively the equivalent of 24 dual-processor servers (two server racks’ worth) or 12 four-processor servers (also more than a single rack’s worth of servers).
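A minimal sketch of that arithmetic, assuming typical 2U dual-socket and 4U quad-socket chassis heights (my assumption – the uni-core servers being replaced could of course be any size):

    # How many traditional uni-core servers does one 48-core chip replace,
    # and how much rack space would they occupy? Chassis heights are assumed.
    CHIP_CORES = 48          # Intel's 'single-chip cloud computer'
    RACK_U = 42              # usable units in a standard rack

    def equivalent(sockets_per_server, server_height_u):
        # one core per socket in the uni-core era
        servers = CHIP_CORES // sockets_per_server
        racks = servers * server_height_u / RACK_U
        return servers, racks

    for sockets, height in [(2, 2), (4, 4)]:
        servers, racks = equivalent(sockets, height)
        print(f"{sockets}-socket {height}U servers: {servers} servers ~ {racks:.1f} racks")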

The next step up may well be the ‘datacenter on a chip’ – the entire CPU processing power of a datacenter represented on a single chip. The close proximity of such cores presents several orders of magnitude of advantage over the equivalent physical server constructs of the past, and indeed of today. Having a 1000-core machine available is definitely a competitive advantage in many industries!

Put a couple of these machines online, with all the automation and management abilities needed (see All Roads Lead to the Cloud - Cloud Automation) and a dash of virtualization sauce, and we have a veritable cloud. Indeed, this private cloud could be described – given its density and several-thousand-core processing ability within, say, a single square meter – as the first Planetary-IT-on-a-Chip.

Folks, this is a tremendous amount of power. It would be like putting the entire complex of Google datacenters around the world into literally a single server rack. That begs the question of whether such datacenters should be built as they are – but I think that is a blog for another time.

Imagine all those datacenters represented within this construct. The RAM Cloud takes shape around this, with several petabytes’ worth of RAM available instantly. Whole chunks of the Internet, perhaps all of it, stored in memory! This is instant-on IT!

Planetary IT on this scale would dwarf the constructs we have today. It would make the so-called mega datacenters of today look like a ‘very large desktop PC’ in comparison. EMC Consulting, as information consultants, needs to think about the long-range effects of such massive amounts of power and the information ocean held there. I mean, cooling such a construct is no longer a discussion of putting in some chilling units; we are talking about immersing all of this in liquid just to stop it from frying!

The effects on businesses can be described as disruptive, at the least. Imagine all those DBAs perpetually raising concerns about never having servers large enough. What are they going to talk about in the future, when we have 1000 cores available and petabytes of RAM, all provisioned in seconds? (Before you ask – yes, I have been a DBA, and I said this too.)

The infrastructure and information management technologies of today need to keep these challenges in mind. Not simply multi-tenancy ability, but also the guardianship and stewardship roles associated with, dare I say, the information cultural heritage of mankind.

EMC (The EMC Heritage Trust Project), IBM (Vatican Library Automates for the 21st Century), Gates Foundation (Bill & Melinda Gates Foundation) and many other organizations are investing in that future. Planetary IT may well be the tool they needed to perform this on a scale previously undreamt of.

Indeed, this may be the means of IT boldly reaching parts of the world where no IT has gone before (thanks, Gene Roddenberry – Where no man has gone before – some truly astonishing articles on Wikipedia).

Planetary IT, not just Public Clouds, may well be the next big thing. The ability to get the IT resources of the planet working on problems such as genome sequencing, climate challenges, simulation for nuclear fusion and spaceship design, amongst others, with a billion-node system whose every node has 1000 cores, is like getting a brain the size of the entire Earth working for a fraction of a second to find potential solutions to our most challenging concerns of today. This affects everyone. This is the Rise of the Billion-Node Machine – the billion-node IT nexus serving the planet.

Anyway, coming back down to reality: administering, metering usage, maintaining and operating such a Planetary Cloud are similarly challenging. The tools, services and products that EMC and others are introducing into the marketplace take these huge scales as a backdrop for development (see The Diverse and Exploding Digital Universe).

Efforts such as the SETI@home screensaver, which harnesses the spare processing power of millions of volunteers dispersed around the globe to look for sentient life in the universe, and Folding@home, which does something similar for protein folding to tackle the challenges of disease, are paving the way for what it means to make millions of machines work on a problem.

Pioneers and frontrunners are necessary in society. Indeed, the push towards Cloud thinking is a means of pushing the frontiers of IT as we know it today.

On this blog and others, of course, the ripple effects of this infrastructure and paradigm play are being felt. The challenges of programming and presenting ever-increasing volumes of information are discussed here daily. How to get teams organized, which methodology to use, discussions of agility and outsourcing, working with colleagues in far-flung parts of the world concurrently (as in open source Linux kernel development) – these are all attempts, through dialogue, to get the maximum out of these changes.

This know-how filters into society via the consulting efforts engaged with organizations around the world. It helps to directly address the varied challenges organizations bring to our attention every day. Before you ask: we don’t always have the answer, but the greatest ideas EMC has are stimulated through robust debates with these very same organizations.

The robust ‘Star Wars / Star Trek’-like discussions are important to stimulate the thoughts of tomorrow. Provoking these thoughts and stimulating our greatest minds – getting us all thinking about a world where processing limitations are a thing of the past, and where that can lead us – will drive our young minds in Academia, Business and Society at large to develop the planetary IT frameworks needed to harness this power.

The Journey to the Cloud, or in this case to planetary-scale IT, is important as a concept whilst the practical realities and challenges unfold. It may well be the step that stimulates the IT Renaissance and leads us to the stars!

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC.


Virtualization as ‘Continuous Workload Consolidation’ Paradigm

Imported from http://consultingblogs.emc.com/ published May 16 2010

 

At EMC Consulting we are leading the charge to the Private Cloud for our clients. This starts with full-scale server and datacenter virtualization. Most clients are initially interested in the consolidation abilities of this technology to reduce their physical server footprint. Projects are drawn up and concrete plans made. Troops are assembled, and then the ‘storming of the physical datacenter’ begins (‘Once more unto the breach, dear friends, once more!’ – Shakespeare’s Henry V – V for virtualization, of course).

Amidst all this action, it occurred to me that the task of long-range planning for organizations regarding where virtualization is headed perhaps takes second place, if it occurs at all.

At the server level, virtualization is considered the ability to decouple and abstract an operating system from the physical hardware it resides on, encapsulating it in a virtual machine. Yes, you’re quite correct, that is simply one definition, but it is the one most people are familiar with. The keywords are ‘decouple’, ‘abstract’ and ‘virtual’.

However, have we actually pondered what all this means for the enterprise itself and its strategic business thinking? Let’s take a slightly different view of server virtualization: not simply the ability to consolidate workloads (VMware parlance), but the transformation of workloads such that they are being continuously consolidated.

Continuous workload consolidation (CWC – for this blog only) is increasingly driving the multi-core joy we're experiencing in the processor market. It is one of the largest drivers for consolidation at the level of the CPU – a paradigm shift that has occurred over the last 2-3 years.

I believe the current score in the ‘Soccer Cores Super League – Currently Shipping Division’ is 12:8 to AMD, with their ‘Magny-Cours’. Mind you, Intel’s ‘Nehalem-EX’ with 8 cores is not really that far behind – if at all.

EMC Consulting in Germany maintains a technology watch for our clients to indicate possible technology inflection points for senior executives who basically want to know ‘what EMC is doing about it, and how this affects the client's competitive landscape and their business strategy decisions’.

Look at what is going on in the background when we are talking cores:

  • Intel’s ‘Teraflops Research Chip’, with 80 simple cores working in unison, shown in 2007
  • The ‘single-chip cloud computer’, with its 48 complex cores, shown in 2009
  • There are others in the GPU market doing similar things, for example chip designer ClearSpeed with 96 cores in the HPC market

Clearly, all the other components of systems need to follow in leaps and bounds. We have solid state disks offering hundreds of thousands of I/Os per second; we have 10GigE, with 100GigE looming; and various InfiniBand offerings provide even higher aggregate throughput, as the High Performance Computing crowd has amply proven.

So lots of good stuff coming, and the IT Planners are rubbing their hands with glee! However, the CIO/CTO needs to think beyond simply some ‘tasty gear’.

The idea of CWC implies that rationalization of application/infrastructure architectures, desktop/server sprawl and operational efficiency should already be on the agenda today. This rationalization already provides the means to reduce the physical real estate not simply by a 10:1 or 20:1 ratio, but more like 100:1 or better (100 physical servers consolidated onto a single physical server). Rationalization extends to the organization itself, allowing rapid change to be implemented and value extracted accordingly (business and IT agility are both needed).

Doing some quick maths as an example: a current datacenter hosting 1000 physical machines, say 2U servers in 42U racks, requires around 65 racks (up to 18 servers per rack, although cooling considerations conservatively imply more like 16 servers per rack, plus space for patch panels etc.).

This does not include the racks needed to host physical network switches, SAN switches, and the 8,000 cables (say 8 cables per server: production/management/SAN/iSCSI networks, KVM cables, and the list goes on… oops, I forgot the patch panel cabling to the central switch complex – shall we double the number of cables?).
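Here is that quick maths as a minimal Python sketch; all the inputs are the assumptions stated above, so adjust them to taste:

    import math

    # The quick maths from the paragraphs above, made explicit.
    PHYSICAL_SERVERS = 1000
    SERVERS_PER_RACK = 16     # conservative figure allowing for cooling
    CABLES_PER_SERVER = 8     # production/management/SAN/iSCSI/KVM etc.

    server_racks = math.ceil(PHYSICAL_SERVERS / SERVERS_PER_RACK)  # 63
    server_cables = PHYSICAL_SERVERS * CABLES_PER_SERVER           # 8,000
    patched_cables = server_cables * 2  # doubled for patch-panel runs

    print(f"Racks for servers alone: {server_racks} (~65 with patch panels etc.)")
    print(f"Server cables: {server_cables}; with patching: {patched_cables}")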

That is enormous! (By the way, if you are struggling with that, EMC Consulting can help you.)

The CWC concept, with the technology advances coming, allows this to be potentially reduced to a handful of physical servers (say 5-10). However, such a massive density of virtual machines on these servers requires that the rationalization transformation mentioned earlier be on the Cxx agenda.

That rationalization ensures that the organization is essentially ‘fit’ to use these advances as they become available. It ensures that current investments are focused so that simple server replacement decisions can be made to adopt the new technology and move all the workloads to the new platform in seconds. This in itself provides further consolidation (we are running more workloads on less equipment, right?).

These are big decisions we are talking about folks. Should we build a new datacenter now or not? Should we make do with what we have? What level of service can I offer with the resources in my possession? Do I need to offshore my IT? Do the current organizational structures and processes allow me to get this competitive advantage? Serious stuff.

When we start to factor in desktops and desktop applications, this massively favours centralization of compute resource. However, the corresponding always-on network technology needs to be built out to ensure access to this brave new virtualization world. This indicates that the move to 10GigE, IPv6 and possibly fiber optics throughout should already form a large part of the required infrastructure build-out.

This is going to have many knock-on effects. If, thanks to dynamic workload shifting in the datacenter (VMware DRS/DPM, if we are watching the green IT profile), we currently don’t know where our old physical server workload is actually running, then this virtualization of physical presence through advanced networking may mean that a large part of the workforce operates from home. After all, they’re just a phone call away. Indeed, with video conferencing advances they can be easily contacted, information and workspaces can be shared and exchanged, and we can attach a face to the remote voice.

That raises a lot of questions - food for thought.

Do we need to have large office blocks anymore? Should an organization invest in real estate on current scales? Will this change inner-city landscapes and architectures? What about the distribution of small businesses and restaurants typically based in close proximity to those large commercial centers where thousands of employees operate? What is going to happen to road congestion, indeed to road and transport design? What about all those Tesco stores built in the London financial districts?

Folks, there are many elements of our normal lives that are ultimately affected by the current server virtualization wave. Some holistic thought early on will allow pioneers of industry, creative geniuses, artists, urban designers – and hey, even you and me – to heavily influence the ‘digital workplace’ of the not-too-distant future!

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC.


The Journey to the Private Cloud: The Business Buy-in

Imported from http://consultingblogs.emc.com/ published May 10 2010

 

In two previous blogs (All Roads Lead to the Cloud - Cloud Automation and The Time Paradox of Virtualization in the Cloud), I described an alternate route to the private cloud through automation, and the subtle effect of the time parameter on IT organizational processes. Most of these routes are based around solving the technical challenges that persist in making the infrastructure suitable for Private Cloud gymnastics.

However, in many of the discussions that we at EMC Consulting are having with clients and their senior management, there is another theme actually requiring considerable effort and tenacity to resolve, namely the buy-in of the business.

It's easy to think the benefits of the private cloud are so considerable and overwhelming that it is effectively a 'no-brainer'. Everyone should instantly understand that the 'cloud is good'. You need that cloud!

Well, surprise surprise: the business tends to have a different understanding and perspective on the private cloud theme. The IT organisation clearly understands the value of the cloud as it pertains to 'their IT business'. However, there persists an understanding gap between the financial investment and the core business benefits that can be extracted. Simply saying that provisioning will be quicker, or that the service now exists in the cloud, is not really convincing enough for a financial department looking at the P&L bottom line of the business.

Think about it. On a greenfield site (or in a lab, for that matter) a private cloud approach can be designed and implemented with relatively little resistance. However, in large organisations that already have a considerable investment in applications and services directly linked to business offerings, the idea of a migration to a virtual infrastructure (when everything is already working) is not a pleasant thought. Indeed, it is the reason many transformation programmes stall and ultimately incur long delays.

Essentially, every business element is impacted, and the level of controlled change with the resulting impact/risk analysis would make most business managers a little cagey. From their perspective we are dealing with the unknown, the ‘what-if’ scenario of a physical-to-virtual (P2V) migration that ‘might’ go wrong and impair their ability to conduct business. These are certainly valid considerations.

At EMC Consulting, one of the areas we focus on with clients is the idea that this level of transformation, which we affectionately call the ‘journey to the private cloud’, requires a level of agility in an organization’s communication and process ability that hitherto did not exist in an explicit form. Most organizations understandably have processes and communication lines aligned to deal with business as usual.

In the same way that we build private high-speed networks to facilitate the transfer of time-critical information, an organization needs to build a parallel communications vehicle suited to large-scale transformations. The purpose of this transformation network is to share and project the vision of the transformation in concrete terms to business/service managers, months in advance of actual activity. This effectively warns an organization that a substantial series of changes is coming its way, with the common goal of implementing that shared vision.

The shared vision should already have been endorsed at the highest echelons, ensuring board-level mandate and support for the ‘Private Cloud’ transformation program. It is here that the concrete benefits to the overall business are initially expressed in outline form as they pertain to the entire firm. Further discussion, utilizing the transformation network, will allow more concrete goals to be expressed in the context of specific business units.

All this leads to a more detailed elaboration of the classic ‘business case’ – except that this time it is a genuine business case, and not a simple recitation of cost savings, cost avoidance and IT benefits supposedly good for the business.

The transformation network does not simply facilitate communication; it is also a mechanism for smoothing the journey, and it effectively initiates the upfront work of reducing the time-to-benefits. How so? Well, the business realizes something significant is coming. They are ‘mentally prepared’. Further, the usual chain of command for accepting changes gets a fast-track ‘short-cut’ route. All processes within that business unit become lean enough to let the myriad changes associated with cloud transformation be accelerated. The body of the firm is preparing for some serious exertion, and in the background the 'adrenaline' starts to be generated and pumped into the organization.

IT change processes also benefit from this leanness. It allows the usual corporate ponderousness to be circumvented, and indeed contributes to the healthy transformation of processes to befit a cloud environment – namely, large numbers of service requests and changes together with extremely rapid turnaround.

To reach its high speeds, a high-speed train needs a very structured railway network, with tracks laid along straight, flat ground through urban conurbations whose populace has understood the need for the noisy and potentially disruptive train to pass through their back yards. That is an apt metaphor for the private cloud.

The populace is the business, and the track-laying is IT and the business jointly taking an active part in ensuring positive change can flow through their organization, with business benefits embarking and disembarking at each of the business unit 'stations' along the way – on time, and at very effective travel rates.

Now the train – you guessed it – is the transformation program; the Journey to the Private Cloud!

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC.

The Time Paradox of Virtualization in the Cloud

Imported from http://consultingblogs.emc.com/ published May 02 2010

 

The 1st of May, Labor Day here in Germany (and elsewhere), always reminds me of a fundamental paradox in the world of virtualization, namely that of time. Just as workers have struggled to get better working conditions and amenable working hours, the world of virtualization has effectively campaigned for 'shorter provisioning times', better 'hardware operating conditions', and the fundamental respect of its fellow 'datacenter processes' – all with a strong emphasis on the right of virtual assets to get paid less for a fair day's worth of work.

The advent of virtualization of operating systems onto hypervisor-based platforms has created a dilemma of sorts for the operator/administrator, as well as for all the datacenter/IT process knowledge accumulated over the years. Virtual systems are no longer playing catch-up to the demands of their human masters, personified by business/service owners, as in days long gone.

It used to be typical for a business unit to put in a request for a new operating environment including hardware, software, operating systems etc. and be lucky if it was completed in under a month (including delivery lead times if hardware needed to be ordered), never mind the scheduling around human administrators. In stark counterpoint, the same environment can be provisioned in a virtual environment in minutes. (PS: if this is not the case, then you should be talking to EMC Infrastructure Consulting, amongst others, to get at least this functionality.)

Therein lies the issue. Demand elements, such as business units, know that things can be done in minutes. As a result, the business's patience with IT resource deliveries that may take weeks is wearing thin. One has only to look at some of the administrators at some of our clients to feel the pain. However, the humble IT service provider should not necessarily be the point of focus here.

In most large organizations that I have had the pleasure of assisting, one notices an accumulation of processes providing a structured pathway between the business and IT. The processes form a logical bridge spanning the traditional chasm of objectives between the two (although that is another topic for another day).

These processes have been jointly built up over time by both the business and IT – a partnership of shared approaches that provides the meeting place of shared goals. These processes are valid constructs, and complexity in processes typically reflects an increasingly complex operating environment for both partners, with a high factored-in risk should things actually go wrong.

In short, the more complex an organization, the more complex its processes for matching IT supply with business demand. Even ITIL-based processes, built on best practices, share these fundamental characteristics. Indeed, through ITIL, it could be said that the processes have 'gone forth and multiplied' to such an extent that there is perhaps an over-compensation for the weak service delivery of the past. Nonetheless, ITIL-based processes provide a consistent means of delivering quality service.

These very same processes also provided a 'time' buffer between business demand and the supplier providing the requested IT resources. This was partially mandated by both partners, and the business accepted it, although there is constant downward pressure to do more, faster and at less cost.

Voilà, in comes Mr. Virtualization and makes everyone an offer they cannot refuse. The benefits for service provision and management in the virtual environment are legendary for some; as part of EMC Consulting, we live this every day. It is real. It is amazing (when one thinks of the traditional approach of provisioning operating systems on physical hardware). Frankly, the features for high availability, disaster recovery and indeed business continuity alone are very strong reasons for moving lock, stock and barrel to this virtual world.

There has been talk of self-service portals as the means of bringing demand closer to the IT supply. However, in large organizations this simply does not work. There is a reluctance to let all and sundry pay for and provision their own resources – loss of control is cited as the main reason. There are others, of course, and they are dependent on the underlying physical asset provisioning process – let’s face it, folks, in a datacenter there are finite levels of power, space, cooling and cabling. The reasons for not letting things happen too quickly are pretty real.

Nonetheless, it may well be time for the business and IT partners to get around a table. They should revisit the reasons for implementing the myriad processes of the past, and determine whether, in the brave new world of virtualization, these can be reduced, eliminated, totally automated – whatever it takes to get the fabled speed out of the virtual infrastructure.

EMC Infrastructure Consulting, as part of its drive to the clouds, focuses a lot on the base processes of an organization. We try to make sense of the landscape and discuss with the business, at all levels, whether the traditional way of doing things still makes sense.

For all the best intentions in the world, there is a massive braking action from the entrenched IT and business establishment in favour of continuing to work with current processes. Hey, if it is not broken, then there is no need to fix it, right?

The processes are not broken; they are simply not "fit for purpose". Servicing thousands of IT requests from the business simply cannot be done if every request needs to go through a process that is not automated, not rationalized, and requires panels of staff to evaluate and approve. Clearly the level of risk does need to be mitigated in some fashion, but slowing things down is certainly not the best way to do it.

So the next time you find yourself with the heroes of virtualization – the administrators, 'service managers' and 'business sponsors' who have seen the light and actively encourage the organization to move to virtual platforms – please keep in mind that these very same people are the potential route to optimization at the process level. As consultants, part of our role is to structure that optimization, help present it to the various stakeholders, and assist in the process transformation that allows the organization to meet its own aspirations.

 

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC.