Virtualization Processes

A Resurgent SPARC platform for Enterprise Cloud Workloads (Part 2) - SPARC T5

Some time ago, I blogged about the resurgence of the SPARC platform. The then newly designed SPARC T4 was showing tremendous promise in its own right, well placed to take up its former mantle of innovation leader for extreme workloads running on the Solaris 11 operating system.

Indeed, it was used as the driving engine of the SPARC SuperCluster, delivering not just massive acceleration of Oracle Database workloads through Exadata Storage Cell technology, but also firmware-embedded, near-zero-overhead virtualization: logical domains that carve up the physical hardware into electrically separate partitions, and Solaris Zones that provide near-native "virtual machines" sharing a single installed Solaris operating system.

Up to 128 virtual machines (zones) are supported on a system - a vast improvement over the 20-30 one typically gets under VMware-like hypervisors!

This welcome addition to the wider Oracle engineered systems family allowed the missing parts of the datacenter to be consolidated - parts typically glossed over or skipped entirely when virtualization with VMware-like hypervisors was discussed. Customers were aware that their mission-critical workloads could not always run on an x86 platform, whose performance was then reduced further by a hypervisor, when large data set manipulation was required.

Well, the rumor mills have started in the run-up to Oracle OpenWorld 2012 at the end of September. One of the interesting areas is the "possible" announcement of the SPARC T5 processor. This is interesting in its own right, as we have steadily seen the SPARC T4, and now the T5, gain ever greater embedded capability in silicon to drive database consolidation and indeed the entire WebLogic middleware stack, together with high-end vertical applications such as SAP, E-Business Suite, Siebel CRM and so on.

Speculating on those "rumors" and the public Oracle SPARC roadmap, I'd like to indicate where I see this new chip making inroads into those extreme cloud workload environments whilst maintaining the paradigm of continuous consolidation. That paradigm, which I outlined in a blog in 2010, is still very relevant - the SPARC T5 provides alternative avenues to simply following the crowd on x86.

Questioning "Datacenter Wisdom"

The new SPARC T5 will have, according to the roadmap, the following features and technologies:

  • Increasing System-on-a-Chip (SOC) orientation providing ever more enhanced silicon accelerators for offloading tasks that software typically struggles with at cloud scale. This combines cores, memory controllers, I/O ports, accelerators and network interface controllers providing a very utilitarian design.
  • 16 cores, up from the T4's 8. This takes the T5 right up to the top end in core terms.
  • 8 threads per core - giving 128 threads of execution per processor providing exceptional performance for threaded applications such as with Java and indeed the entire SOA environment
  • Core speeds of 3.6GHz providing exceptional single threaded performance as well as the intelligence to detect thread workloads dynamically (think chip level thread workload elasticity)
  • Move to 28nm from 40nm - continuous consolidation paradigm being applied at silicon level
  • Crossbar bandwidth of 1TB/s (twice that of the T4) providing exceptional straight line scaling for applications as well as supporting the glueless NUMA design of the T5
  • Move to PCIe Generation 3 and 1TB/s memory bandwidth using 1GHz DDR3 memory chips will start to provide the means of creating very large memory server configurations (think double-digit TB of RAM for all-in-memory workload processing)
  • QDR (40Gbps) Infiniband private networking
  • 10GbE Public networking
  • Database workload stacking becomes even more capable and effective than simple hypervisor based virtualization for datacenter estate consolidation at multiple levels (storage, server, network and licensed core levels)
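With 16 cores of 8 threads each, a single T5 socket presents 128 hardware threads to the operating system. The sketch below - illustrative Python rather than the Java/SOA stacks the post has in mind, with all names invented - shows the basic pattern such a chip rewards: sizing a worker pool to the hardware thread count so throughput-oriented work keeps every thread busy.

```python
import concurrent.futures
import os

# The T5 presents 16 cores x 8 threads = 128 hardware threads to the OS.
# A throughput-oriented service can size its worker pool to match; here
# os.cpu_count() lets the sketch adapt to whatever machine it runs on.
HARDWARE_THREADS = os.cpu_count() or 128

def handle_request(req_id: int) -> str:
    # Stand-in for an I/O- or lock-bound unit of work (e.g. a SOA call).
    return f"request-{req_id} done"

def serve(n_requests: int) -> list:
    # Map the incoming requests across a pool sized to the hardware threads.
    with concurrent.futures.ThreadPoolExecutor(max_workers=HARDWARE_THREADS) as pool:
        return list(pool.map(handle_request, range(n_requests)))

if __name__ == "__main__":
    results = serve(256)
    print(len(results))
```

The point is not the Python itself but the shape: heavily threaded middleware gets its scaling "for free" from a chip with this many hardware threads, without a hypervisor in the path.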

This is really impressive at the processor level by itself, but the features on the roadmap aligned to the T5 are possibly the real crown jewels:

  • on-die crypto accelerators for encryption (RSA, DH, DSA, ECC, AES, DES, 3DES, Camellia, Kasumi) providing excellent performance through offloading. This is particularly relevant in multi-tenant cloud environments
  • on-die message digest and hashing accelerators (CRC32c, MD5, SHA-1, SHA-224, SHA-256, SHA-384, SHA-512) providing excellent security offloading. Again particularly relevant in multi-tenant environments
  • on-die accelerator for random number generation
  • PCIe Generation 3 opens the door to even faster Infiniband networking (56Gbps instead of the current 40Gbps - with active-active links being possible to drive at wire speed)
  • Hardware based compression which will seriously reduce the storage footprint of databases. This will provide further consolidation and optimization of database information architectures.
  • Columnar database acceleration and Oracle Number acceleration will provide extremely fast access to structured information. Further, when combined with in-memory structures, the database will literally be roaring!
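The digest and compression accelerators target exactly the operations that otherwise burn general-purpose CPU cycles at cloud scale. As a rough, hedged illustration of what gets offloaded, the Python sketch below performs the same SHA-256 digest and payload compression in software (Python's hashlib delegates to OpenSSL, which on capable platforms may itself route to hardware instructions):

```python
import hashlib
import zlib

# Hash and compress a sample payload - the kinds of operations the T5
# roadmap moves into silicon (SHA-2 digests, hardware compression).
# Payload contents and sizes here are invented for illustration only.
payload = b"enterprise cloud workload " * 4096  # ~100 KB of repetitive data

sha256 = hashlib.sha256(payload).hexdigest()
compressed = zlib.compress(payload, level=6)

ratio = len(payload) / len(compressed)
print(f"SHA-256: {sha256[:16]}...")
print(f"compressed {len(payload)} -> {len(compressed)} bytes "
      f"(ratio {ratio:.1f}x)")
```

Do this per block, per tenant, per request, and the appeal of doing it in dedicated silicon rather than on the cores running the database becomes obvious.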

Indeed, when we consider that the Exadata Storage Cells will also be enhanced to support new chip generations, greater flash density and other optimizations, the next SPARC SuperCluster (which embeds those Exadata Storage Cells) will literally be one of the best performing database platforms on the planet!

To ignore the new SPARC T5 (whenever it arrives) is to really miss a trick. The embedded technology provides true, sticky competitive advantage to anyone running a database workload or indeed multi-threaded applications. As a Java, middleware and SOA platform, as well as a vertical application platform, the enterprise can seriously benefit from this new innovation.

Why is this important for the CIO & CFO?

CIOs and CFOs are constantly being bombarded with messages from IT that x86 is the only way to go, that Linux is the only way to go, that VMware is the only way to go. As most CFOs will have noted by now:

  • Financially speaking - the x86 servers may have been cheaper per unit, but the number of units is so large to get the job done that any financial advantage that might have been there has evaporated!
  • Overall end-2-end costs for those services that the CIO/CFO signed off on are never really well calculated for the current environment.
  • Investment should be focused on those activities that support revenue streams and those technologies that will continue to support them for at least the next decade - with capacity upgrades, of course
  • There must be other ways of doing things that make life easier and more predictable

Well, Engineered Systems with the new SPARC T5 represent a way for the CIO/CFO to power those projects that need investment, which in turn drive revenue and value. The ability to roll in the SPARC SuperCluster, or any other Engineered System, is going to be instrumental in:

  • Shortening project cycles at the infrastructure level
    • don't lose 6 months on a critical ERP/CRM/custom application project through provisioning hardware, unexpected billing for general infrastructure layers (such as networking) that have nothing to do with the project, IT trying to tune and assemble, or getting stuck in multi-vendor contract and support negotiations
    • That time can be literally worth millions - why lose that value?
  • Concentrate valuable and sparse investment strategies literally to the last square meter in the datacenter!
    • If that next project is a risk management platform, then IT should be able to assign, down to the last datacenter floor tile, exactly the resources - and the cost - needed for that one project alone
    • Project-based or zero-based budgeting will allow projects to come online faster and more predictably, reusing existing platforms to deal with the load as well as supporting continuous workload consolidation paradigms
    • Finance enterprise architecture projects that put in the enabling conditions to support faster turnaround for critical revenue focused/margin increasing project activity

Engineered Systems are already using the technologies that the rest of the industry is trying to re-package to meet the challenges customers are facing now and in the coming years. The lead is not just in technology but also in the approach that customers are demanding - specific investments balanced with specific revenue-generating, high-yield business returns.

As a CIO it is important to recognize the value that Engineered Systems and the SPARC platform, as part of an overall datacenter landscape, bring in addressing key business requirements, ensuring an overall simplification of the datacenter challenge and of large CAPEX requirements in general.

As Oracle and others acquire, or organically develop, new capabilities in customer-facing technologies and in managing exabyte data sets, it becomes strategically important to understand how such scale can be dealt with.

Hardware alone is not the answer. Operating systems need to be able to deal with big thinking and big strategy, as do applications and the hardware beneath them. By creating balanced designs that can then scale out, a consistent and effective execution strategy can be managed at the CIO/CTO/CFO level, ensuring that business is not hindered but encouraged to the maximum by removing barriers that IT may well have propagated with the state of the art of many years ago.

Engineered Systems enable and weaponize the datacenter to directly handle the real-time enterprise. High-end operating systems such as Solaris, and the SPARC processor roadmap, are dealing with the notions of terabyte datasets, millions of execution threads, and thousands of logical domains each hosting hundreds of zones (virtual machines) per purchased core.

Simply carving up a physical server's resources to make up for the deficiencies of operating systems and applications in dealing with workloads can't be an answer by itself. This is also partly what is fueling Platform-as-a-Service strategies. The real question is how to get systems working cooperatively to deal with more of the same workload (e.g. database access or web content for millions of users), or indeed with different workloads spread across systems transparently!

High performance computing fields have been doing just this with stunning results albeit at extreme cost conditions and limited workloads. Engineered systems are facilitating this thinking at scale with relatively modest investment for the workloads being supported.

It is this big thinking from organizations such as Oracle and others, who are used to dealing with petabytes of data and millions of concurrent users, that can fulfill requirements expressed by the CIO/CTO/CFO teams. If millions of users needing web/content/database/analytics/billing can be serviced per square meter of datacenter space - why not do it?

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by Oracle and does not necessarily reflect the views and opinions of Oracle.

Datacenter Wisdom - Engineered Systems Must be Doing Something right! (Part 1 - Storage Layer)

Looking back over the last 2 years or so, we can start to see an emerging pattern of acquisitions and general IT industry maneuvering suggesting that customer demand and the packaging of technological capability for specific workloads are more in alignment than ever.

I wanted to write a couple of blogs to capture this in the context of the datacenter and of wider Oracle engineered systems penetration.

I will start with the storage layer, as that seems to have garnered tremendous change in the last 6 months alone, although the pattern was already carved out in the early Oracle Exadata release of April 2010 (there is a nice blog on this from Kerry Osborne - Fun with Exadata v2) with its innovative bundling of commodity hardware and specialized software capabilities.

Questioning "Datacenter Wisdom"

As you may know, Oracle's Exadata v2 represents a sophisticated blend of balanced components for the tasks undertaken by the Oracle Database, whether it is being used for high-transaction OLTP or as a long-running, query-intensive data warehouse. Technologies include:

  • Commodity x86 servers with large memory footprints or high core counts for database nodes
  • x86 servers / Oracle Enterprise Linux for Exadata storage servers
  • Combining simple server based storage in clusters to give enterprise storage array capabilities
  • QDR (40Gbps) Infiniband private networking
  • 10GbE Public networking
  • SAS or SATA interfaced disks for high performance or high capacity
  • PCIe Flash cards
  • Database workload stacking as a more effective means than simple hypervisor based virtualization for datacenter estate consolidation at multiple levels (storage, server, network and licensed core levels)

Binding this together is the Oracle 11gR2 enterprise database platform, Oracle RAC database cluster technology allowing multiple servers to work in parallel on the same database, and the Exadata Storage Server (ESS) software supporting the enhancements that facilitate intelligent caching of SQL result sets, offloading of queries, and storage indexes. There is a great blog from Kevin Closson - Seven Fundamentals Everyone Should Know about Exadata - that covers this in more detail.
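The storage-index idea is worth a concrete sketch: keep cheap min/max metadata per storage region so the storage tier can prove, without reading the data, that a region cannot contain rows matching a predicate. The toy Python below illustrates only the concept - the names, region size and granularity are invented, not Oracle's implementation.

```python
# A toy sketch of the storage-index idea behind Exadata offloading:
# record min/max per fixed-size storage region, then skip any region
# whose range cannot overlap the query predicate, saving the I/O of
# reading it at all. Illustrative only - not Oracle's implementation.

REGION_SIZE = 4  # rows per region; real storage regions are ~1 MB

def build_storage_index(rows):
    """Record (min, max) for each fixed-size region of the table."""
    index = []
    for start in range(0, len(rows), REGION_SIZE):
        region = rows[start:start + REGION_SIZE]
        index.append((min(region), max(region)))
    return index

def smart_scan(rows, index, lo, hi):
    """Scan only regions whose min/max range overlaps [lo, hi]."""
    matches, regions_read = [], 0
    for i, (rmin, rmax) in enumerate(index):
        if rmax < lo or rmin > hi:
            continue  # the index proves no row in this region can match
        regions_read += 1
        start = i * REGION_SIZE
        matches.extend(v for v in rows[start:start + REGION_SIZE]
                       if lo <= v <= hi)
    return matches, regions_read

table = [10, 12, 11, 13,  50, 55, 52, 51,  90, 95, 91, 93]
idx = build_storage_index(table)
hits, read = smart_scan(table, idx, 50, 60)
print(hits, read)  # only 1 of 3 regions is actually read
```

On a real Exadata cell the same effect applies per megabyte of disk, which is why offloaded scans can discard the bulk of a table's I/O before it ever crosses the InfiniBand fabric.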

Looking at the IT industry we see:

  • EMC/Isilon acquisition that marries multiple NAS server nodes to an Infiniband fabric for scale-out NAS - indicating that Infiniband has a significant role to play in binding loosely connected servers for massive scalability.
  • EMC/Data Domain+Spectralogic showing that tape is not in fact dead as many are predicting and that it remains an extremely low cost media for Petabyte storage.
  • Embedding flash storage (SSD or PCIe based) into servers, closer to the workload than simply going across the SAN/LAN wires to an enterprise storage array - showing that local flash across a distributed storage-node fabric can be far more effective than SAN storage for enterprise workloads.
  • EMC and NetApp using flash intelligently as a cache, rather than as a straight replacement for spinning disk, significantly enhancing certain workloads - as we see in EMC's VFCache implementation and NetApp's Intelligent Caching.
  • Monolithic SAN-attached arrays moving towards modular, scalable arrays - supporting the approach taken by Oracle's Pillar Axiom, which scales I/O, storage capacity and performance independently using smaller intelligent nodes. EMC is doing this with VMAX engines, NetApp with its GX (Spinnaker) architecture, and even IBM is going that way.

All these trends - and it is not really important in what chronological order they happened, or that I took some examples from leaders in their fields - clearly indicate a convergence of technological threads.

I often hear from clients that Exadata is too new, uses strange Infiniband bits and has no link to a SAN array. Well clearly the entire industry is moving that way. Customers are indicating with their voices what they would like to have - capability and simplicity for the workloads that drive their revenue.

Why is this important for the CIO?

CIOs are typically confronted with a range of technologies to solve a limited array of challenges. They are constantly asked by the business, and more recently by CFOs, to make sure that they are:

  • using future-proofed technologies,
  • simplifying vendor management,
  • focusing investment on those activities that support revenue streams,
  • aligning IT with the business!

Well Engineered Systems are exactly all that. Oracle literally went back to the drawing board and questioned why certain things were done in certain ways in the past and what direct benefit that provided clients.

Engineered systems are already using the technologies that the rest of the industry is trying to re-package to meet the challenges customers are facing now and in the coming years.

Oracle, I believe, has at least a 2-year advantage in that they:

  • learnt from the early stages of the market,
  • fine-tuned their offerings,
  • aligned with the support requirements of such dense capability blocks,
  • helped customers come to grips with such a cultural change,
  • and are continuing to add to their "magic sauce", still engineering the best of commodity hardware to further increase the value of Engineered Systems.

The lead is not just in technology but also the approach that customers are demanding - specific investments balanced with specific revenue generating high-yield business challenges.

As a CIO it is important to recognize the value that Engineered Systems bring in addressing key business requirements, ensuring an overall simplification of the datacenter challenge and of large CAPEX requirements in general.

Engineered Systems provide the ability for IT to transform itself providing directly relevant Business Services.

It is not a general-purpose approach in which the IT organization can merely hope for transformation - Engineered Systems enable and weaponize the datacenter to directly fulfill requirements expressed by the CIO team through intense, constant dialogue with business leaders!

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by Oracle and does not necessarily reflect the views and opinions of Oracle.

Cloud Transformatics - CIOs/CTOs/CFOs Challenging Industry "Wisdom"

Hi folks. After a long absence from blogging, attributable to a move of apartment, a change of job and the arrival of some grandchildren, I thought it was time to take up the proverbial pen again.

One of the big changes was the change of jobs - in my case moving from EMC to Oracle, Blue to Red literally. In making that move there were substantial levels of orientation and technology to absorb.

Let's face it, it is not every day that you are expected to learn about all those high-end enterprise applications, and that takes time. You probably know what I am talking about - assets absorbed from Sun, StorageTek and a host of other acquisitions, coupled with cloud, databases, big data, data architecture strategies - and the list goes on.

One of the things that did come up was understanding what enterprise and industry wisdom mean to people. At my previous employer, enterprise applications were big complex constructions using, by association, big complex storage arrays, big complex SANs and big complex backup environments. If possible, this should all be wrapped in a big complex virtual infrastructure based on VMware, preferably using Cisco UCS server blades and switches. Fine. No problem - as there was also value to be derived for the client.

However, the nature of complex applications at EMC was at that time looking at things like large Microsoft Exchange deployments, or database environments supporting ERP, Microsoft SharePoint and indeed mainly looking at the storage ramifications of thousands of users hitting disk in parallel.

However, the enterprise is vastly more complex than that. The ability to interact and manage the user experience of customers through sophisticated CRM offerings, the ability to manage oceans of structured data using massively parallel databases and indeed prepare for new techs like big data is a daunting task.

It is no wonder that the world of x86 server virtualization only penetrated to a certain level of an organization. The industry wisdom of "it must be on x86", "must use horizontal pools of infrastructure resources", "must use consolidated centralized storage" and "must be virtualized in VMware or some other hypervisor to be Cloud ready" is not a sufficiently complete answer to tackle these themes.

That did not stop every Tom, Dick and Harry from trying to sell a hypervisor of some sort with all the associated paraphernalia. This in turn generated a spree of IT organizations trying to recreate what they already had but just delivered "slightly differently".

Let's face it folks, Cloud is not "new" in the technology sense - no matter how many times the marketing departments of certain organizations try to convince you otherwise. More accessible - yes. Easier to use - yes. New - no!

It is no wonder then that it has taken the IT industry some time to grapple with what Cloud and this "industry wisdom" all mean for them. This has led to some important questions arising:

  • "must it always be x86?",
  • "should I forget all the last 30 years of IT knowledge?",
  • "what is the role of IT in an organization?",
  • "Dare a different approach be used not based on x86 exclusively?",
  • "can a normal Unix be used in conjunction with Linux/Windows on non-x86?",
  • "can I get sustained competitive advantage by being only commodity x86/Linux based?"

With the current industry trends and wisdom, I believe it is incumbent on CFOs, CIOs and indeed CTOs to take a little step back. CTOs have the chance to examine technology for fitness for purpose and innovation, whilst the CIO needs to ensure its practical application for the good of the organization.

The recent trend of CIOs reporting to CFOs is an interesting change. The CFO represents the ability of an organization to challenge what it is doing and why it is doing this. Innovation and creative thinking driven by financial stewardship!

CFO Trigger to Innovate and Question "Industry Wisdom"

Rather than viewing CFO oversight as a burden, it may well be that this "get back to basics" is a blessing in disguise. IT organizations get entrenched in their ways. There are many reasons to continue doing some things as they are - but plenty that need to change in light of CFO oversight.

However, there are some things that simply need to have a much higher level of efficiency. I have been meeting many customers using the Oracle database, and am surprised to see how few of them really use all the features that are available.

Indeed, the IT organization says "thou be too expensive matey", and then goes off on a spending binge to bring in a complete new stack of VMware with SAN storage, switches, servers and what have you - whilst reinventing ITIL so that it fits VMware operations. Ummm. That is not necessarily cheaper.

Don't get me wrong. This is not a mistake - but it is based on a pattern of break it all up so that it fits on a small x86 server and then virtualize it - how else do you sell a VMware/chargeable hypervisor license?

Well, there are some workloads that fill up an entire machine - databases supporting large ERP installations, for example. They can be broken up, but certain features like live VM migration of a running DB instance to another machine don't make a lot of sense when the VM has hundreds of gigabytes or terabytes of storage and half a terabyte of RAM.

In this case the reverse pattern is perhaps a better one - literally using machines in aggregation to service the workload, spreading the DB instance across multiple machines. This already exists in Oracle RAC. It is not new - and works pretty well.

We see other patterns like this in the Google datacenters - where search workloads are "spread" across servers working in concert - wow sounds like a cluster!

The CFO can, and indeed should, work with CIOs/CTOs to start questioning investment decisions. The one thing I saw with IT organizations using VMware, SANs, SAN switches and so on was that, from a financial perspective, something was not quite right:

  • IT Investment that was leveraged on the back of a genuine critical business project was spread around the datacenter infrastructure
  • Storage arrays were expanded to accommodate the project, new switches were acquired, new licenses purchased etc.
  • Everything was mixed up - to the point that IT could not say where the money went, and the business could not verify that it had received what it actually wanted.
  • Billing and chargeback were either non-existent or so primitively applied that it failed to gain traction

This was all justified through the IT organization rejigging its numbers. Not a good sign! The strength of the original investment was diluted through spreading it around such that the business unit really needing those resources was effectively "short changed" and trying to simply "make do".

Why is this important for the CIO?

At a time of crisis and oversight, it is incumbent on the CIO to question the approach put on his/her table for transformation or rejuvenation projects.

Verify whether there is another approach. Verify the patterns that are emerging. Remember, the reason we had wholesale x86 virtualization was that the servers running their particular workloads were not using hardware to its maximum capability or IT had simply decided for isolation reasons to add more servers etc.

  • If servers are being used to their practical maximums - does the IT organization still need to invest in a chargeable hypervisor?
  • If isolation can be provided differently than by encapsulating an entire operating system stack (a VM) - does the IT organization need to still make virtual machines and acquire more licenses for a hypervisor and more resources for server/SAN etc?

x86 was also the commodity route - it costs less, it doesn't matter what the workload is, just get ton-loads of them, and we'll worry about the datacenter space and power we need later. Well, mainframes do a hell of a lot of work - albeit complex to manage. Big-iron Unix boxes were simpler to operate while also handling a lot of complex workloads in parallel.

  • If a larger server can intrinsically do the job of many smaller servers with high levels of reliability, isolation and performance - do we need to invest in many smaller servers? - do we need to get x86 only? - must it be Linux?

If we examine the world of tomorrow - the idea of holding structured information of some sort even if acquired from unstructured data sources as in the case of big data is still valid. Logical isolation is still needed and we do not need to throw out the last 3 decades of database innovation just because some marketing scheme is based on that.

The same and more applies to applications - value-added apps are available to an increasingly mobile customer base demanding access 24x7 from any device. The web, and indeed technologies that encapsulate applications like Java, are being used to address these needs. SOA architectures were developed to decompose applications into reusable components that could be recomposed to add more value with shorter time-to-value.

These are the real enterprise applications that drive value. That is where the investment should flow. Investment needs to be concentrated into those areas to make the dent in the business revenue that is needed. This implies a "vertical" architecture model of investment.

However, IT organizations need to manage large estates. They think "across" those estates - a "horizontal" pattern of activity. Good for IT - bad for concentrated vertical investment based on projects.

However, I believe that horizontal manageability can be achieved through management tools whilst allowing investment to be applied vertically for maximum results. The CIO can create the conditions where this is encouraged and create a level of transparency in datacenters and IT organizations that has hitherto been lacking.

This gradual transition to running IT as a business itself with the same level of fiduciary responsibility that the rest of the business is subjected to can and should be the driver for the recomposition of IT services and the new role that IT needs to play in ever tighter markets.

In the next series of blogs I will take some of these themes and see how that can be done from an architectural pattern and technology point of view for CIOs in the context of business driven projects.

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by Oracle and does not necessarily reflect the views and opinions of Oracle.

The Time Paradox of Virtualization in the Cloud

Imported from http://consultingblogs.emc.com/ published May 02 2010

 

The 1st of May, Labor Day here in Germany (and elsewhere), always reminds me of a fundamental paradox in the world of virtualization, namely that of time. Just as workers struggle to get better working conditions and amenable working hours, the world of virtualization has effectively campaigned for 'shorter provisioning times', better 'hardware operating conditions', and the fundamental respect of its fellow 'datacenter processes' - with a strong emphasis on the right of virtual assets to be paid less for a fair day's worth of work.

The advent of the virtualization of operating systems onto a hypervisor based platform has created a dilemma of sorts for the operator/administrator as well as all the accumulated datacenter/IT process knowledge over the years. Virtual systems are no longer playing catch up to demands of their human masters personified by business/service owners, as in days long gone.

It used to be typical for a business unit to put in a request for a new operating environment - including hardware, software, operating systems and so on - and be lucky if that was completed in under a month (including lead delivery times if hardware needed to be ordered), never mind the scheduling around human administrators. In stark counterpoint, in the virtual environment the same environment can be provisioned in minutes. (PS: If this is not the case, then you should be talking to EMC Infrastructure Consulting, amongst others, to get at least this functionality.)

Therein lies the issue. Demand elements, such as business units, know that things can be done in minutes. As a result, the patience of the business for IT resource delivery that may take weeks is wearing thin. One has only to look at some of our clients' administrators to feel the pain. However, the humble IT service provider should not necessarily be the point of focus here.

In most large organizations that I have had the pleasure to be allowed to assist, one notices an accumulation of processes used to provide a structured pathway between the business and IT. The processes provide a logical bridge to span the traditional chasm of objectives between the two (although that is another topic for another day).

These processes have been jointly built up over time by both the business and IT - a partnership of shared approaches that provides a meeting place for shared goals. These processes are valid constructs, and complexity in processes typically reflects an increasingly complex operating environment for both partners, with a high factored-in risk should things actually go wrong.

In short, the more complex an organization, the more complex its processes for matching IT supply with business demand. Even ITIL-based processes, built on best practices, share these fundamental characteristics. Indeed, through ITIL it could be said that the processes have 'gone forth and multiplied' to such an extent that there is perhaps an over-compensation for the weak service delivery of the past. Nonetheless, ITIL-based processes provide a consistent means of delivering quality service.

These very same processes also provided a 'time' buffer between business demand and the supplier providing the requested IT resources. This was partially mandated by both partners, and the business accepted it, although there is constant downward pressure to do more, faster, and at less cost.

Voilà, in comes Mr. Virtualization and makes everyone an offer they cannot refuse. The benefits for service provision and management in the virtual environment are legendary for some; as part of EMC Consulting, we live this every day. It is real. It is amazing (when one thinks of the traditional approach of provisioning operating systems on physical hardware). Frankly, the features for high availability, disaster recovery and indeed business continuity alone are very strong reasons for moving lock, stock and barrel to this virtual world.

There has been talk of self-service portals as the means of bringing demand closer to the IT supply. However, in large organizations this simply does not work. There is a reluctance to let all and sundry pay for and provision their own resources - loss of control is cited as the main reason. There are others of course, and they depend on the underlying physical asset provisioning process - let's face it folks, in a datacenter there are finite levels of power, space, cooling and cabling. The reasons for not letting things happen too quickly are pretty real.

Nonetheless, it may well be time for the business and IT partners to get around a table. They should understand the reasons for implementing the myriad processes of the past, and determine if in the brave new world of virtualization, these can be reduced, eliminated, totally automated or whatever it takes to be able to get the fabled speed out of the virtual infrastructure.

EMC Infrastructure Consulting, as part of its drive to the clouds, focuses a lot on the base processes of an organization. We look at trying to make sense of the landscape and discuss with the business at all levels if the traditional way of doing things makes sense.

For all the best intentions in the world, there is a massive braking action from the entrenched IT and business establishment in favor of continuing with current processes. Hey, if it is not broken, then there is no need to fix it, right?

Processes are not broken, they are simply not "fit-for-purpose". Servicing thousands of IT requests from the business simply cannot be done if every request needs to go through a process that is not automated, not rationalized, and requires panels of staff to evaluate and approve. Clearly the level of risk does need to be mitigated in some fashion, but certainly slowing things down will not be the best solution to that.

So the next time you find yourself with the heroes of virtualization: administrators, 'service managers', 'business sponsors' who have seen the light and actively encourage the organization to move to virtual platforms only - please keep in mind that these very same people are the potential route to optimization at process level. As consultants, part of our role is to structure the optimization, help present that to the various stakeholders and assist in the process transformation allowing the organization to meet its own aspirations.

 

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC.