VMware

Competition Heating up for VMware - Can one still charge for a Hypervisor?

In all areas of the market, we are beginning to see hypervisors cropping up. Each has its own pros and cons, but the cumulative effect on the virtualization market is important for strategists to note. Even VMware's new mobile virtualization platform is being directly challenged by Xen/KVM on ARM.

VMware is constantly touted as being somewhere between two and four years ahead of the market. I don't agree with that point of view - time and competitor movements in the wider market are not linear in their behaviour.

Indeed, even the stronghold of the Intel-VMware platform is being gradually encroached upon by ARM and AMD. Both are building virtualization extensions into silicon similar to Intel's, with the added advantage that open source hypervisors are gaining traction in a development community potentially many times the size of VMware's.

It is a sobering thought that VMware itself started out in academia and that early ESX releases relied on a Linux-based service console. Academia is now firmly behind open source hypervisors. Once the likes of Red Hat, Citrix and Oracle start weighing in with their development expertise, and in Oracle's case with the added advantage of tuned hardware systems, it will be interesting to see whether VMware is still the only game in town.

 

Why is this important for the CIO?

CIOs balancing the long-term view against short-term tactical needs should understand that, when looking at becoming Cloud capable, VMware is not the only solution. The idea of "good enough" should be a strong motivator for product and solution selection.

Indeed, the CIO and team would be well advised to verify whether the savings they are expecting really will be delivered by a near-commodity hypervisor carrying substantial license costs, weighed against the organisational need to be cost efficient and to tap into the marketing value of the cloud.

Interestingly, in a more holistic sense, the fact that open source hypervisors continue their trend of becoming available on every imaginable hardware platform, including mobile, is in itself a strategic factor. New challengers to Intel and AMD are cropping up, and indeed platforms that had faded into the background over 2009/2010 are surging ahead in 2011-2012 for high-end enterprise workloads - as mentioned in the blog "A Resurgent SPARC Platform for Enterprise Cloud Workloads".

The Corporate Cloud Strategy will certainly benefit from this type of thinking. It will highlight potential alternatives. Depending on the time horizon that the strategy is valid for, "good enough" may well be enough to realize the competitive advantage that is being targeted.

Certainly learning to adapt your organization for the realities of the cloud world requires time. Innovation built upon such enabling platforms requires not just a focus on the infrastructure but the application development environment and ultimately the software services that are consumed.

Remember, it is the applications that deliver advantage. The quicker they are brought to market, and on platforms that allow cost efficiencies and agility, the better for the organization concerned. This in turn is leading to a stronger focus on appliances and engineered systems for enterprise virtualization... but that's for another blog, I think.

 

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC.

VMworld 2011 - 101 for Newbies

The VMworld 2011 conference in Las Vegas in September was a really cool event. Lots of new stuff to see, lots of customers to talk to and of course lots of presentations!

There are plenty of blogs out there that covered the conference and particular technologies and sessions. However, I found the VMworld Newbies material somewhat sparse.

This being my first VMworld attendance, there were a couple of points that I wanted to spell out for future VMworld Newbies:

  1. It can be 40°C in Vegas – take plenty of water with you at all times!
  2. All that water eventually needs to leave the body – the call of nature. Apply some basic business intelligence to locate a toilet:
    1. Know where all the toilets in the conference center are
    2. Look at sessions and know on which floors there are sparse sessions
    3. Run like hell to that sparsely populated floor – relief!
  3. There are packets of nuts/crisps/fruit on the floors at regular intervals – grab some as reserves that can be consumed in the session presentations
    1. You will need this – the brain is working overtime after all to suck all that information in
  4. In the session rooms:
    1. Find a seat – conveniently where you can read from the screens as well as be able to leave innocuously (aisle is best)
    2. The rooms are cold, you are not moving too much -> be prepared for the call of nature
  5. Session selection - this is the big one!
    1. VMware logic says select a session at any particular time and you will be so happy that you will stay in the room for the full duration of the session – I don’t think so!
    2. My plan
      1. Select the primary session of interest if there is still room available (the most popular sessions were rerun in any case)
      2. Select second and third sessions per session window paying particular attention to the locations (which room and which floor)
      3. Turn up to the primary session -> if rubbish/boring/irrelevant/etc then out you go to the second selected session
      4. If the primary was too full, go to the second session (stay if it is good); otherwise out you go and back to the primary – by then there is always room to attend
    3. The third session is if you are following a strategy of sampling the material in each session
      1. The first 10 minutes of a session are used for proclaiming the disclaimer that is mandatory to show on each slide deck
      2. The last 10 minutes of a session are typically useless (although in some sessions there was good question-answer at end – but rare)
      3. That just leaves the middle bit of possible interest
      4. The third selection basically lets you sample three different sessions and get the feel and orientation of each
      5. Result -> attend 3 sessions for the price of one!


Regarding the sessions: it is really not easy delivering these – my hat is off to every single one of the presenters! However, given the varying levels of focus and expertise in the subject material, it is simply necessary to dive out (or in) based on your own evaluation of the material.

Knowing the floor layout and where the rooms for your selected sessions are really saves some time. There are other tips, but these are the ones I used every single day to my personal benefit. An evening review of the material helped take it all in.

Remember, it is a fun, action-packed, technology-infused event over many days, and it pays to stay fit enough to see it through. I saw many attendees fading at the end – although that may well have been because of the event parties!

 

So here comes VMworld 2012 already knocking on the door!....

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC.

VMware VCAP-DCA: Manage for Performance Course

Imported from http://consultingblogs.emc.com/ published Jul 04 2011

A couple of weeks back, I attended the VMware vSphere: Manage for Performance course. This is a recommended course for VCAP-DCA certification – the full title is (deep breath) – ‘VMware Certified Advanced Professional - Datacenter Certified Administrator’.

I just wanted to give a heads-up to all those others that have spent the last year focusing on and sharing experience of the VCAP certification process to show their great prowess in all things VMware!

I was somewhat skeptical of a performance-oriented course that lasts only 3 days. Typically in these new courses there seems to be a lot of "discussion". However, for that really to succeed, the course attendees need reasonably intensive experience of medium-to-large scale virtualization. It is critical that the trainer is someone who is engaged and motivated enough to perform the required level-setting in the group.

Our trainer really managed to do that – no easy task, believe me! The guys at QA-IQ know their stuff and bring it across in a really interactive style (a big thanks to the trainer, Nel Reinhart, who also shared great insight into rugby and finally got me to watch the Clint Eastwood/Morgan Freeman/Matt Damon film Invictus – excellent, by the way!).

Well, on to the course itself. There was actually a fair amount of hands-on for a change; the VCAP-DCD was just a tad heavy on case study and discussion material. These courses are expensive, and I believe that every attendee wants to get the most out of the material and be able to use it directly in their daily work environment.

What I personally liked about this VCAP-DCA performance course was that, for one of the first times, the terminology has been standardized – particularly in the areas of memory, virtual memory and all things paging related. This had been a major headache: when one speaks with administrators at client sites, or even with VMware personnel for that matter, there is a very large discrepancy between the words used, what was intended and the actual definitions of those words.

There were also some clearer guidelines regarding "overhead" when creating virtual machines, along with best practices that were actually pretty good.

One of the areas we discussed intensively in the course was technology convergence vectors and their effect on "best practices". As I principally work in high-end, large-scale environments, I get to see a lot of the cutting edge without getting too bogged down in the "techno-babble" that virtualization discussions sometimes lead to. You know what I mean – are 2x vCPU better than 1x vCPU, more memory or less, which guest operating system to use…

In the course we discussed some of the areas that are driving deep virtualization and continuous consolidation. We discussed the number of VMs that an ESX server can host, and the best practices around that. On the technology side, however, we are seeing massive increases in, say, network bandwidth – 10GbE in the mainstream, and 40/100GbE technologies already out there at the bleeding edge. The old notion that there is never enough network bandwidth is starting to disappear – leading to revisions of the consolidation ratios practically achieved by customers!
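
To make that bandwidth point concrete, here is a quick back-of-envelope sketch in Python. The per-VM bandwidth figure and the headroom factor are purely illustrative assumptions (not measurements or vendor guidance), but they show why fatter pipes push the practically achievable consolidation ratio upwards.

# Back-of-envelope: how many VMs fit on a host from a purely network point of view.
# All inputs are illustrative assumptions, not benchmarks.

def vms_per_host(nic_gbps, avg_vm_mbps, headroom=0.7):
    """Usable NIC bandwidth divided by the average per-VM demand."""
    usable_mbps = nic_gbps * 1000 * headroom
    return int(usable_mbps // avg_vm_mbps)

for nic_gbps in (1, 10, 40):
    limit = vms_per_host(nic_gbps, avg_vm_mbps=50)
    print(f"{nic_gbps:>2} GbE uplink -> ~{limit} VMs before the network alone is the bottleneck")

With a 1GbE uplink the network caps the host at a handful of VMs; at 10GbE and beyond, CPU and memory, rather than bandwidth, are usually the first constraints.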

Naturally, in actual exam and training course material this needs to be matched with the exam requirements – you are there to pass the exams, after all. However, I think that training courses are sometimes a more intimate circle where new technology, best practices in the market, challenges, issues, and things that plain "just don't work" can be discussed in confidence without revealing any customer names (very important) or secret-competitive "things".

That type of discussion may well be the real value of such courses – requiring techies to be receptive to that type of information - as increasingly competitive pressures are resulting in fewer customer references, less information sharing in an industry, and competitive “virtualization” advantage being fiercely guarded!

From an EMC Consulting point of view, it is great to be able to help educate administrators on the rationale for virtualization beyond simple consolidation. The aspirations of their businesses are discussed, as well as where they see areas of improvement for the virtualization technologies from a practical administrative point of view.

Correspondingly, the career administrators in such courses do a great job of educating consultants: they highlight how and why things go wrong on the ground, as well as the relationship-management improvements needed to distil the precious knowledge these administrators have rather than simply dismissing their ideas. It is also always great to hear how they are each addressing bulk administrative duties and automation in the context of their own unique datacenter ecosystems.

In any case, to all those VCAP-DCA’ers, keep at it and good luck when you do your exam! Right – back to the training materials……

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC.

Exchange Server 2010 – Keep In-house or BPOS?

Imported from http://consultingblogs.emc.com/ published May 28 2011

Exchange Server 2010 (E2K10) continues to be a baffling phenomenon. We have a raft of features to make E2K10 easier to manage when running at large scale. Integration with the Microsoft ecosystem (ADS, SharePoint, Outlook/Office) is of course excellent. We even hear that, with the use of the new DAG (replicated mailbox databases), backups are a thing of the past – apparently there is no need to back up E2K10 at all.

I have worked with all versions of Exchange, going back to when it was still MS Mail 3.x. Over the years I have been a big fan of this messaging system – mainly due to its integration ability and its use of database technology (well, JET actually) to provide many features that were novel at the time of their introduction. E2K10 is no exception. However, with the rise of the Cloud, and services such as Microsoft's Business Productivity Online Suite (BPOS) and other hosted Exchange service providers, the messaging is becoming increasingly unclear.

What do I mean with that last statement? Well, take a look at the short list of things I hear regularly with clients:

  • Don’t need to backup E2K10 – Microsoft told me so
  • Don’t need fast disks anymore – Microsoft told me so
  • Messaging is not core to my business – will outsource it
  • What added value does the internal IT provide that the Cloud offerings of Exchange cannot provide?
  • Cheaper to host Exchange mailboxes with Microsoft BPOS or another Service Provider

Well having seen in many large organizations what happens when the eMail service is not available, I would argue that messaging services are critical. Indeed, the more integration one has with other upstream applications that utilize eMail, the greater the dependency on a reliable environment. This would indicate that messaging services are core to the business, and indeed may be tightly linked to new service offerings.

The idea of not backing up data, while certainly very attractive, is a little off the mark. There are other reasons for backing up data than simply covering the "just in case" scenario – compliance, single-item recovery and litigation, among others, all require some preservation of historic point-in-time copies of the messaging environment.

However, the last points – regarding cost, and whether it is more effective to host Exchange with Microsoft directly – are a sensitive topic for most administrators and indeed organizations. One of the reasons that Exchange is expensive is that it simply could not, in any easy fashion, cover the needs of the organization in terms of ease of administration, scalability, infrastructure needs, reliability and indeed cost. It does seem to me that Microsoft itself may well be partially responsible for the "high cost" of messaging services.

Why is this Relevant for Virtualization and the Cloud?

Well, many of the cost elements of Exchange environments in particular relate to the enormous number of dedicated servers that were required to host the various Exchange server roles. The I/O profile of the messaging service was also not very conducive to using anything less than high-end disks in performance-oriented RAID groups.

Administration of bulk activities such as moving mailboxes, renaming servers/organizations, backup/restore and virus scanning was not particularly effective, to say the least.

Don't get me wrong, Exchange 2010 is a massive improvement over previous versions. I would put it akin to the change from Exchange 5.5 to Exchange 2000. The new PowerShell enhancements are great, and we are finally getting better I/O utilization, allowing us to use more cost-effective storage.

Where it all starts to go wrong is when Microsoft lays down support rules or gives out advice that goes against the prevailing wisdom of seasoned administrators:

  • Virtualization on Hyper-V is supported, whilst other hypervisors need to be in their Server Virtualization Validation Program (SVVP)
  • Certain advanced functions such as snapshots, mixing of Exchange server roles in a VM and certain vCPU:pCPU ratios are not supported
  • Low-performance disks are fine for messaging functions – but what about backup/restore/AV scanning/indexing etc.?
  • Still no flexible licensing options that allow for "pay-as-you-use" or allow cost savings from multi-core processors

Never mind the fact that there are thousands of organizations that have successfully virtualized their Exchange environments using VMware, saving serious amounts of money. Never mind that these organizations are enterprise class and run their servers at high utilization levels, receiving millions of emails daily whilst running hourly backups and daily virus scans. Never mind that most tier-1 partners of Microsoft offer qualified support for features such as snapshots for rapid backup/recovery.

Why then is Microsoft "scare-mongering" organizations into moving to BPOS – to save money, no less? The fact is that very few organizations truly know the cost of their eMail environment. How, then, can one say that it is too expensive to do eMail in-house?

The basis for calculating business cases also varies wildly. It is very difficult to put a price on the cost of operations for messaging environments – even a messaging team is not 100% utilized – and then to spread this across the total number of mailboxes.

Indeed, the cost of a mailbox per month seems to me not to be granular enough. What is the cost of a message? Who pays for inter-system messages? What about the cost of mailbox storage per month? What is the "true" cost per mailbox per month?
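
For illustration of what a more granular model could look like, here is a minimal sketch in Python. Every figure in it is a made-up placeholder, and the cost categories are assumptions; the point is only to show the mechanics of spreading infrastructure, licensing and a partially utilized operations team across mailboxes, messages and storage.

# Illustrative chargeback sketch - all numbers are placeholders, not real costs.

monthly_costs = {
    "servers_storage": 40000.0,  # amortized infrastructure per month
    "licenses":        15000.0,  # Exchange + OS + backup licensing
    "operations":      25000.0,  # share of the messaging team actually spent on eMail
    "datacenter":       8000.0,  # power, cooling and floor space allocation
}

mailboxes        = 20000
messages_month   = 6_000_000
storage_gb_total = 30000

total = sum(monthly_costs.values())

print(f"Total monthly cost       : {total:12,.2f}")
print(f"Cost per mailbox/month   : {total / mailboxes:12.2f}")
print(f"Cost per 1,000 messages  : {total / messages_month * 1000:12.2f}")
print(f"Cost per GB stored/month : {total / storage_gb_total:12.2f}")

Metering from a fully virtualized environment is what eventually turns those placeholders into real numbers.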

The private cloud, and 100% virtualization of Exchange in particular, is an opportunity that most large companies should not pass by so easily. It is the perfect application for verifying the cloud assumptions about elasticity, on-demand and metered usage, and for getting at the "true" cost of eMail services. As it is so well understood by internal administrators, a company can experience first-hand:

  • massive reduction in server resources needed with virtualization
  • resource metering per user per billable time period
  • billing systems alignment to cost of eMail services per user per month
  • operational process alignment for the Private Cloud way of doing things
  • eGRC can be applied and enforced with the necessary controls/tools
  • infrastructure business intelligence for zeroing in on further cost consolidation areas
  • the basis for your internal Private Cloud – complete with self-service portals and end-2-end provisioning

I always say that eMail is in some ways easier to virtualize than high-end database environments such as SAP. Start with the difficult cases and too much time is lost in the difficulties, and the organization gets too little of the Cloud's benefits as a result. The time-2-value and order-2-cash processes simply take too long with that approach.

With Exchange virtualization you can literally get started in a week or two once the infrastructure is on the ground – there are plenty of blueprints that can be utilized.

Why is this important for the CIO?

The CIO has the responsibility for setting IT direction in an organization. Simply following the scare-mongering of either vendors or outsourcing service providers will inevitably force you to move what may be a vital function for new product development out of your organization. Aside from this, there are many issues still regarding data confidentiality, compliance, and risk concerns that need to be tackled.

Personally, I would advise large enterprise shops to look at virtualizing their entire Microsoft estate, starting with Exchange Server. This will not only produce deep savings but, as experience shows, also provide better service with less downtime than in the past. You choose the type of differentiated service you would like to offer your users. You decide what services to include, with some being mandatory, like AV/malware/spam scanning.

Use this as the basis for creating your Private Cloud, and start to gradually migrate entire services to that new platform whilst decommissioning older servers. Linux is also part of that x86 server estate, which raises the obvious question of replatforming away from proprietary RISC architectures onto an x86 basis.

Innovation is an area where particular emphasis should be applied. Rather than your IT organization putting the brakes on anything that looks unfamiliar, you should be encouraging innovation. The Private Cloud should be freeing up “time” of your administrators.

These same administrators could be working more on IT-Project liaison roles to speed time-2-value initiatives. They can be creating virtual environments for application developers to get the next applications off the ground using incremental innovation with fast development cycles to bring new features online.

Once you are running all virtual, you will have a very good idea of what things really cost, where to optimize CAPEX/OPEX levels, and how you compare against the wider industry in terms of offering fair value for IT services to your user community.

Let legislation about data jurisdictions and chains of custody also mature. Push vendors for better terms on per-processor licensing, allowing "pay-as-you-use" models to come into play – not on their terms in their Clouds, but on your terms in your own Private Cloud initially. Remember, there are always choices in software. If Exchange+Microsoft won't cut it for you, then use an alternative, e.g. VMware+Zimbra.

Public Cloud offerings are not fully baked yet, but they represent the next wave of cost consolidation. Recent high-profile outages at Amazon, Google, Microsoft as well as those “un-publicized” failures show seasoned veterans that there is probably another 2 years to go before full trust in Public Clouds is established with the necessary measures to vet Cloud Provider quality. Remember, one size does not fit all!

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC.

Holding the Fort against Virtualization – Targeted CPU Power

Imported from http://consultingblogs.emc.com/ published Apr 22 2011
After writing about the impact of CPUs in at least two previous blogs (Multi-Core Gymnastics in a Cloud World and Terabyte…well Everything Tera Actually! (Part 1- the Servers)) about a year ago now, I wanted to post an update to the theme.
It is still surprising for me to meet customers that wonder whether virtualization can possibly help their IT environment. They take the leapfrogging of CPUs in their stride, and miss the point that these leapfrogs are targeted at them in particular. Those “workloads” that could never be virtualized are filed into the “can’t – won’t virtualize” file and then, well, filed!
The rationale that underpinned that decision is not revisited as part of a long term strategy to increase datacenter efficiency or reduce costs – until finance is knocking on the door asking for wholesale cuts across the board.
Recently Intel released the Xeon E7 processor. Just when you thought Nehalem, with its 8 cores, was really reaching those high-end workloads – think databases and big-data computational applications – Intel has upped the ante.
This E7 is really targeted at those last bastions holding out against virtualization. With 10 cores per processor package, a simple four-processor server of 4 rack units becomes a roaring 40-core super server addressing 2TB of RAM. Major brands have moved to adopt this wickedly powerful processor – Cisco with its UCS B440 M2 High-Performance Blade Server, or the rack-mounted HP ProLiant DL980 with 80 processing cores and 2TB of RAM. Make no mistake, these are truly serious processing servers for enterprise workloads.
This is a significant change in the evolution of the x86 platform:
  • 2006 - dual-core processors at 65nm
  • 2009 - 4 cores on 65-45nm
  • 2010 - 6-8 cores on 32nm
  • 2011 - 10 cores on 32nm
Notice anything here? I see a concentrated effort by the entire x86 industry to finally bring out the full potential of that architecture to support the mass wave of virtualization sweeping across the IT landscape. There is a lot of talk about compliance, security and what-have-you, but the fact still remains that until very recently not all workloads could be comfortably virtualized.
With the E7, we move towards that magic 99% virtualization rate, with the 1% left for tricky extreme load systems that require some level of re-architecture to be able to fit onto VMware – replatforming.
By the way, Intel is not the only game in town either. AMD is also making first-rate processors: the current AMD Opteron 6100 ("Magny-Cours") with its 12 cores, and the one everyone is waiting for, "Interlagos", with 16 cores coming this year. This just underlines how serious the industry is about getting "everything" virtualized.
What does this all Mean for Virtualization and the Cloud?
Perhaps this does not sound like much, but measured against what "old x86 servers" could do, it really is remarkable. I recall from my own architecture days with Microsoft designing messaging systems for 60,000+ concurrent users. What a horrendous task that was. What came out were rack loads of servers, cables, KVM, network switches, and all that labeling work.
With Exchange 5.5 (that is going back a bit), we would have at least 60 mailbox servers for the 60,000 users – 1,000 users safely on a single server of 4 CPU and 128GB of RAM. I could probably get 20+ of those mailbox servers running on a single quad E7 processor system running VMware ESXi as a hypervisor. That means I could collapse perhaps 10 of those old racks with servers and cables into a single 4U server!
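As a quick sanity check on this kind of consolidation arithmetic, here is a minimal Python sketch. The servers-per-rack and VMs-per-host figures are loose assumptions taken from the paragraph above, not sizing guidance.

# Rough consolidation arithmetic - all ratios are illustrative assumptions.

users            = 60_000
users_per_server = 1_000                              # legacy Exchange 5.5 sizing from the text
legacy_servers   = users // users_per_server          # 60 physical mailbox servers
servers_per_rack = 6                                  # assumed legacy rack density
vms_per_e7_host  = 20                                 # legacy servers hosted as VMs per quad-socket E7 box

racks_before   = -(-legacy_servers // servers_per_rack)   # ceiling division
e7_hosts_after = -(-legacy_servers // vms_per_e7_host)

print(f"Legacy mailbox servers : {legacy_servers}")
print(f"Legacy racks (approx)  : {racks_before}")
print(f"Quad-socket E7 hosts   : {e7_hosts_after} x 4U")

The exact ratios will differ in practice, but the order-of-magnitude collapse is the point.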
This is a sobering thought. With the current generation of common commercial software running in most datacenters this range of consolidation is still possible. Intel and AMD are taking the x86 markets by storm. IT decision makers should examine the macro effect of their actions in the industry:
  • RISC systems are being attacked by large scale-out x86 systems
  • High-end availability features reserved by Intel and HP in the Itanium are creeping into the x86 lines
  • Applications based on proprietary operating systems, running on RISC-Unix and mainframes, are being made available on Linux/Windows and will run well on x86 systems, even virtualized
  • Hypervisor vendors are tuning and refining their ability to handle high-end workloads safely and still retain virtualization features of high availability and mobility
  • Consolidation is no longer limited to physical-to-virtual (P2V) but extends to virtual-to-virtual (V2V) on more capable hardware (what I referred to as the continuous workload consolidation paradigm)
As consolidation ratios have reached such high potential on the x86 platform, the powers that be have brought high-end reliability features into the x86 environment. Datacenters with critical business loads – think ERP and databases – could not previously have imagined moving to the lowly x86 platform, and certainly not in virtualized form.
That has just changed with a thunderclap. These systems compete well at all levels, and their pricing is vastly different from the prices set by the RISC/mainframe industry over the decades.
We are seeing equal improvements in our ability to exploit scale-out topologies, such as with Oracle RAC or EMC Greenplum with its massively parallel processing data warehouse database. Coding languages are also going the multi-threaded, scale-out route – even that last 1% of workloads could be virtualized.
The x86 processors are not just for servers you know! We are seeing this commodity chip being placed in all kinds of enterprise systems. EMC is using Intel technology heavily in its own storage arrays providing fantastic performance, reliability, and price efficiency. The need for FPGA or PowerPC chips to power storage arrays just dropped further.
Don't get me wrong, the non-x86 chips are great and chock-full of features. However, those features are being migrated into the x86 family. I really do envisage that all the features of the Itanium will be migrated into the x86 – and the Itanium was one hell of a workhorse, able to compete with the mighty processor families out there: SPARC, PowerPC, mainframe RISC, etc. It would not surprise me to see the Itanium come back as a new generation of x86 with a different name in a couple of years.
Why is this important for the CIO?
Seeing the transformative technologies coming out onto the market, CIOs are increasingly being exposed to the “you could do this” from the market and the “but, we shouldn’t do that” from slow internal IT organizations that are not well adapted to handle change.
I have never seen a time when the CIO has so needed to apply thoughtful strategies to drive through market efficiencies such as massive consolidation through virtualization, while simultaneously balancing the need to bring internal IT along with this mind-set change.
Internal IT needs to do some serious soul-searching. It can't simply stick its collective head in the sand, or "try out" technologies while the whole technology field moves on a generation.
I have indicated previously that the CIO has to create the visionary bridge between where the IT organization currently is, and where it “needs” to be to service the business, remain relevant and drive change rather than simply following and dragging its heels.
Where virtualization is concerned, I personally feel that it is necessary to get as many workloads as possible virtualized.
The so-called strategic decisions regarding servers and hypervisor platform are important as enablers, but the goal should be to get maximum abstraction of the workload from the underlying hardware. This is worth the cost and the pain. Once you are there, then you are truly free to exercise sourcing and bargaining power over the physical element suppliers making up your IT landscape.
However, many organizations are still stuck on what server is the best, which hypervisor should I take, what about Amazon, what about Microsoft? Well what about them? Does it not make more sense to rationalize and virtualize your environment to the maximum, so that you can move onto these higher level abstractions?
Whilst that is underway, the IT industry will have found solutions to other areas such as compliance and data locality/security that you can literally step into.
CIOs should seriously consider getting outside help to rapidly move the organization to virtualization on commodity hardware i.e. x86. Be aware that this platform can sustain almost all workloads that are in typical datacenters.
Don’t let large data numbers daunt you. Don’t let internal IT railroad you into doing the same old expensive slow IT as in the past. Don’t get sidetracked. You have friends in the organization – the CFO/CTO/CEO.
CFOs are notorious for instituting widespread change backed up by hard economics. CTOs can stimulate and create the demand patterns that can only be serviced through elastic virtualized environments – Private Clouds. They can balance the hard economic cost cutting with the need to have flexible on-demand pay-as-you-use IT. The CEO wants to ensure shareholder return, and effectively have a successful firm for all stakeholders concerned.
Make the move to the "100% Virtualized Environment". Push your vendors to ensure their solutions run virtualized. Push vendors to provide licensing that fits the pay-as-you-use consumption model. Remember, there is choice out there. Even for those notorious stacks such as ERP, database and messaging, push for flexible licensing – otherwise list the alternatives that are waiting for your business if they do nothing!

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC.

Virtualization Coming of Age

Imported from http://consultingblogs.emc.com/ published December 3 2010

Many of the enterprise customers we engage with are already asking how to take virtualization and the Cloud to the next level to attain a profound change in their core business. This is beyond simply being agile or having choice. This is about leveraging technology inherent in the virtualization paradigm and providing the same choice as in server virtualization to the consumer markets i.e. the choice of running whatever operating environment simultaneously on the same capable physical hardware platform.

Witness the recent announcement of VMware teaming with LG to sell Android phones capable of being used as a business and personal phone simultaneously. This is cool! This potentially takes VMware from running on hundreds of thousands of corporate devices to millions of consumer devices.

The idea of having multiple capable personalities and functions embedded within a single device has always been a dream of the consumer masses. This type of thinking inspired devices such as the Swiss Army knife.

This will greatly influence the design of processors for mobile devices, making them better and more powerful multi-processing devices than in the past. This in turn will lead to higher device convergence, whilst probably spawning a whole new generation of development environments that never need the actual target physical device nearby – much as we do with Web development these days (barring screen resolution issues, of course).

VMware gained this capability through one of its acquisitions, Trango, which was already working on such a project. This is a major sea change in this particular market. If, eventually, iPhones and Windows Mobile platforms are also virtualized, the next generation of mobile devices will drive convergence of a whole slew of technologies into a single physical device – iPhone-grade touch screens, large screen real estate, gyroscopic functionality, sports watches/applications, AMOLED, fashionable device formats, and the list goes on and on.

That’s the type of convergence I was referring to, spawned from simply allowing multiple mobile OS’s to run on a single physical device. We see the same in the server and desktop virtualization scene. There is much less discussion of which OS will run, as long as the features are there. The hardware+hypervisor will simply run it!

Consumers and business users now know this. The corporate secret is out. Expect demand patterns to shift faster and more aggressively towards choice of mobile operating systems and productivity applications regardless of the physical factor!

VMware's SpringSource acquisition heightens this sense of embedding Cloud functionality directly into the application stack and, frankly, puts existing standalone applications into question. That is already fuelling the charge by large traditional players such as SAP, Oracle and Siebel to have Cloud-spanning offerings covering all 'just-in-case it proves to be wildly popular' bets! That's the correct way to respond, I feel, as the writing is on the wall.

I mean this mobile hypervisor thing could be seriously big. Think of what it entails. The entire development ecosystem is affected. There could potentially be vAPI’s from VMware coming that might abstract many of the coding primitives in use today. There may be a standardization of languages and compilers or a wholesale move to hardware independent languages (read Java/Open Source).

The idea of using multiple mobile networks from the same device will initially require the ability to have multiple SIM cards operating. However, perhaps it becomes possible to "virtualize" the SIM itself. Then we can get away from the crazy idea that we need a SIM chip from our providers. Consumers could literally subscribe to a network, and the virtualized SIM could be pushed to the mobile device with its embedded chipsets. The call itself would be charged on the network it was traversing. Networks could be changed on the fly, and service/price hypercompetition would take on a whole new meaning.

The consumer would have the control, choice and agility to move directly away at a moment’s notice from one provider to another if the service/price is sub-par (as it frequently is these days).

Heck, the entire mobile OS could be pushed down to the device with its mobile hypervisor from the Cloud directly without any laptop/PC-like device being needed. This sounds like the Cloud itself, but goes further in that perhaps the Cloud end-user devices will come increasingly from those firms that are creating mobile and phone devices today.

This will likely see a scramble from traditional desktop/laptop/netbook manufacturers to the mobile/phone device markets to stay relevant. So called third-world or developing nations may be able in a single giant stride to overtake many of the current incumbents in the West/East technologically sophisticated markets!

CIOs should expect this avalanche of demand from both internal users and development groups clamoring to see how they can embed and leverage this functionality to grab greater market share for their respective organizations. Corporate users will begin to demand a single phone device that serves business and private needs. We see this already happening in the 'bring-your-own-laptop' wave, where private individual laptop/PC devices can host secured corporate desktops concurrently.

This brings the discussion of Private and Public Clouds to a whole new level. We are moving towards achieving sustained, durable competitive advantage with ‘sticky’ capabilities preventing or at least making it difficult for market incumbents to emulate the advantage without significant investment. Sounds like good old fashioned business sense is creeping back into the IT and technology worlds to give more bang for the buck.

Virtualization and the Cloud are again applying the market leveling economics and technologies that massively reduce barriers to entry and again put the consumer in the driving seat. Applications will be at the forefront of the charge to entice and engage the consumer. Slow moving firms and those not convinced by virtualization at scale and a complete move to that Cloud base may well find their market relevance diminishing!

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC.


The Role of Defragmentation in the Cloud – Releasing trapped value

Imported from http://consultingblogs.emc.com/ published November 21 2010

One of the questions that we get from clients moving to the Private Cloud is: do we still need to do things as we did in the physical world?

Defragmentation of the file systems (FS) within guest operating systems (OS) containerized in a virtual machine (VM) always comes up. From my own enterprise messaging and database background, this was a very important question to get answered upfront. There could be tremendous negative consequences from not doing proper housekeeping and performing defragmentation (either offline or online while service was running). This essentially represents trapped value to the business.

There are many great vendors out there in this VM defragmentation space, for example, Diskeeper's V-locity2 or Raxco's PerfectDisk. They make a good case for defragmentation, essentially pointing to the fact that:

  1. Many industry sources point to the negative impact of fragmentation
  2. Fragmentation increases the time taken to read/write a file
  3. Extra system workload resulting from fragmentation
  4. Free space consolidation is very important for improving write operations
  5. Fragmentation contributes to higher I/O bandwidth needs
  6. Resource contention for I/O by VMs
  7. VM disks perpetually growing even when deleting data

The fragmentation issue, whether of files or free space, has symptoms analogous to the performance issue VMware identified with misalignment of Guest OS partitions and indeed of VMFS itself. Essentially, much more unnecessary work is being done by the ESX host server and the corresponding storage elements.

Array and vSphere 4.1 features help reduce the impact of these issues through I/O coalescing and by utilizing array cache to bundle larger sequences of updates – contiguous writing – with EMC VMAX currently able to provide 1TB of cache. Multipathing tools such as EMC PowerPath/VE alleviate the increased I/O load through queue balancing and by utilizing all paths to the storage array concurrently.

Thin provisioning ensures 'Just-In-Time' space allocation to virtual disks. This is heavily enhanced in the array hardware with complementary technologies such as EMC's FAST to further optimize storage price/performance economics. This is also changing through the VMware vStorage APIs for Array Integration (VAAI), with vSphere offloading storage operations to, surprise surprise, storage tiers that simply do the job better.

However, these do not proactively cure fragmentation within the guest OS or indeed at the VMFS level.

Indeed, when we start thinking about environments with hundreds of thousands of VMs, such as in desktop virtualization using VMware Linked Clones, this issue needs to be tackled. Virtual disk compaction is an important element here: online compaction capability, space reclaimed, and trapped value released.

The ability to use FAST can support defragmentation scenarios by shifting the workload onto solid state drives (SSD) for the duration of the high I/O activity; the array then moves the corresponding sub-LUN elements back to the appropriate tier later. Many customers do this with scripts.

Essentially, using Storage vMotion, the VM could be moved to a datastore on high-performance disks and the guest OS's internal defragmentation tools run. Once completed, the VM is Storage vMotion'd back to its original datastore. That seems easy enough for small numbers of machines, but doing it continuously for large VM volumes does not scale to Cloud levels.
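
A minimal sketch of how such a script might be structured is below. The three helper functions are stubs standing in for whatever vSphere SDK or PowerCLI calls a given environment actually uses (they are hypothetical, not real API names), and the flow itself is illustrative rather than a supported procedure.

# Illustrative orchestration sketch only. The helpers below are placeholders
# for real vSphere SDK / PowerCLI calls in an actual environment.

def storage_vmotion(vm, datastore):
    print(f"Storage vMotion {vm['name']} -> {datastore}")        # placeholder

def run_guest_defrag(vm):
    print(f"Running in-guest defragmentation on {vm['name']}")   # placeholder

def defrag_on_fast_tier(vm, fast_datastore):
    home = vm["datastore"]
    storage_vmotion(vm, fast_datastore)      # move the VMDKs onto the SSD-backed tier
    try:
        run_guest_defrag(vm)                 # defragment while the fast tier absorbs the I/O
    finally:
        storage_vmotion(vm, home)            # always return the VM to its home datastore

def defrag_estate(vms, fast_datastore, batch_size=5):
    # Throttle so only a handful of VMs occupy the fast tier at any one time.
    for i in range(0, len(vms), batch_size):
        for vm in vms[i:i + batch_size]:
            defrag_on_fast_tier(vm, fast_datastore)

defrag_estate(
    vms=[{"name": "vm01", "datastore": "ds_sas_01"},
         {"name": "vm02", "datastore": "ds_sas_02"}],
    fast_datastore="ds_ssd_01",
)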

The whole area of scheduling defragmentation cycles across an entire virtual infrastructure Cloud estate is also no trivial task. Tools are needed, and the current generation of tools operates within the guest OS. VMFS also warrants an examination, although with the ability to utilize 8MB block sizes there is less fragmentation taking place at the VMDK level – but this is still worlds away from a self-defragmenting file system!

After all, in a busy Cloud environment, the datastores are heavily used. VMs are created and removed. This eventually causes fragmentation. Whether that is an issue for the Cloud environment – well it is still too early to say I believe.

My own view is that some of the defragmentation best practices of the past are still relevant, but they need to be updated for the current generation of applications. For example, Exchange 2000/2003 issues are different in scale from those in Exchange 2007/2010. It's the application stack that still counts, as that is what delivers service to end users. On the other hand, implementing thousands of defragmentation tools in guest OS VMs is also not my idea of fun, and the cost may well be prohibitive. Side effects, such as large growth in redo log files of any sort when defragmentation takes place, also need to be considered.

I'd like to see VMware create a defragmentation API integrated with the vStorage VAAI APIs for array awareness, much as they have done for anti-virus scanning with the VMsafe API. This would allow the defrag engines to hook into the capabilities of the hypervisor itself and have the array offload some of these tasks. It would also provide a consistent interface for vendors to design against, and defragmentation could then be a thing of the past – regardless of the guest OS running in the VM. The Cloud should just deal with it, when it is needed!
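
Purely as a thought experiment, here is what the contract for such a hypothetical defragmentation API might look like; no such VMware interface exists, and the class and method names below are invented for illustration only.

# Hypothetical interface only - sketching the contract the paragraph above wishes for.

from abc import ABC, abstractmethod

class DefragProvider(ABC):
    """What a defrag vendor might implement against a hypervisor/array offload API."""

    @abstractmethod
    def analyze(self, virtual_disk: str) -> float:
        """Return a 0.0-1.0 fragmentation score for the given virtual disk."""

    @abstractmethod
    def defragment(self, virtual_disk: str, offload_to_array: bool = True) -> None:
        """Consolidate blocks, offloading data movement to the array where supported."""

class LoggingDefragProvider(DefragProvider):
    # Toy implementation so the sketch runs end to end.
    def analyze(self, virtual_disk: str) -> float:
        print(f"analyzing {virtual_disk}")
        return 0.42

    def defragment(self, virtual_disk: str, offload_to_array: bool = True) -> None:
        where = "array-offloaded" if offload_to_array else "host-side"
        print(f"defragmenting {virtual_disk} ({where})")

provider = LoggingDefragProvider()
if provider.analyze("[ds_ssd_01] vm01/vm01.vmdk") > 0.3:
    provider.defragment("[ds_ssd_01] vm01/vm01.vmdk")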

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC.


VMware VCAP-DCD Datacenter Design Beta Exam & Course

Imported from http://consultingblogs.emc.com/ published November 21 2010

A couple of weeks back, I had the great pleasure of getting an email from VMware asking me to sign up for another dose of pain in the form of the VDCD410 Beta Exam.

For those of you not sure what that is – the full title is (deep breath) – 'VMware Certified Advanced Professional – Datacenter Design' exam. The short title is VDCD410, based on the published blueprint version 1.2.

Anyway, I got this notification while in the middle of delivering a demanding client engagement, so there was very little time to prepare for this thing.

The exam is voluntary, and I thought, let’s just have a little look. So after 2 days of studying hard, I entered the examination room with trembling knees, sweaty palms, and heart palpitations. There is quite some time needed to sign in for the exam. You undergo an 'empty your pockets, sign over your life, digital photo, prove who you say you are' life enhancing experience before being seated at the examination terminal!

Well, what an eye opener that was! 4 hours of testing, over 120 questions of just about every sort you can imagine (no I am not allowed to say what is in the exam).

We’ll have a look to see what the result is (fingers crossed) in the coming weeks. However, I was also scheduled to attend the ‘VMware vSphere: Design Workshop’ course the following week. Yes, I know that the course and exam order is wrong, but….

The course was interesting in that it was a discussion/workgroup/scenario-building led course with some very dry but informative material. There is no, I repeat, no hands-on work in this course, which I think is a great pity. The value of this course is that you get to meet people from many different VMware environments. I principally work in the enterprise segment (30,000+ users, with hundreds or thousands of servers and desktops and myriad applications). In the workshop there were administrators from the SMB segment. They have a very different, but pragmatic, view on how to design virtual infrastructure environments using VMware.

Some of the points that were very hotly debated were:

- Business requirements gathering vs. ‘just build it’ SMB approach

- Blades or traditional 2/4U servers

- Cooling vs. what cooling?

As I said, a refreshingly good discussion on the merits of each approach – that's what this course is all about. There were many administrators on the course and few actual designers, which would account for the disconnect in design approaches and the need to be highly practically focused in the SMB environment. Both have their place.

Regarding the exam versus the course: I actually think they are pretty close in their aim, but under no circumstances assume this course prepares you for the exam.

Still, I for one am really happy that there is finally a more design focused course. We have many many VMware administrators around the world, and let’s face it, if that is all that you are doing, then you are always going to be quicker at doing those things. Specialization of labour and all that!

From an EMC Consulting point of view, the world is not as easy as configuring a product. The aspirations of the business are the areas that lead us towards generating design models. The potential challenges of tomorrow concern us as well as the issues of the past. Amidst all this is the ever more complex world of mergers and acquisitions by VMware (and others) and how to steer a customer towards a product/technology set that is commensurate with their ambitions. Not easy.

For the exam, there is the usual good advice, read, read, read some more, and make sure that you have the hands-on practical knowledge.

This exam has a different focus from the VCAP-DCA (Gregg Robertson wrote an excellent blog on this): not every command line needs to be known, but knowing how to do things is very relevant.

From my own personal point of view, it was interesting to see how VMware suggests design should be approached, and to contrast that with the wider task of designing enterprise strategy for virtual infrastructure, virtual datacenters and, of course, the Cloud.

In any case, to all those others that did the beta VDCD exam, good luck!

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC.


The Significance of the VMware VCP4 Certification

Imported from http://consultingblogs.emc.com/ published Apr 21 2010

 

I have recently taken the VCP4 examination and, luckily, passed. Having been with EMC for only 6 months, I wanted to get some insight into how other people had prepared for this momentous event. Talking with some very experienced colleagues in VMware vSphere ESX4 environments, I realized that approaches ranged from brain cramming and deep hands-on experience all the way to 'understanding the concepts'.

 

Personally, I took a hybrid approach to learning, supplementing concept understanding with a 'how would one do the following?' questioning approach to make the material a little more interesting. This, combined with reading a huge list of VMware whitepapers, seemed to work for my style of learning.

 

I actually have many exams under my belt mainly in the academic and Microsoft fields of study. It was interesting for me personally to compare the various approaches used in Microsoft exams - because they have been around and evolving for a long time. Now, don't get me wrong here, this is comparing apples to pears, but the basic rationale behind doing the examination and how that gets one ready for the job was the key focus.

 

The Microsoft exams I used as reference were the Windows 2008 MCITP:EA/EMS series (Microsoft Certified IT Professional: Enterprise Administrator, and Enterprise Messaging Administrator with Exchange 2007). These exams used a combination of scenarios (somewhat long-winded in many cases, but surprisingly accurate in the field), technically focused questioning, and, my favorite, simulations of the actual product screens together with a scenario.

 

I am actually an infrastructure architect for Microsoft Windows 2008 environments and Citrix/VMware/Microsoft virtual infrastructures within large-scale datacenter environments of 50,000+ users. It is essential, therefore, to know the design boundaries of each environment, together with a holistic approach that takes process and people aspects into account.

 

After passing the VCP4 exam, I realized that it is really only an introduction to the topic of vSphere and virtualization. Perhaps I had set my expectations a little too high in terms of what one would learn, with a design/architect background in mind. What was interesting was that the VCP4 is surprisingly 'fit-for-purpose'. However, that purpose, as I see it, needed to be put into a better mental frame.

 

The VCP4 gives you the nuts-and-bolts understanding needed to run a reasonably sized ESX cluster environment using vCenter. There is some minor discussion of networking, storage, hardware and administrative processes, but this is vastly different from the in-depth grilling one gets in the Microsoft exams (which have focused exams on each of the major topics, e.g. one for networking and one for Active Directory Services).

 

Looking at this with my consultant hat on, I would say the VCP4 does exactly what it aims to do. Whether that is enough to get you 'fit-for-the-job' is another question. Meeting many different clients on a regular basis gives great insight into how the knowledge gleaned from this exam is used. I would go further and say that this is probably the first step in realising what virtualization can do for an organisation and getting energised around those capabilities. Further, it may well be the first step on the journey to the cloud, in that the first internal discussions about where virtualization is heading are initiated.

 

This plays out at many different levels in organisations. Administrators have a newfound confidence in their VMware vSphere activities. They are able to make those critical suggestions that allow an organisation to gather more latent value from a virtual infrastructure, and even technically 'spar' with consultants. However, they tend to fall short when a consultant asks them to abstract their knowledge to larger-scale operations.

 

Design authorities in organisations tend to be more focused on the 'limits+parameters' of the vSphere environment. This helps guide decisions on what can, and should not, be done when bringing new applications/virtual machines into service. There is certainly more awareness of the IT ecosystem at this level, but still difficulty in structuring this knowledge to scale to large service environments.

 

What interests me is that, looking at the service managers, there tends to be a good awareness that things that were difficult to do before can now be done rather quickly and easily. This is the level at which one starts to hear about the ability to manage at scale, support lines of business on an operational basis, and indeed questions related to the IT Services Value Chain – in other words, some of the key value promises of the cloud.

 

Working further up the chain of command, major themes such as business IT strategy alignment, overall security of services, compliance, governance and investment value extraction strategies start to be elicited. These are the true values of being able to leverage cloud-based ICT services supporting the basic rationale and business strategy of an organisation.

 

All the traditional themes of service management design, service-oriented architectures and full-scale virtualization at any and every level tend to drive value through all layers. In some cases, the virtual solution has intrinsic value that needs to be extracted and highlighted for service consumers. For example, a virtual desktop, whilst being of high interest to IT shops, requires some explanation for end users so that the additional value and features can be highlighted and appreciated.

 

To that end, it appears that there is a rather large gap between the nuts-and-bolts information of the VCP4 and the VCDX designer-oriented examinations. A series of smaller, more topic-focused examinations in between would ensure that the level of awareness needed to be 'fit-for-the-job' is addressed. For example, having a VCP4 does not mean that you are well versed in the dark arts of networking. A network engineer is better able to answer those types of questions, but does not necessarily know anything about a vSphere Virtual Distributed Switch environment. The same can also be said for vCenter as a powerful console to the virtual estate, and for operations management disciplines in general.

 

These specific levels of topic-focused examinations also help to stimulate creative discussions around some of the practical issues of scaling cloud infrastructures, particularly if one is also a cloud provider for other organisations.

 

There is a lot to be said for experience, of course; however, managing and structuring knowledge of virtual service operations will allow organisations to extract far more value and be more nimble in service delivery.

 

I understand from colleagues and some of the recent blogs here that VMware has indeed released certifications partially addressing the gap between administrators and designers, and that is certainly welcome news for all!

 

I would be interested to get some feedback on some of the areas that other fellow VCP4'ers feel would warrant a specific examination/certification.

 

Jas Dhalliwal

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC.