Organizational Form

Desktop of the Future? What Desktop? - Shake-up of the Digital Workspace!

I have been saying for some time that CIOs and CTOs should be taking a serious look at the "desktop" of the future: what types of applications are needed, where they are sourced from, the form factor, and the impact on workplace design.

Tablets are already firmly entrenched in consumer hands, and indeed in the hands of professionals. However, they have yet to make a real dent in the estate of desktop/laptop devices and the associated workplace implications.

Google Glass perhaps represents the next evolution of the desktop->laptop->netbook->tablet saga. This not only challenges the existing notions of mobility, but adds to the rich experience that you and I are demanding.

 

(image: Google Glass patent drawing)
This is not necessarily a new idea. We have been seeing this in sci-fi movies for decades. Indeed, Sony also had a similar ambition around a decade ago.

However, this design, with modern materials and the massive backing of Google, may well succeed at scale where previous designs did not.

 

Add to that the ability to link into Cloud-based backends, to stream applications wirelessly as if by magic, and surely to provide input devices of corresponding smartness - perhaps something tracking eye movements, as we already see for disabled individuals.

The really interesting feature involved is augmented reality for everyone. Google has been investing heavily in technology for mapping the real world virtually in its 3D Google Maps. This investment is really going to pay off by providing a compelling lead.

Augmented reality

The practical application of information gathered all around us, and mapped onto our line of vision will increase our ability to navigate an increasingly complex world.

Such a concept could easily be translated to our business or indeed consumer worlds. Why could a stock trader not look around, pull in relevant information about world events, annotate verbally on the impact on stock price trends, and then have an automated trading engine supporting the actual trading desk?

A consumer could walk into a shopping center and automatically be presented with details of competing offers in the area or online, trace the production chain to check an article's green credentials, and indeed order online from the very shop the person is currently standing in - reducing the need to keep large quantities of stock at hand. And the effect on advertising... well!

(image: Minority Report billboard)
An office or knowledge worker would literally be able to grab data, manipulate it, enter data and essentially carry out any other task without the need for the traditional stuffy workplace environments we are surrounded by. You never know, perhaps we could end up with workplaces that look like... (film: Matrix Trilogy - Zion control room)

Questioning "Accepted Wisdom"

Traditional ideas of the workspace environment provided for employees and partners should be seriously challenged. It is not really about whether Windows 8 or Ubuntu will be rolled out. It is very much about providing a safe, secure working environment, protecting the data assets of an organization and increasing efficiency dramatically.

  • Challenge the idea of a desk! Find out what types of work environments are conducive to productivity for your staff, ranging from highly creative to highly task-oriented individuals.

  • Understand how the walls of the organization can be made safely permeable. How can the technology enable an individual to be out there - physically, socially, online etc? What if voice recognition and real-time language translation capabilities are added?

  • What application classes are being used? Where are they being consumed, in which form, and are they sourced on-premise or from a SaaS/Public Cloud provider?

  • Understand the impact on the physical workspace, its constituent parts, and how work and play can be mixed. Cost control and the creation of creative workplaces can benefit dramatically from this mindshift!

The way the physical world and information are being represented as digital assets that can be manipulated without regard to time and space constraints represents a dramatic opportunity for organizations willing to challenge the current accepted wisdom.

Why is this important for the CIO, CTO & CFO?

CIO/CTOs are responsible for the overall digital enterprise architecture and workplace environment. The CFO has the fun work of ensuring sufficient funding whilst maintaining cash flow and a generally healthy financial posture. 

Rather than simply accepting what your IT staff are telling you regarding options for workspace computing based on a legacy view of software and physical computing devices, challenge what could be done by leaping forward.

  • How would this improve the financial situation regarding licensing, physical desktop estate, building space, furniture and employee productivity?

  • Understand what is needed from a human capital management perspective. Legislation and health may well be key drivers or blockers. What would such a workplace mean for talent management and for making your organisation a first-class address for potential graduates/employees?

  • From an enterprise architecture perspective (a middle up-down architectural approach), what would be needed to service such a workplace environment, and what are the implications for business stakeholder aspirations as they are currently expressed?

  • Security, physical and digital, needs to be more pervasive but also more invisible. Data assets may need to be stored in more secure ways in corporate datacenters. Databases are fine, but database firewalls, controlling who is doing what and why, would be required. Ditto for other areas powering the enterprise.
  • Database silos using workload stacking and virtualization make sense. However, combining them into the general compute environment together with other general data assets is probably not secure enough. A level of physical security should be ensured.
  • Understand if current investment in virtual desktop infrastructure (VDI) makes sense. Does it capture new digital realities? Understand new demands on mobility. Are you investing in something that literally will disappear?

Many organisations are experimenting at scale with full mobile (as we understand it today - usually tablets, smart phones, mobile video conferencing) technology.

They are experiencing the implications for the cost of providing IT capability, the impact on how employees are exploiting the technology, and the overall performance of the organisation. Subtle areas such as training, motivation and the social contact implications of working away from a "traditional office environment" need to be factored in.

This is a great chance to innovate and basically get the organization in top form for globalisation, where labour factor costs are continually driving employment further offshore (until it comes full circle - the earth is indeed round and not flat).

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by my current employer and does not necessarily reflect the views and opinions of my employer.

The Shape of Things to Come!

A lot of what I do involves talking with thought leaders from organizations keen to transform how they do business. In some cases, they espouse thoughts moving along general industry or marketing lines. However, in other cases, there is real innovative thought taking place. I firmly believe innovation starts with questioning the status quo.

On the one side we are bombarded by Intel x86 as the ultimate commodity processor offering everything one could possibly imagine, and on the other by the public cloud as the doom of in-house IT centers. It is incumbent on all in this industry to think beyond even the cloud as we know it today.

Questioning "Datacenter Wisdom"

This blog entry is entitled The Shape of Things to Come with a clear series of ideas in mind:

  • Systems-on-a-Chip (SoCs) are getting very powerful indeed. At what point do they become so powerful that one represents the same order of magnitude as an entire hyperscale datacenter from Google or Amazon with a million machines inside?

  • Why does in-house IT have to move out to the cloud? Why could hyperscale clouds not be built up from capacity that organizations are already putting in place? This would be akin to the electricity grid as the transport for capacity created from multiple providers. Borrowing capacity could be done in-industry or across-industries.

  • Why is there disaggregation of all components at a physical datacenter level (CPU, RAM, storage, networking etc) rather than having assembly lines with appliances/constructs hyper-efficient at a particular task within the enterprise portfolios of services and applications?

  • Why are servers still in the same form factor of compute, memory, networking and power supply? Indeed why are racks still square and datacenter space management almost a 2-dimensional activity? When we have too many people living in a limited space we tend to build upwards, with lifts and stairs to transport people. Why not the same for the datacenter?

I'm not the only one asking these questions. Indeed, the next wave of physical manifestation of these new concepts is taking place in the industry, albeit slowly. I wanted to share some industry insight as examples to whet the appetite.

  • At Cornell University, a great whitepaper on cylindrical racks using 60GHz wireless transceivers for interconnects within the rack shows a massively efficient model for ultrascale computing.

  • Potentially the server container would be based on a wheel, with servers as cake-slice wedges plugged into the central tube core. Wheels would be stacked vertically. Although they suggest wireless connectivity, there is no reason why the central core of the tube could not carry power, networking and indeed coolant. Indeed, the entire tube could be made to move upwards and downwards - think of tubes kept in fridge-like housings (like in the film Minority Report!)

  • One client suggested that CPUs should be placed into ultracooled trays that can use the material of the racks as conductors and transport to other trays full of RAM. We do this with hard disks using enclosures. Indeed Intel does 3D chip stacking already!
    • Taking the Intel 22nm Xeons with 10 cores or indeed Oracle's own SPARC T5 at 28nm and 16 cores as building blocks
    • A 2U CPU tray would allow, say, 200 such processor packages. This is an enormous capability! For the SPARC T5 this would be 3,200 cores, 25,600 threads and roughly 11THz of aggregate clock speed (see the sketch after this list)!
    • Effectively, you could provide capacity on the side to Google!
    • A RAM tray would basically allow you to provide 20TB+ depending on how it is implemented (based on current PCIe based SSD cards).
  • Fit-for-purpose components for particular workloads as general assembly lines within an organization would fit in well with the mass-scale concepts that the industrial and indeed digital revolutions promoted.
    • If we know that we will be persisting structured data within some form of relational database, then why not use the best construct for that. Oracle's Engineered Systems paved the way forward for this construct.
    • Others are following with their own engineered stacks.
    • The tuning of all components and the software to do a specific task that will be used for years to come is the key point!
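
As a rough check on the CPU tray numbers above, here is a minimal back-of-the-envelope sketch in Python. It assumes the published SPARC T5 figures of 16 cores, 8 hardware threads per core and a 3.6GHz clock; the 200-packages-per-tray density is the speculative figure from the bullet above, not a shipping product.

```python
# Back-of-the-envelope aggregation for a hypothetical 2U CPU tray.
# Assumptions: SPARC T5 package = 16 cores, 8 threads/core, 3.6 GHz clock;
# 200 packages per tray is the speculative density from the text.
PACKAGES_PER_TRAY = 200
CORES_PER_PACKAGE = 16
THREADS_PER_CORE = 8
CLOCK_GHZ = 3.6

cores = PACKAGES_PER_TRAY * CORES_PER_PACKAGE      # 3,200 cores
threads = cores * THREADS_PER_CORE                 # 25,600 threads
aggregate_thz = cores * CLOCK_GHZ / 1000.0         # ~11.5 THz of aggregate clock

print(f"cores: {cores}, threads: {threads}, aggregate clock: {aggregate_thz:.1f} THz")
```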

So the technical components in this radical shake-up of the datacenter are materializing. We haven't even started to talk about some of the work happening in material science providing unparalleled changes in CPUs (up to 300GHz at room temperature), or non-volatile RAM totally replacing spinning disk and possibly SSD and DRAM.


Why is this important for the CIO, CTO & CFO?

Customers typically ask whether they should move everything out to cloud providers such as Google/Amazon or private cloud hosters such as CSC/ATOS/T-Systems. Well looking at the nexus of technological change that is almost upon us, I would say that at some level it might make sense to evaluate the mix of on-premise and off-premise resource.

The Cloud is effectively a delivery model - some applications such as email clearly can be in the public cloud, bearing in mind privacy issues. However, the capabilities an organization needs to thrive and exploit market forces, as expressed in its Enterprise Architecture, can be delivered in other ways.

  • Server virtualization relies on workloads not taking all the resources of a physical server. You should be questioning why the software, the most expensive component, is not being used to its maximum. Solving server acquisition costs alone does not reduce costs for you in a meaningful way.

  • Entertain the idea that with acceleration at every level of the stack, information requests may be serviced in near-realtime! The business should be asking what it would do with that capability? What would you do differently?

  • Datacenter infrastructure may change radically. It may well be that the entire datacenter is replaced by a tube stacked vertically that can do the job of the current football field sized datacenter. How can you exploit assembly line strategies that will already start to radically reduce the physical datacenter estate? Oracle's Engineered Systems are one approach for this for certain workloads, replacing huge swathes of racks, storage arrays and network switches of equipment.

  • Verify if notions of desktops are still valid. If everything is accessible with web based technologies, including interactive applications such as Microsoft Office, then why not ensure that virtual desktops are proactively made obsolete, and simply provide viewing/input devices to those interactive web pages.

  • Middleware may well represent a vastly unexplored ecosystem for reducing physical datacenter footprints and drastically reducing costs.
    • Networking at 100+Gbps already enables bringing your applications/web powered effective desktops with interaction to the users' viewing devices wherever they are.
    • Use intra-application constructs to insulate from the technical capability below. Java applications have this feature built-in, being cross platform by nature. This is a more relevant level of virtualization than the entire physical server.

  • Security should be enabled at all layers, and not rely on some magic from switch vendors in the form of firewalls. It should be in the middleware platforms to support application encapsulation techniques, as well as within pools of data persistence (databases, filesystems etc).

Enterprise architecture is fueling a new examination of how the business defines the IT capabilities it needs to thrive and power growth. This architecture is showing a greater reliance on data integration technologies, speed to market and indeed the need to persist greater volumes of data for longer periods of time.

It may well be incumbent on the CIO/CTO/CFO to pave the way for this brave new world! They already need to be ensuring that people understand that what is impossible now, technically or financially, will sort itself out. The business needs to be challenged on what it would do in a world without frontiers or computational/storage limitations.

If millions of users can be serviced per round (rather than square) meter of datacenter space using a cylindrical server tube wedge/slice - why not do it? This is not the time for fanatics within the datacenter who railroad discussions towards what they are currently using - or who provide the universal answer "server virtualization from VMware is the answer, and what is the question?".

Brave thinking is required. Be prepared to know what to do when the power is in your hands. The competitive challenges of our time require drastic changes. Witness what is happening in the financial services world, with traders being replaced by automated programs. This requires serious resources. Changes in technology will allow it to be performed effortlessly, with the entire stock market data kept in memory and a billion risk simulations run per second!

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by my current employer and does not necessarily reflect the views and opinions of my employer.

Inflexibility of Datacenter Culture - 'The way we do things round here' & Engineered Systems

With a focus on large enterprise and service provider datacenter infrastructures, I get the chance to regularly meet with senior executives as well as rank and file administrators - top-2-bottom view.

One of the things that has always struck me as rather strange is the relatively "inflexible" nature of datacenters and their management operations.

As an anecdotal example, I recall one organization with a heavy focus on cost cutting. At the same time the datacenter management staff decided that they would standardize on all grey racks from the firm Rittal. Nothing wrong here - a very respectable vendor.

The challenge arising was:

  • The selected Rittal racks at that time cost approximately 12,000 Euro each
  • The racks that came from suppliers such as Dell, Fujitsu, Sun etc were around 6,000 Euro each

See the problem? A 50% saving thrown literally out of the window because someone wanted all grey racks. When we are talking about a couple of racks, that is no big deal. With, say, 1,000 racks we are looking at an overspend of 6 million Euro - before anything has even been put into those racks!
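
The arithmetic is trivial, but worth making explicit; a minimal sketch using the rack prices quoted above:

```python
# Overspend from standardizing on the more expensive racks.
# Prices and rack count are the illustrative figures from the text above.
rittal_rack_eur = 12_000     # approximate price of the standardized rack
vendor_rack_eur = 6_000      # typical rack bundled by the server vendors
rack_count = 1_000           # illustrative estate size

overspend = (rittal_rack_eur - vendor_rack_eur) * rack_count
print(f"Overspend: {overspend:,} EUR")   # Overspend: 6,000,000 EUR
```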

Upon asking why it was necessary to standardize the racks, the answers I got were:

  • independence from server manufacturers
  • create rack and cabling rows before servers arrive to facilitate provisioning
  • simpler ordering
  • perceived "better-ness", as enclosures are a Rittal specialization

Sounds reasonable at first glance - until we see that racks are all engineered to support certain loads, and are typically optimized for what they will eventually contain. Ordering was also not really simpler, as the server vendors had made that a "no brainer". The perception of better quality was not validated either - just a gut feel.

The heart of the problem as came out later was that the datacenter would benefit from having everything homogenous. Homogenous = standardized for datacenter staff.

The problem with this is that datacenters end up not flexible at all; they focus on homogeneity and ultimately cost the business financing them a lot of money.

In an age where flexibility and agility means the literal difference between life and death for an IT organization, it is incumbent on management to ensure that datacenter culture allows the rapid adoption of competitive technologies within the datacenter confines.

Standardized does not mean that everything needs to be physically the same. It means having processes for dealing with change in such a way that it can be effected quickly, easily and effectively to derive key value!

I have indicated previously the recent trend of CIOs reporting to CFOs; that reporting line would have provided financial stewardship and accountability in this case - getting staff and managers to really examine their decision in the light of what was best for the organization.

Questioning "Datacenter Wisdom"

The focus on homogenous environments has become so strong that everything is being made to equate to a zero-sum game. This happens in cycles in the industry. We had mainframes in the past, then Unix, Linux (which is basically Unix for x86), Windows - and the latest is VMware vSphere and all-x86!

Don't get me wrong here - as a former EMC employee - I have the greatest respect for VMware and indeed the potential x86 cost savings.

What is a concern is when this is translated into "strategy". In this case the approach has been selected without understanding why! It is a patch to cover profligate past spending, in the hope that magically all possible issues will be solved.

After all - it is homogenous. All x86, all VMware, all virtualized. Must be good - everyone says it is good!

See what I mean - the thinking and strategizing has been pushed to the side. Apparently there is no time to do that. That is really hard to believe, as this is one of those areas that fall squarely into the CIO/CTO/CFO's collective lap.

There are other approaches, and indeed they are not mutually exclusive. Rather they stem from understanding underlying challenges - and verifying if there are now solutions to tackle those challenges head-on.

Why is this important for the CIO?

At a time of crisis and oversight, it is incumbent on the CIO to question the approach put on his/her table for Datacenter Infrastructure transformation.

The CIO has the authority to question what is happening in his/her turf.

At countless organisations, I have performed strategic analysis of macro challenges mapped to the IT infrastructure capability of an organization to deal with those changes. Time and again, in discussions with the techies and managers (who were from a technical background but seemed to struggle with strategy formulation itself), it was shown that the marginal differences in technologies were not enough to justify the additional expenditure - or that there were other approaches.

Engineered Systems, in Oracle parlance, are one such challenge. They do not "fit" the datacenter culture. They can't be taken apart and then distributed into the slots that are free in racks spread over the datacenter.

From a strategy perspective, a great opportunity is not being exploited here. Engineered systems, such as Exadata, Exalogic, SPARC SuperCluster, Exalytics and the Oracle Database Appliance, represent the chance to change the datacenter culture and indeed make the whole datacenter more flexible and agile.

They force a mindset change - that the datacenter is a housing environment containing multiple mini-datacenters within it. Those mini-datacenters each represent unique capabilities within the IT landscape. They just need to be brought in and then cabled up to the network, power, cooling and space capabilities of the housing datacenter.

There are other assets like this in the datacenter already - enterprise tape libraries providing their unique capability to the entire datacenter. Nobody tries to take a tape drive or cartridge out and place it physically somewhere else!

Engineered Systems are like that too. Take Exadata as an example: it is clearly assembled and tuned to do database work with the Oracle Database 11gR2 platform, and it is tweaked to do that extremely well. It breaks down some of the traditional barriers between datawarehouse and OLTP workloads and indeed allows database workloads to be "stacked".

Taking the idea of what a datacenter really should be (facilities for storing and running IT infrastructure) and being flexible - Exadata should literally be placed on the floor, cabled to the main LAN and power conduits and the database infrastructure platform is in place. After that databases can be created in this mini-Datacenter. The capability is literally available immediately.

Contrast this with creating lots of racks in rows where it is not certain what will be in those racks, putting VMware everywhere, adding lots of SAN cabling as I/O will always be an issue - and then spending ages tuning performance to make sure it all works well.

The CIO should identify this as a waste of time and resources. These are clever people who should be doing clever things for the benefit of the organisation. It would be similar to buying a car to drive around in versus getting the techies to buy all the components of the car and trying to assemble it themselves.

Taking an x86/hypervisor route to create many virtual machines, each running a separate database, loses the value inherent in Exadata and makes no real sense in the case of databases.

The CIO can use valuable organizational knowledge gained over many years regarding the functions the business needs. If, as in this example, it is the ability to store/retrieve/manage structured information at scale - the answer should literally be to bring in that platform and leverage the cascading value it provides to the business.

Neither x86 nor a certain operating system is "strategically" relevant - Exadata is a platform, and normal DBAs can manage it using tools they already know. This mini-datacenter concept can be used in extremely effective ways and supports the notion of continuous consolidation.

CIOs can get very rapid quick-wins for an organization in this way. Datacenter infrastructure management and strategy should be considered in terms of bringing in platforms that can do their job well with software and hardware well tuned to run together. Further, they should reduce "other" assets and software that is needed.

Exadata does away with the need for a SAN switch/cabling infrastructure - it encapsulates the paradigms of virtualization, cloud and continuous consolidation. This will drive deep savings and allow value to be derived rapidly.

Challenge datacenter ideas and culture in particular. Agility requires being prepared to change things and being equipped to absorb change quickly!

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by Oracle and does not necessarily reflect the views and opinions of Oracle.

A Resurgent SPARC platform for Enterprise Cloud Workloads

Fujitsu has just announced that they have taken the crown in Supercomputer performance breaking past the 10 petaflop barrier. That is over 10 quadrillion operations a second. Seriously fast.

Just when we thought that Intel/AMD and x86 would take over the world ;-) this beauty came along. For those interested in the speeds and feeds of the K (Kei) supercomputer: 22,032 four-socket blade servers in 864 server racks, with a total of 705,024 cores!
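
The core count follows directly from those blade and socket figures, assuming the 8-core SPARC64 VIIIfx processors used in the K computer; a quick sketch:

```python
# K computer core count from the configuration quoted above.
blades = 22_032           # four-socket blade servers
sockets_per_blade = 4
cores_per_socket = 8      # SPARC64 VIIIfx

total_cores = blades * sockets_per_blade * cores_per_socket
print(total_cores)        # 705024
```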

This is a Supercomputer with specific workload profiles running on it. However, looking at the scale of the infrastructure involved, we are basically looking at the equivalent of multiple large-scale Internet Cloud providers in this one construct.

Traditional Cloud providers may well find themselves with a new competitor: the HPC Supercomputer crowd. Supercomputers are expensive to run, but they have all the connectivity and datacenter facilities that one needs.

Clearly this is a departure from the Linux variants that are currently ruling the virtualization roost like VMware, Citrix with Xen, RedHat with KVM, Oracle VM with Xen (and their Virtual Iron acquisition - one of the largest Xen based Cloud providers). Now we also have Solaris back into the game with its own take on virtualization - Solaris Containers. All of this is probably more focused on enterprise workloads - think BIG DATA, think ERP/Supply Chain/CRM!

 

What does this all Mean for Virtualization and the Cloud?

Currently most thinking for Clouds centers around the marketing of the largest players in the market. Think Amazon, Google for public clouds, and then the extensive range of private cloud providers using underlying technologies based on x86 hypervisors.

Many of the reasons for this scale-out strategy with virtualization centered around getting higher utilization from hardware as well as gaining additional agility and resiliency features.

High end mainframes and high end Unix systems have had resiliency baked in for ages. However this came at a price!

The Solaris/SPARC union particularly within large supercomputer shops provides an additional player in the market for enterprise workloads that still need scale-up and scale-out in parallel. This is clearly not for running a Windows SQL server, or a Linux based web server.

However, massive web environments can easily be hosted on such a construct. Large, intensive ERP systems could benefit, providing near-realtime information and event-response capabilities. One could easily imagine a supercomputer shop providing the raw horsepower.

As an example, the recent floods in Thailand are causing a huge headache for disk drive shipments worldwide. Linking an ERP system with big data analytics regarding the risk to supply chains based on weather forecast information as well as actual current events might have allowed a realignment of deliveries from other factories. That simulation of weather and effect on risk patterns affecting supply can certainly be performed in such a supercomputer environment.

 

Why is this important for the CIO?

When thinking about the overall Corporate Cloud Strategy, bear in mind that one size does not fit all. x86 virtualization is not the only game in town. A holistic approach based on the workloads the organization currently has, their business criticality and their ability to shape/move/transform revenue is the key to the strategy.

An eclectic mix of technologies will still provide a level of efficiency to the organization that a simple infrastructure-as-a-service strategy can not hope to reach.

Simply sitting in a Cloud may not be enough for the business. Usable Cloud capacity when needed is the key. This provides real agility. Being able to outsource tasks of this magnitude and then bring the precious results in-house is the real competitive advantage.

Personally, I am not quite sure that enterprises are quite ready to source all their ICT needs from a Public Cloud Provider just yet. Data security issues, challenges of jurisdiction and data privacy concerns will see to that.

That being the case, it will still be necessary for CIO/CTOs to create the IT fabric needed for business IT agility and maintain the 'stickiness' of IT driven competitive advantage .

Keep a clear mind on the ultimate goals of a Cloud Strategy. Cost efficiency is important, but driving revenue and product innovation are even more critical. A multi-pronged Cloud strategy with a "fit-for-purpose" approach to infrastructure elements will pay in the long run.

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC.

Exchange Server 2010 – Keep In-house or BPOS?

Imported from http://consultingblogs.emc.com/ published May 28 2011

Exchange Server 2010 (E2K10) continues to be a baffling phenomenon. We have a raft of features to make E2K10 easier to manage when running at large scale. Integration with the Microsoft ecosystem (ADS, SharePoint, Outlook/Office) is of course excellent. We even hear with the use of the new DAG (replicated mailbox databases) that backups are a thing of the past – don’t need to backup E2K10 apparently.

I have worked with all versions of Exchange, including when it was still MS Mail 3.x. During the years I have been a big fan of this messaging system – mainly due to its integration ability and use of database technology (well JET actually) to provide many features that were novel at the time of introduction. E2K10 is no exception. However, with the rise of the Cloud, and services such as Microsoft’s Business Productivity Online Suite - BPOS or other hosted Exchange service providers, the messaging is becoming increasingly unclear.

What do I mean with that last statement? Well, take a look at the short list of things I hear regularly with clients:

  • Don’t need to backup E2K10 – Microsoft told me so
  • Don’t need fast disks anymore – Microsoft told me so
  • Messaging is not core to my business – will outsource it
  • What added value does the internal IT provide that the Cloud offerings of Exchange cannot provide?
  • Cheaper to host Exchange mailboxes with Microsoft BPOS or another Service Provider

Well having seen in many large organizations what happens when the eMail service is not available, I would argue that messaging services are critical. Indeed, the more integration one has with other upstream applications that utilize eMail, the greater the dependency on a reliable environment. This would indicate that messaging services are core to the business, and indeed may be tightly linked to new service offerings.

The idea of not backing up data, while certainly very attractive, is a little off the mark. There are other reasons for backing up data than simply to cover the “in case” scenario, including compliance, single-item recovery, litigation amongst others that require some idea of preservation of historic point-in-time copies of the messaging environment.

However, what about the last points regarding cost, and it being more effective to host Exchange with Microsoft directly? Well, this is really a bit of a sensitive topic for most administrators and indeed organizations. One of the reasons that Exchange is expensive is that it simply could not, in an easy fashion, cover the needs of the organization in terms of ease of administration, scalability, infrastructure needs, reliability and indeed cost. It does seem to me that Microsoft itself may well be partially responsible for the “high cost” of messaging services.

Why is this Relevant for Virtualization and the Cloud?

Well, many of the cost elements of Exchange environments in particular relate to the enormous number of dedicated servers that were required to host the various Exchange server roles. The I/O profile of the messaging service was also not very conducive to using anything less than high-end disks in performance-oriented RAID groups.

Administration for bulk activities such as moving mailboxes, renaming servers/organizations, backup/restore and virus scanning were not particularly effective to say the least.

Don’t get me wrong, Exchange 2010 is a massive improvement over previous versions. I would put it akin to the change from Exchange 5.5 to Exchange 2000. The new PowerShell enhancements are great, and finally we are getting better I/O utilization allowing us to finally use more cost-effective storage.

Where it starts to all go wrong is when Microsoft starts to lay down support rules or gives out advice that goes against the prevailing wisdom of seasoned administrators:

  • Virtualization on Hyper-V is supported, whilst other hypervisors need to be in their Server Virtualization Validation Program (SVVP)
  • Certain advanced functions such as snapshots, mixing of Exchange server roles in a VM and certain vCPU:pCPU ratios are not supported
  • Low performance disks are fine for messaging functions – what about backup/restore/AV scanning/ indexing etc?
  • Still no flexible licensing options that allow for “pay-as-you-use” or allows cost savings from multi-core processors

Never mind the fact that there are thousands of organizations that have successfully virtualized their Exchange environments using VMware, saving serious amounts of money. Never mind that these organizations are enterprise class, and run their servers at high utilization levels receiving millions of emails daily, whilst running hourly backups and daily virus scans. Never mind that most tier-1 partners of Microsoft offer qualified support for features such as snapshots for rapid backup/recovery.

Why then is Microsoft “scare mongering” organizations into moving to BPOS – to save money, no less? The fact is that there are very, very few organizations that truly know the cost of their eMail environment. Therefore, how can one say that it is too expensive to do eMail in-house?

The basis for calculating business cases also varies wildly. It is very difficult to put a price on the cost of operations for messaging environments – even a messaging team is not 100% utilized – and then to spread this across the total number of mailboxes.

Indeed the cost of a mailbox per month seems to me not to be granular enough. What is the cost of a message? Who pays for inter-system messages? What about the cost of mailbox storage per month? What is the “true” cost per mailbox per month?
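
As a minimal sketch of why that granularity matters, the figures below are entirely hypothetical placeholders; the point is only that a single blended "cost per mailbox per month" hides the message, storage and operations components that a metered private cloud could expose separately.

```python
# Hypothetical monthly cost breakdown for an in-house eMail service.
# Every figure here is an illustrative placeholder, not a benchmark.
mailboxes = 10_000
infrastructure_eur = 40_000       # servers, storage, licences per month
operations_eur = 25_000           # allocated share of the messaging team
storage_gb = 50_000               # total mailbox storage held
storage_eur_per_gb = 0.30
messages_per_month = 12_000_000   # internal + external messages

storage_eur = storage_gb * storage_eur_per_gb
total_eur = infrastructure_eur + operations_eur + storage_eur

print(f"blended cost per mailbox/month: {total_eur / mailboxes:.2f} EUR")
print(f"storage cost per mailbox/month: {storage_eur / mailboxes:.2f} EUR")
print(f"cost per message:               {total_eur / messages_per_month:.4f} EUR")
```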

The private cloud, and 100% virtualization of Exchange server in particular, is a chance that most large companies should not really pass by so easily. It is the perfect application to verify the cloud assumptions about elasticity, on-demand and metered usage to get the “true” cost of eMail services. As it is so well understood by internal administrators, a company can experience first-hand:

  • massive reduction in server resources needed with virtualization
  • resource metering per user per billable time period
  • billing systems alignment to cost of eMail services per user per month
  • operational process alignment for the Private Cloud way of doing things
  • eGRC can be applied and enforced with the necessary controls/tools
  • infrastructure business intelligence for zeroing in on further cost consolidation areas
  • provide basis for your internal Private Cloud – complete with self-service portals and end-2-end provisioning

I always say that eMail is in some ways easier to virtualize than high-end database environments such as SAP. Too much time is lost in the difficulties of the latter, and the organization gets too little of the Cloud's benefits as a result. The time-2-value and order-2-cash processes take too long with that approach.

With Exchange virtualization you can literally get started in a week or two once the infrastructure is on the ground – there are plenty of blueprints that can be utilized.

Why is this important for the CIO?

The CIO has the responsibility for setting IT direction in an organization. Simply following the scare-mongering of either vendors or outsourcing service providers will inevitably force you to move what may be a vital function for new product development out of your organization. Aside from this, there are many issues still regarding data confidentiality, compliance, and risk concerns that need to be tackled.

Personally, I would advise large enterprise shops to look at virtualizing their entire Microsoft estate, starting with Exchange Server. This is not only going to make deep savings, but, as experience shows, also provides better service with less downtime than in the past. You choose the type of differentiated service you would like to offer your users. You decide what services to include, with some being mandatory like AV/malware/spam scanning.

Use this as the basis of creating your Private Cloud, and start to gradually migrate entire services to that new platform, whilst decommissioning older servers. Linux is also part of that x86 server estate, which raises the obvious questions about replatforming to an x86 server basis, away from proprietary RISC architectures.

Innovation is an area where particular emphasis should be applied. Rather than your IT organization putting the brakes on anything that looks unfamiliar, you should be encouraging innovation. The Private Cloud should be freeing up “time” of your administrators.

These same administrators could be working more on IT-Project liaison roles to speed time-2-value initiatives. They can be creating virtual environments for application developers to get the next applications off the ground using incremental innovation with fast development cycles to bring new features online.

Once you are running all virtual, you will have a very good idea of what things really cost, where to optimize CAPEX/OPEX levels, and how you compare against the wider industry in terms of offering fair value for IT services to your user community.

Let legislation about data jurisdictions and chains of custody also mature. Push vendors for better terms on per-processor licensing, allowing “pay-as-you-use” models to come into play. Not on their terms in their Clouds, but on your terms in your own Private Cloud initially. Remember, there are always choices in software. If Exchange+Microsoft won't cut it for you, then use an alternative, e.g. VMware+Zimbra ;-)

Public Cloud offerings are not fully baked yet, but they represent the next wave of cost consolidation. Recent high-profile outages at Amazon, Google, Microsoft as well as those “un-publicized” failures show seasoned veterans that there is probably another 2 years to go before full trust in Public Clouds is established with the necessary measures to vet Cloud Provider quality. Remember, one size does not fit all!

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC.

Holding the Fort against Virtualization – Targeted CPU Power

Imported from http://consultingblogs.emc.com/ published Apr 22 2011
After writing about the impact of CPUs in at least two previous blogs (Multi-Core Gymnastics in a Cloud World and Terabyte…well Everything Tera Actually! (Part 1- the Servers)) about a year ago now, I wanted to post an update to the theme.
It is still surprising for me to meet customers that wonder whether virtualization can possibly help their IT environment. They take the leapfrogging of CPUs in their stride, and miss the point that these leapfrogs are targeted at them in particular. Those “workloads” that could never be virtualized are filed into the “can’t – won’t virtualize” file and then, well, filed!
The rationale that underpinned that decision is not revisited as part of a long term strategy to increase datacenter efficiency or reduce costs – until finance is knocking on the door asking for wholesale cuts across the board.
Recently Intel released the Xeon E7 processor. Just when you thought the Nehalem with its 8 cores was really reaching those high-end workloads - think databases and big data computational applications - Intel has upped the ante.
This E7 is really targeted at those last bastions holding out against virtualization. With a 10-core processor package, a simple four-processor server of 4 rack units becomes a roaring 40-core super server addressing 2TB of RAM. Major brands have moved to adopt this wickedly powerful processor - Cisco with its UCS B440 M2 High-Performance Blade Server, or the rack-mounted HP ProLiant DL980 with 80 processing cores and 2TB RAM. Make no mistake, these are truly serious processing servers for enterprise workloads.
This is a significant change in the evolution of the x86 platform:
  • 2006 - dual-core processors at 65nm
  • 2009 - 4 cores on 65-45nm
  • 2010 - 6-8 cores on 32nm
  • 2011 - 10 cores on 32nm
Notice anything here? I see a concentrated effort by the entire x86 industry to finally bring out the full potential of that architecture to support the mass wave of virtualization sweeping across the IT landscape. There is a lot of talk about compliance, security and what-have-you, but the fact still remains that until very recently not all workloads could be comfortably virtualized.
With the E7, we move towards that magic 99% virtualization rate, with the 1% left for tricky extreme load systems that require some level of re-architecture to be able to fit onto VMware – replatforming.
By the way, Intel is not the only game in town. AMD is also making first-rate processors: the current AMD Opteron 6100 ("Magny-Cours") with its 12 cores, and the one everyone is waiting for, the "Interlagos" with 16 cores, coming this year. This just underpins how serious the industry is about getting "everything" virtualized.
What does this all Mean for Virtualization and the Cloud?
Perhaps this does not sound like much, but when measured against what “old x86 servers” could do this really is remarkable. I recall from my own architecture days with Microsoft designing messaging systems for 60,000+ users concurrently. What a horrendous task that was. What came out were rack loads of servers, cables, KVM, network switches, and all that labeling work;-).
With Exchange 5.5 (that is going back a bit), we would have had at least 60 mailbox servers for the 60,000 users – 1,000 users safely on a single server of 4 CPUs and 128GB of RAM. I could probably get 20+ of those mailbox servers running on a single quad-E7 system running VMware ESXi as a hypervisor. That means I could collapse perhaps 10 of those old racks of servers and cables into a single 4U server!
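That consolidation ratio is easy to sanity-check; a rough sketch using the figures above (the 20-VMs-per-host number is my own reading of the estimate in the text, not a sizing exercise):

```python
# Rough Exchange 5.5 -> virtualized-on-E7 consolidation estimate.
import math

users = 60_000
users_per_legacy_server = 1_000      # sizing figure quoted above
legacy_vms_per_e7_host = 20          # rough estimate from the text

legacy_servers = users // users_per_legacy_server                     # 60
e7_hosts_needed = math.ceil(legacy_servers / legacy_vms_per_e7_host)  # 3

print(f"legacy mailbox servers: {legacy_servers}")
print(f"quad-E7 hosts needed:   {e7_hosts_needed}")
print(f"consolidation ratio:    {legacy_servers // e7_hosts_needed}:1")
```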
This is a sobering thought. With the current generation of common commercial software running in most datacenters this range of consolidation is still possible. Intel and AMD are taking the x86 markets by storm. IT decision makers should examine the macro effect of their actions in the industry:
  • RISC systems are being attacked by large scale-out x86 systems
  • High-end availability features reserved by Intel and HP in the Itanium are creeping into the x86 lines
  • Proprietary operating systems based applications running on RISC-Unix and mainframes are being made available on Linux/Windows that will run well on x86 systems even virtualized
  • Hypervisor vendors are tuning and refining their ability to handle high-end workloads safely and still retain virtualization features of high availability and mobility
  • Consolidation is now not only limited to physical-to-virtual (P2V) but also virtual-to-virtual (V2V) onto more capable hardware (what I referred to as the continuous workload consolidation paradigm).
As consolidation ratios have reached such high potential on the x86 platform, the powers that be have brought the high-end reliability features into the x86 environment. Datacenters with critical business loads, think ERP and databases, could not really have imagined moving to the lowly x86 platform, and certainly not in a virtualized form.
That has just changed with a thunderclap. These systems compete well at all levels, and their pricing is vastly different than the prices set by the RISC/mainframe industry over the decades.
We are seeing equal improvements in our ability to exploit scale-out topologies, such as with Oracle RAC or EMC Greenplum with its massively parallel processing datawarehouse database. Coding languages are also going the multi-threaded and scale-out route - even that last 1% of workloads could be virtualized.
The x86 processors are not just for servers you know! We are seeing this commodity chip being placed in all kinds of enterprise systems. EMC is using Intel technology heavily in its own storage arrays providing fantastic performance, reliability, and price efficiency. The need for FPGA or PowerPC chips to power storage arrays just dropped further.
Don't get me wrong, the non-x86 chips are great, chock-full of features. However, those features are being migrated into the x86 family. I really do envisage that all the features of the Itanium will be migrated into the x86 - and the Itanium was one hell of a workhorse, able to compete with the mighty processor families out there: SPARC, PowerPC, mainframe RISC etc. It would not surprise me to see the Itanium come back as a new generation of x86 with a different name in a couple of years.
Why is this important for the CIO?
Seeing the transformative technologies coming out onto the market, CIOs are increasingly being exposed to the “you could do this” from the market and the “but, we shouldn’t do that” from slow internal IT organizations that are not well adapted to handle change.
I have never seen a time where the CIO needs to apply thoughtful strategies to drive through market efficiencies such as massive consolidation through virtualization and simultaneously balance the need to adopt this mind-set change with internal IT.
Internal IT needs to do some serious soul-searching. It can’t simply stick its collective head in the sand, or “try out “ technologies while the whole technology field has moved on a generation.
I have indicated previously that the CIO has to create the visionary bridge between where the IT organization currently is, and where it “needs” to be to service the business, remain relevant and drive change rather than simply following and dragging its heels.
Where virtualization is concerned, I personally feel that it is necessary to get as many workloads as possible virtualized.
The so-called strategic decisions regarding servers and hypervisor platform are important as enablers, but the goal should be to get maximum abstraction of the workload from the underlying hardware. This is worth the cost and the pain. Once you are there, then you are truly free to exercise sourcing and bargaining power over the physical element suppliers making up your IT landscape.
However, many organizations are still stuck on what server is the best, which hypervisor should I take, what about Amazon, what about Microsoft? Well what about them? Does it not make more sense to rationalize and virtualize your environment to the maximum, so that you can move onto these higher level abstractions?
Whilst that is underway, the IT industry will have found solutions to other areas such as compliance and data locality/security that you can literally step into.
CIOs should seriously consider getting outside help to rapidly move the organization to virtualization on commodity hardware i.e. x86. Be aware that this platform can sustain almost all workloads that are in typical datacenters.
Don’t let large data numbers daunt you. Don’t let internal IT railroad you into doing the same old expensive slow IT as in the past. Don’t get sidetracked. You have friends in the organization – the CFO/CTO/CEO.
CFOs are notorious for instituting widespread change backed up by hard economics. CTOs can stimulate and create the demand patterns that can only be serviced through elastic virtualized environments – Private Clouds. They can balance the hard economic cost cutting with the need to have flexible on-demand pay-as-you-use IT. The CEO wants to ensure shareholder return, and effectively have a successful firm for all stakeholders concerned.
Make the move to the “100% Virtualized Environment”. Push your vendors to ensure they make solutions running virtualized. Push vendors to provide licensing that fits the pay-as-you-use consumption model. Remember there is choice out there. Even for those notorious stacks such as ERP, database, and messaging - push for flexible licensing – otherwise list the alternatives that are waiting for your business if they do nothing!

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC.

“Resistance is futile!" – Resistance to Cloud; Change – Friend or Foe?

Imported from http://consultingblogs.emc.com/ published Feb 14 2011

Sorry for the title. I simply could not resist ;-)  The real title should have been “Change – Friend or Foe?” 

Actually, resistance to the Cloud is still a significant stumbling block in Cloud adoption. The ability of an organization to rapidly reconfigure itself for efficiency, as in Star Trek’s Borg, is quite literally a competitive advantage of epic proportions! 

Many organizations we engage with are still looking for their “perfect cloud”. Ultimately, these organizations are joining the Cloud Party late, if ever. There is no such thing as a perfect cloud. It is important to realize that the Cloud approach is just in its early days of affecting large-scale markets, although there are obvious notable successes such as NetFlix, which is using Cloud technologies to significantly enhance its reach, capability and delivery at speed using less bandwidth. That is distinctive, that is unique - voila Madame et Monsieur, one cup of Competitive Advantage as designed and ordered.

It is important to appreciate as Cloud offerings mature, and a greater number of organizations utilize Cloud in some form or another, that the ability to extract additional value will start to diminish. This is the opportunity cost of the Cloud offering. 

What does this all Mean for Virtualization and the Cloud?

Well, if the Private Cloud in particular is of such added value to organizations, then why are they not all snapping up Private Cloud infrastructures? There are a number of reasons for this. With the current state of the industry still evolving, there is still some confusion over how to move from standard server virtualization (most firms are using that in one way or another, although penetration levels vary wildly) to the full Cloud paradigm, with self-service consumption models, large-scale automation and elasticity being its signature traits.

Expect this to dramatically improve over time with more and more information being made available to the market in “understandable” and “simple” terms. Indeed we are seeing signs of this in early 2011. What were informal tentative enquiries in 2010 are evolving into many concrete large scale Private Cloud implementations. Further, even in relatively conservative economies, such as in western Europe, we are seeing the dramatic uptake of Private Cloud infrastructures. 

However, a more worrying phenomenon is that of IT departments within organizations actually slowing down or blocking the adoption of the Cloud. This is significant, as they are not only putting their own existence at risk by simply refusing to accept change, but more importantly, they may well be damaging the ability of the businesses they claim to serve in cost effectively servicing their markets or indeed expanding into more lucrative areas. 

Private Cloud through virtualization technologies needs to be driven by the business - or a business attuned CIO. The business is the biggest stakeholder in IT, and quite correctly should be driving the agenda for change. In turn the business needs to support and sustain the change that their IT organizations will need to endure to reach the Private Cloud levels of efficiency, control and choice. 

The IT organization can in this case be a proactive change agent, by pushing the agenda and helping the business understand the ways a Private Cloud will improve its ability to service its business lines at lower cost. Should IT organizations not do that, then they risk alienating their business stakeholders. Remember, the business can quite literally, with a credit card, buy Public Cloud services instantly. 

Well typically IT organizations respond with the “but what about the security in a Public Cloud - you need to use a Private Cloud instead” line. Then they stall or drag their heels in implementing the Private Cloud. Fortunately, as I write this blog from sunny San Francisco today, I will be attending the RSA Conference 2011, and they have a great schedule of keynotes and track sessions to tackle just this sort of fear mongering that has been plaguing the Cloud industry. I will be writing up some of the more pertinent sessions in short blogs in the next couple of days. 

However, while security, and more importantly data/information locality, play a very important role in Cloud adoption, there are strategies evolving that will tackle these issues head-on, coming out now in the form of 3rd-party products or indeed from RSA itself (disclosure – yes, I am an EMC Consulting employee ;-).

 

Why is this important for the CIO?

With this backdrop, the role of the CIO is becoming increasingly challenging. On the one hand, he/she needs to ensure that the business is being served, and served well. On the other hand, the CIO needs to ensure IT is sufficiently proactive and dynamic to support the business’ future needs. 

The CIO has the perfect role to be able to reduce the resistance to change through some clear messaging to the IT organization. Firstly, and very importantly, Private Cloud does not imply a mass reduction in jobs. Cloud is not an outsourcing model for operational staff typically. It is an enabler by making certain things in the IT supply and value chain happen quicker. 

Secondly, the CIO needs to get acceptance from the business to support the Private Cloud initiatives through informing business stakeholders about the benefits the organization as a whole will attain by taking this particular journey. 

Thirdly, and perhaps most importantly, the CIO needs to be seen as the change agent of the organization. Clear identification that change is desired should be communicated. Done proactively, this allows the future shape and form of the IT organization to be set. 

CIO’s should be checking with their senior managers regarding the level of resistance to change in general and Private Cloud adoption specifically. If security is the key concern, then speak with a firm that is specializing in Cloud Security. RSA is certainly one of the thought leaders in this respect. There are others. The point is not to dither and get derailed – get informed instead. 

CIO’s should be aware there is significant expertise, experience and case studies of actual usage of Private Cloud. Speak to your peers. Evaluate how they facilitated change. Verify if their Private Clouds delivered significant value to their organizations. 

Looking only internally assumes that the internal IT organization is more or less perfect and already knows everything there possibly is to learn. As a consultant I can easily state that this is not the case - there are usually many things that need improving. Most organizations are geared up to maintain and grow; innovation and improvement are not typically integrated at every level. In some cases operations literally takes up all the time, and indeed most of the budget, simply maintaining the status quo. 

External partners can help the CIO get the most from internal IT staff, who can be informed, educated, encouraged and motivated – and ultimately enjoy delivering superior value to the business. 

Is that not worth CIOs and IT staff investing in the Private Cloud? The CIO is the key to fostering change, supporting and nurturing it through the early teething phases, and getting IT back on track in servicing – and indeed challenging – the business to do better and better!

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC.

New Clouds on the Horizon? - Resource Clouds

Imported from http://consultingblogs.emc.com/ published Feb 1 2011

As we start to see the Cloud industry mature, a term has started cropping up everywhere: the "Resource Cloud". The terminology is interesting in that it is an early signal of a certain level of maturity, where cloud resources are rapidly approaching commodity status.

I say "a certain level" because Clouds are far from being a commodity currently. Nonetheless, this is allowing service providers to understand that they are a critical part of the puzzle as the "Cloud of Clouds" Service Provider, regardless of which virtualization technology enables the pooling of virtual datacenter resources.

In future, the idea of asking for a VMware Cloud, a Citrix Cloud or a Microsoft Cloud does not really make much sense. It is the capabilities that are needed that should be on the discussion table. The amount of compute power you need, the memory, network and storage, as well as particular functions such as chargeback for the virtual estate, become the discussion elements for enterprises wanting to buy or create Private Cloud resources.

Interestingly, this is also a great opportunity for CIOs and IT to change the mindset of the organizations they serve in how resources are requested. Instead of a development group asking for 2 CPUs at 2.2GHz each with a local disk for the operating system and disks for data files, the conversation can and should shift gear. IT can finally engage with developers and integrate their needs by insisting that they no longer quote a machine specification they just found on the website of their server vendor of choice, but instead describe the level of application concurrency required by the service they are creating for their intended customers (a sketch of such a capability-based request follows the list below).

For example:

1. How many threads do you need to have running concurrently (vs. how many cores)?

2. Does your application support increasing the thread count (vs. immediately asking for the maximum number of cores – after all, hot-add of vCPUs is supported in most hypervisors)?

3. How much working memory do you expect per thread of application operations? How does that scale?

4. Are there dependencies with other application elements such as a web server somewhere or a database instance?

5. What ports do you need access to for your application from the perspective of the users and the application itself?

6. Are there inter-service dependencies?

7. ....
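
To make this concrete, here is a minimal sketch of what such a capability-based request might look like if captured as structured data rather than as a hardware shopping list. All of the names, fields and the sizing policy at the end are hypothetical – this is not any particular vendor's API.

```python
from dataclasses import dataclass, field

@dataclass
class CapabilityRequest:
    """Hypothetical capability-based resource request: what the application
    needs, rather than a hardware specification."""
    service_name: str
    concurrent_threads: int               # Q1: threads needed, not cores
    supports_thread_scaling: bool         # Q2: can the thread count grow later?
    memory_mb_per_thread: int             # Q3: working memory per thread
    dependencies: list = field(default_factory=list)   # Q4/Q6: web servers, databases, other services
    exposed_ports: list = field(default_factory=list)  # Q5: ports needed by users and the app

# A developer describes the service...
order_service = CapabilityRequest(
    service_name="order-entry",
    concurrent_threads=64,
    supports_thread_scaling=True,
    memory_mb_per_thread=256,
    dependencies=["web-frontend", "oracle-db-instance"],
    exposed_ports=[443, 8443],
)

# ...and IT (or an automated policy) derives a starting allocation from it.
vcpus = max(2, order_service.concurrent_threads // 8)   # assume 8 threads per vCPU
memory_gb = order_service.concurrent_threads * order_service.memory_mb_per_thread / 1024
print(f"Initial sizing: {vcpus} vCPUs, {memory_gb:.0f} GB RAM")
```

The point is that the sizing at the end becomes an IT policy decision (and can be adjusted elastically later), rather than a number the developer pulled from a server vendor's website.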

This is important in crafting the entire working environment for a service (not just the individual applications). The private cloud administrators would then have the ability to package this lot into a container that can be managed as a block of virtual machines - they can all be started together, or shifted as a block to other machines whilst running statefully.

That package description of an entire service environment would eventually be entered into the Service Catalog. Authorized users could then order that package, which is started as an object in its own right, and the service is up and running in minutes instead of the days or even weeks typical of traditional environments, particularly for multi-tier application stacks.

Just to reiterate that last point with an example, as it represents a new capability in the datacenter. An entire SAP environment (web, application and database servers plus any helper/support systems and customization) can be packaged right up. An organizational element (department, division etc.) could order that entire SAP instance from the IT department as, say, a development environment to test new business process functionality, and start up the entire environment. Different divisions within a global enterprise may even have their own SAP instances, and they could do the same. The entire environment would be up and running in minutes – lock, stock and barrel! That is real agility.
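
As a rough illustration of what such a packaged, catalog-ready environment could look like – the structure and figures below are invented for this sketch and do not represent OVF, vApp or any specific product format:

```python
# Hypothetical Service Catalog entry: a complete multi-tier environment
# that is ordered, started and moved as a single unit. Figures are illustrative.
sap_dev_package = {
    "catalog_item": "SAP Development Environment",
    "version": "1.0",
    "virtual_machines": [
        {"role": "database", "count": 1, "vcpus": 8, "memory_gb": 64},
        {"role": "app",      "count": 4, "vcpus": 4, "memory_gb": 16},
        {"role": "web",      "count": 2, "vcpus": 2, "memory_gb": 8},
        {"role": "support",  "count": 1, "vcpus": 2, "memory_gb": 4},  # helper/customization VM
    ],
    "start_order": ["database", "app", "web", "support"],  # brought up as one block
    "network": {"internal_vlan": True, "external_ports": [443]},
    "chargeback": {"unit": "per_day", "rate_eur": 120},     # illustrative figure only
}

def provision(package):
    """Walk the package in start order – in a real Private Cloud this would be
    handed to the orchestration layer rather than printed."""
    for role in package["start_order"]:
        for vm in package["virtual_machines"]:
            if vm["role"] == role:
                print(f"Starting {vm['count']} x {role} VM(s): "
                      f"{vm['vcpus']} vCPU / {vm['memory_gb']} GB each")

provision(sap_dev_package)
```

In a real Private Cloud, the orchestration layer would consume this description and bring the whole block up in the right order, rather than a print loop.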

What does this all Mean for Virtualization and the Cloud?

Quite simply, that commodity status is approaching rapidly, with all the commensurate economies of scale, functionality taken for granted, and huge cost efficiencies. Indeed, businesses may well start looking for the “next big thing” once they are fully switched over to the Cloud.

This is important to the evolution of Cloud economics. New models may well be needed, and certainly notions such as “cloud fatigue” or indeed “customized market clouds” will start to hit the drawing boards.

We are a long way from that right now. Cloud acceptance varies wildly across the Americas and Europe. The Pacific Rim is pushing very strongly indeed in this direction; the level of interest there, combined with little or no legacy environment to worry about, allows change to be implemented quickly.

Why is this important for the CIO?

The journey to the Private Cloud through “Resource Clouds” makes the idea of agile IT an easy-to-understand and intuitively usable concept. A CIO has the chance to shape those high-level discussions, and the IT department can start creating Service Catalog elements in which commonly used application stacks are packaged as literally a single line item in the catalog.

Discussions such as “dear CIO, I am launching a major marketing campaign across ASIAPAC, can you provide me with the needed resources – something like 100 servers in 3 different countries” – followed by months of discussions/negotiations/justifications, with development of the campaign taking place in parallel and the poor project manager trying to make sure the needed resources actually materialize – may well be a thing of the past!

This may well morph into the department selecting the service catalog item provided by the IT organization, triggering an automated workflow that charges internally for those resources and then provisions the request in the necessary geographies – taking perhaps just an hour to do so.
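
A minimal sketch of such a workflow, assuming hypothetical function names that stand in for whatever chargeback and orchestration systems an organization actually runs:

```python
# Hypothetical self-service workflow: charge the cost center first, then
# provision in each requested geography. All names are illustrative stand-ins.

CATALOG = {
    "asiapac-campaign-stack": {"servers_per_region": 100, "cost_per_server_per_day_eur": 3.0},
}

def charge_cost_center(cost_center, amount_eur):
    """Stand-in for an internal chargeback/ERP call."""
    print(f"Charging {amount_eur:.2f} EUR/day to cost center {cost_center}")
    return True   # assume approval for the sketch

def provision_region(item, region, servers):
    """Stand-in for the cloud orchestration API."""
    print(f"Provisioning {servers} servers for '{item}' in {region}")

def order(item, cost_center, regions):
    spec = CATALOG[item]
    daily_cost = spec["servers_per_region"] * spec["cost_per_server_per_day_eur"] * len(regions)
    if charge_cost_center(cost_center, daily_cost):   # chargeback happens before provisioning
        for region in regions:
            provision_region(item, region, spec["servers_per_region"])

order("asiapac-campaign-stack", "MKT-4711", ["SG", "JP", "AU"])
```

The design point is simply the ordering: the internal charge is approved first, and only then does provisioning fan out across the requested geographies.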

This is a dramatic change for the business. This goes beyond Business-IT alignment. This is IT driving the business. The old constraints and blocks are removed, and the business may well find it needs to move quicker to respond to the near instant provisioning of resources.

The CIO can also help here, as the experience gained in reshaping IT organizational capability through Cloud technologies can be leveraged in the core business itself. Indeed, business initiatives may also move closer to a Just-In-Time model, rather than the current style of planning a battle campaign - capturing demand at its peak without the heavy up-front investment in stimulating that demand among early adopters.

The spontaneity of capturing consumer buying sentiment through, say, a media campaign composed of TV/Internet advertisements coupled with a catchy song may well be what is needed to get millions of consumers rushing out and buying the product! Being able to do this rapidly at low cost and hit a target base of millions if not billions has got to be a good thing.

The CIO has the power to enable the organization to capture that type of business through synergy in operations, global collaboration capabilities (high-fidelity conferencing, VoIP etc.), leaping ahead of what the business is currently asking for, and setting the standard in rapid service provisioning, so that the process innovation IT itself has just undergone can be filtered through to the business. This facilitates and encourages creativity and learning all round.

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC.

Windows 7- Virtual Desktops… the way to go (Part 2)

Imported from http://consultingblogs.emc.com/ published August 29 2010

What Does Gartner have to say about this?

 

In the blog Windows 7- To Virtualize or not to Virtualize - that is the question! I talked about a golden opportunity for organizations to move to virtual desktop infrastructures (VDI) instead of following a traditional desktop OS upgrade path.

As an update to this pressing concern for IT departments everywhere, Gartner has weighed in on the issue, observing (quote):

 

"Whether replacing or upgrading PCs, it is clear that Windows 7 migration will have a noticeable impact on organisations' IT budgets,"

"Based on an accelerated upgrade, we expect that the proportion of the budget spent on PCs will need to increase between 20 per cent as a best-case scenario and 60 per cent at worst in 2011 and 2012,"

"Assuming that PCs account for 15 per cent of a typical IT budget, this means that this percentage will increase to 18 per cent (best case) and 24 per cent (worst case), which could have a profound effect on IT spending and on funding for associated projects during both those years."

 

This clearly hits the nail on the head – a standard desktop OS refresh is going to make a serious dent in the ‘CIO purse’ if done following the traditional approaches that are prevalent. This is serious money that could be spent on projects that actually generate direct revenue for the firm!
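
A quick back-of-the-envelope check of those Gartner figures (my own arithmetic, assuming the overall IT budget stays roughly flat – not Gartner's model):

```python
# Gartner's quoted baseline: PCs account for ~15% of a typical IT budget.
baseline_pc_share = 0.15

# Best case: PC spend rises 20%; worst case: 60%.
for label, uplift in [("best case", 0.20), ("worst case", 0.60)]:
    new_share = baseline_pc_share * (1 + uplift)
    print(f"{label}: PC share of budget rises from 15% to {new_share:.0%}")
```

In other words, the 18 and 24 per cent figures simply apply the 20–60 per cent uplift to the 15 per cent baseline.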

What does this mean for Corporate IT?

Essentially, Corporate IT will lock up significant resources simply performing this upgrade. Those that delay will be well behind the curve (according to Gartner at least), although not being an early adopter has its own intrinsic advantages. The cost of upgrading hardware, whether for performance or compatibility reasons, is predicted to be significant.

This is not simple CAPEX, but the opportunity cost of not pursuing those projects/programmes that would directly influence the firm’s bottom line. An innovation cost, if you will.

Even as they strive to innovate, organizations are still employing traditional approaches to solving their desktop OS challenges and the corresponding application stack. However, the dynamics of our time almost mandate a complete rethink of the traditional ‘it’s time to refresh our desktop OS/desktop suites – let’s start a BIG project’ approach. This is a lost opportunity for a CIO to radically shake up the IT structures and behavior built up over decades in-house.

Such a lock-up of resources, wholesale disruption of existing revenue-generating projects and the outlay of performing the Windows 7 upgrade itself all suggest that alternatives to the migration should be seriously examined.

I use Windows 7 extensively – and like it! On the other hand, I absolutely hate the upgrade process: finding applications that still work, new generations of software, and of course the now ‘mandatory’ hardware upgrade – although everyone always insists this is not necessary ;-). I did this for Windows 3.1x/ME/XP/Vista/7 – well, you can understand the desktop OS fatigue. Imagine the Corporate-OS-Upgrade-Fatigue for thousands of users!

At a time when leanness is emphasized, cost-saving/cost-avoiding projects have priority over innovative projects, there is a general focus on optimizing the IT infrastructure, and IT personnel numbers are being pared down, another approach is needed than the current mass exodus from Windows XP to Windows 7.

Companies are not moving to Windows 7 simply because future support for the old platform is lacking, or because Windows 7 is ‘shiny’ and attractive. There is an element of anxiety about being left behind – the group/herd instinct to follow the others. However, distinctiveness and variety are the drivers of sustained competitive advantage and long-term, relationship-focused revenue streams.

The CIO almost owes it to the organization to ‘think outside the box’, be a maverick, look for distinctiveness, not follow the ‘herd’ instinct prevalent in the organization, and ensure IT is truly a partner in achieving organizational goals. This includes providing an excellent work environment for employees, and breaking the shackles of the desktop and the men-in-grey attitude still plaguing large organizations. Learn from the smaller guys. Be nimble, agile and creative!

So why is all this important for CIOs and Organizations?

Innovation streams mandate a series of upgrades to reach the end-state that was originally desired. The main product to roll out should be the capability to have virtual desktops located within a virtual infrastructure that should already have been designed in a rock-solid fashion. For exceptions requiring a mobile offline desktop, allow the virtual desktop to be delivered as an offline desktop (but still encapsulated using the virtualization technologies). This can be synchronized back with its online counterpart – replication technologies are really advanced these days. Communication technologies are also powerful and usually available in one form or another. At the very end of the chain should sit the absolute need for a pure local traditional OS install on the user device. Essentially, virtual desktop infrastructures provide an enterprise-class functional container for desktop OSs.

These are the same great features that are partially driving server virtualization – why not use them on the desktop? Applications should also clearly be virtualized so that they are independent of hardware and user profiles. They should be simple to upgrade and roll out – with the minimum number of images in use. Why have thousands of variants to support?

This opportunity should also be taken to do a complete cleanup of the existing environment. Windows XP left a lot of rubbish hanging around in registries, file systems, home profiles and questionable applications installed locally.

The new virtual desktop should be lean. There should be a complete decoupling of the OS from the user data/profile. The desktop should be really simple to upgrade in future. Applications should be containerized so that they can run on different OS versions. VMware ThinApp technology supports this notion very well, and Citrix and Microsoft also provide their own encapsulation technologies (e.g. App-V).

Indeed, the encapsulation technology providing the virtualization should itself be independent of the desktop OS and provide complete freedom to choose – that should allow organizations to break out of the straitjacket of the traditional desktop OS vendors. The more desktop OSs supported in the VDI solution, the better. Mainstream OS support should of course be available, but support for up-and-coming variants such as Ubuntu or Apple's offerings will allow an organization to rapidly re-engineer its IT to suit its needs – and negotiate tough discounts on the OS, the prime cost component currently in virtualization solutions.
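
As a purely illustrative sketch of that decoupling argument – the structure below is invented and does not correspond to ThinApp, App-V or any vendor's actual packaging format – a lean virtual desktop can be composed from three independent layers: a golden OS image, containerized applications, and the user's data/profile:

```python
# Illustrative composition of a virtual desktop from independent layers.
# None of these identifiers correspond to a real product format.

golden_images = {
    "win7-x64-corp": {"os": "Windows 7 SP1 x64", "patched": "2010-08"},
    "ubuntu-lts":    {"os": "Ubuntu 10.04 LTS",  "patched": "2010-08"},
}

virtualized_apps = {
    "office-suite": {"package": "office-suite.pkg", "runs_on": ["win7-x64-corp"]},
    "crm-client":   {"package": "crm-client.pkg",   "runs_on": ["win7-x64-corp", "ubuntu-lts"]},
}

def compose_desktop(user, image, apps):
    """Assemble a desktop at logon time: OS image + app containers + user profile."""
    if image not in golden_images:
        raise ValueError(f"Unknown golden image: {image}")
    for app in apps:
        if image not in virtualized_apps[app]["runs_on"]:
            raise ValueError(f"{app} has no container built for image {image}")
    return {
        "user": user,
        "os_image": image,
        "apps": apps,
        "profile": "\\\\profiles\\" + user,   # user data kept entirely outside the image
    }

print(compose_desktop("jdoe", "win7-x64-corp", ["office-suite", "crm-client"]))
```

Upgrading the OS then means swapping the golden image; the application containers and the user profile stay untouched – which is exactly why the number of images to maintain can remain small.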

It does not really matter that VDI solutions are not a 100% match for the functionality of a local desktop OS installation – there will always be some odd hardware or software that does not quite work out. That is why innovation streams are important – use a hybrid approach, with the mass of desktops in the virtual environment.

Over time, as more functionality becomes available in VDI solutions (and there is already a 98%+ match), the sheer number of features around delivery efficiency and data security will mandate this as the principal way to deploy Windows X or whatever desktop OS is your favorite.

Choice and control coupled with efficiency – doesn’t sound too bad! Welcome to the Private Cloud and Desktop-as-a-Service! Make the jump to VDI now and don’t lose this golden opportunity!

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC.


Multi-Core Gymnastics in a Cloud World

Imported from http://consultingblogs.emc.com/ published June 27 2010
I would like to round up some really important recent announcements in the world of x86 processors, networking and related areas, and their influence on the nature of the Cloud.
Cool stuff, and very Cloud-relevant when examining density. It builds on some of the blogs I posted recently on scalability, the IT renaissance, new paradigms and their shift in our IT perceptions, and the nature of competitive advantage based on speed.
The enablers that allow us to spring into the Cloud world are essentially there. We are moving beyond any processing capability the virtualization world has had before.
One vendor after another in the fields of processors, storage, networking, datacenter designers, cabling and standards groups are providing the necessary energy to spring into a vastly different world of IT. So the technology part is more or less on track to deliver the Cloud in all its different flavors.
The practical application of this power can be seen in the links below. The sum of parts and balanced system design paradigms are critical determinants of competitive advantage!
Multi-core flexing is certainly a key enabling driver for the move to the Private Cloud. More importantly, however, it is a key driver for the transition from Private to Public Clouds and the associated Cloud Service Providers.
Think of the following in future, when we have thousands of cores on a single board and the ability to virtualize a hundred thousand virtual machines on a single physical machine (a rough back-of-the-envelope illustration follows the list):
  1. Who can afford these extremely dense configurations in the future?
  2. Dense configurations are needed for the economies of scale to drive down cost for the service consumer
  3. Density is a direct factor in which application tiers can be successfully transitioned to the cloud – think 100TB databases with hundreds of driving database instances!
  4. Does it make sense to have your own IT datacenter in the future when the Cloud Service Providers (CSPs) have access to the massive industrial Cloud factory plants – and can add more as needed quickly?
  5. Is this the precursor of moving from today’s mega-datacenters (with 10K+ servers) to Hyper-Datacenters (millions of processing cores)?
  6. Are we really that far away from the Planetary Cloud: The Rise of the Billion-Node Machine?
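
As a rough back-of-the-envelope illustration of the density point – every figure below is an assumption invented for this sketch, not a vendor roadmap:

```python
# Hypothetical future host: "thousands of cores on a single board".
cores_per_host = 10_000
vcpus_per_core = 20            # aggressive vCPU oversubscription assumption
avg_vcpus_per_vm = 2

vms_per_host = cores_per_host * vcpus_per_core // avg_vcpus_per_vm
print(f"VMs per physical host: {vms_per_host:,}")   # 100,000 VMs on one machine

# Scale out to a hyper-datacenter with millions of cores.
hosts = 1_000
print(f"Hyper-datacenter: {cores_per_host * hosts:,} cores "
      f"hosting ~{vms_per_host * hosts:,} VMs")
```

Only a handful of providers could afford plants on this scale, which is precisely why density pushes the economics towards Cloud Service Providers and hyper-datacenters.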
Why are these deliberations important for the Cxx decision makers? Well, if IT is morphing to IT-aaS, then one needs to carefully think about the current IT value chain that is being employed in organizations of all kinds. Does it make sense to invest in the current mode of thinking about IT, or should there be a massive investment in cloud paradigms for your organization? The first step, and a very important one it is too, is the move to the Private Cloud.
There is a substantial level of learning that is required, from the IT departments all the way to the CIO/CTO of the organization. New ways of providing services, new ways of thinking about your supply chain for IT, new demand vectors in the organization, new ways of thinking of competitive advantage. Perhaps there are other ways to utilize your IT ‘size and experience’ – for example by becoming a cloud provider yourself within the industry concerned. The very definition of a ‘market’ and ‘market segmentation’ is open to interpretation.
This learning is essential for your organization to determine the best way of using the assets you currently have, and to prepare for the jump to Public/Hybrid Clouds and the sensible divestiture of your IT capital assets. What service models will you then employ? Which IT management techniques will be used? Generally, being aware of the IT resources that you as CIO/CTO have at your disposal to solve the challenges the business expects you to deliver on will be critical knowledge!
Project managers, business transformation leaders, developers and creative artists, product innovators and new product development shops need to think about things on a scale that has hitherto been unprecedented. Forget the old mantra of ‘oh, we can’t do that – it would be too expensive, too complex, the technology does not exist, etc.’ and move to ‘everything is possible – let’s see how we can do that’. All the old barriers to innovation (technologically speaking) are crumbling.
Thinking about the Cloud is very important at the Cxx level right now. This is more than simply marketing, product pushing, or the next hype cycle. The Cloud provides an abstraction model upon which a CIO and the Business in general can have a meaningful dialogue for future deployment of resources (perhaps as far out as the next 10-15 years even).
There is a lot of CIO attention on the ‘how’ to get to the Cloud and indeed with ‘which’ technologies. However, technology is looking after itself very nicely at the moment, thank you. The area where I see few clients expressing intentions is in ‘the Cloud is there now – what do we do with it?’
This mindset shift, and the reframing of existing challenges, will be the key to energizing the CIO/CTO’s ability to challenge the Business for new ideas (instead of the other way round, as is currently the case).
What would the Business do if all the current constraints, annoyances and IT limitations were removed?
The Business would be 100% free to innovate how and when it wants, at the pace it wants. IT is not just a cost center, or simply an enabler for the Business – it is the business! IT is ready to reconfigure assets on the fly (the Cloud’s elastic scalability characteristic) to put those plans into action immediately! IT is ready and open for Business!

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC.