
August 2010

Windows 7- Virtual Desktops… the way to go (Part 2)

Imported from http://consultingblogs.emc.com/ published August 29 2010

What does Gartner have to say about this?

 

In the blog post 'Windows 7- To Virtualize or not to Virtualize - that is the question!' I talked about a golden opportunity for organizations to move to virtual desktop infrastructures (VDI) instead of following a traditional desktop OS upgrade path.

As an update to this pressing concern for IT departments everywhere, Gartner has weighed in on the issue, observing:

 

"Whether replacing or upgrading PCs, it is clear that Windows 7 migration will have a noticeable impact on organisations' IT budgets,"

"Based on an accelerated upgrade, we expect that the proportion of the budget spent on PCs will need to increase between 20 per cent as a best-case scenario and 60 per cent at worst in 2011 and 2012,"

"Assuming that PCs account for 15 per cent of a typical IT budget, this means that this percentage will increase to 18 per cent (best case) and 24 per cent (worst case), which could have a profound effect on IT spending and on funding for associated projects during both those years."

 

This clearly hits the nail on the head – a standard desktop OS refresh is going to make a serious dent in the 'CIO purse' if done following the traditional approaches that are prevalent. This is serious money that could be spent on projects that actually generate direct revenue for the firm!
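
A quick back-of-the-envelope check of Gartner's arithmetic, using only the percentages quoted above, shows how the 15 per cent baseline stretches to the 18 and 24 per cent figures:

```python
# Back-of-the-envelope check of the Gartner figures quoted above.
baseline_pc_share = 0.15  # PCs as a share of a typical IT budget

for label, uplift in [("best case", 0.20), ("worst case", 0.60)]:
    new_share = baseline_pc_share * (1 + uplift)
    print(f"{label}: {baseline_pc_share:.0%} x (1 + {uplift:.0%}) = "
          f"{new_share:.0%} of the IT budget")

# best case : 15% x (1 + 20%) = 18% of the IT budget
# worst case: 15% x (1 + 60%) = 24% of the IT budget
```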

What does this mean for Corporate IT?

Essentially, Corporate IT will lock up significant resources in simply performing this upgrade. Those that delay will be well behind the curve (according to Gartner at least), although not being an early adopter has its own intrinsic advantages. The cost of upgrading hardware, whether for performance or compatibility reasons, is predicted to be significant.

This is not simply CAPEX, but the opportunity cost of not pursuing those projects/programmes that would directly influence the firm's bottom line. An innovation cost, if you will.

Organizations, even in a move to innovate, are still employing traditional approaches to solving their desktop OS challenges and the corresponding application stack. However, the dynamics of our time almost mandate a complete rethink of the traditional 'it's time to refresh our desktop OS/desktop suites – let's start a BIG project' approach. This is a lost opportunity for a CIO to radically shake up the IT structures/behavior built up in-house over decades.

Such a lock-up of resources, the wholesale disruption of existing revenue-generating projects and the outlay of performing the Windows 7 upgrade itself would suggest that 'alternatives' to the migration should be seriously examined.

I use Windows 7 extensively – and like it! On the other hand, I absolutely hate the upgrade process: finding applications that work, new generations of software, and of course the now 'mandatory' hardware upgrade – although everyone always insists this is not necessary ;-). I did this for Windows 3.1x/ME/XP/Vista/7 – well, you can understand the desktop OS fatigue. Imagine what the Corporate-OS-Upgrade-Fatigue is like for thousands of users!

At a time when leanness is emphasized, cost-saving/cost-avoiding projects take priority over innovative projects, the focus is on optimizing the IT infrastructure, and IT personnel numbers are being pared down, another approach is needed than the current mass exodus from Windows XP to Windows 7.

Companies are not moving to Windows 7 simply because platform support will eventually run out, or because Windows 7 is 'shiny' and attractive. There is an element of 'anxiety' about being left behind – the group/herd instinct to follow the others. However, distinctness and variety are the drivers of sustained competitive advantage and long-term, relationship-focused revenue streams.

The CIO almost owes it to the organization to ‘think out of the box’, be a maverick, look for distinctiveness, not follow the 'herd' instinct prevalent in the organization and ensure IT is truly a partner to the organization to achieve organizational goals. This includes providing an excellent work environment for employees, and breaking the shackles of the desktop and the men-in-grey attitude still plaguing large organizations. Learn from the smaller guys. Be nimble, agile and creative!

So why is all this important for CIOs and Organizations?

Innovation streams mandate a series of upgrades to reach the end-state that was originally desired. The main product to roll out should be the capability to have virtual desktops located within the virtual infrastructure, which should already have been designed in a rock-solid fashion. For exceptions requiring a mobile offline desktop, allow the virtual desktop to be delivered as an offline desktop (but still encapsulated using the virtualization technologies). This can be synchronized back with its online counterpart – replication technologies are really advanced these days. Communication technologies are also powerful and usually available in one form or another. At the very end of the chain should be the absolute need to have a pure local traditional OS install on the user device. Essentially, virtual desktop infrastructures provide an enterprise-class functional container for desktop OSs.
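
As a purely illustrative sketch (the tiering logic and attribute names are my own, not taken from any particular VDI product), the order of preference described above might be captured like this:

```python
# Illustrative only: rank delivery models in the order argued above -
# hosted virtual desktop first, encapsulated offline desktop second,
# and a traditional local OS install only as the last resort.

def choose_delivery_model(needs_offline_mobility: bool,
                          needs_local_hardware: bool) -> str:
    if needs_local_hardware:
        # e.g. exotic peripherals or software that cannot be virtualized yet
        return "local traditional OS install (exception case)"
    if needs_offline_mobility:
        # still encapsulated by the virtualization layer, synchronized
        # back to its online counterpart when connectivity returns
        return "offline virtual desktop (checked out, replicated back)"
    return "hosted virtual desktop in the virtual infrastructure"

print(choose_delivery_model(needs_offline_mobility=True,
                            needs_local_hardware=False))
```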

These are the same great features that are partially driving server virtualization – why not use them on the desktop! Applications should also clearly be virtualized to allow them to be independent of hardware and user profiles. They should be simple to upgrade and roll out – with the minimum number of images being used. Why have thousands of variants to support?

This opportunity should also be taken to do a complete cleanup of the existing environment. Windows XP left a lot of rubbish hanging around in registries, file systems, home profiles and questionable applications installed locally.

The new virtual desktop should be lean. There should be a complete decoupling of the OS and the user data/profile. The desktop should be really simple to upgrade in future. Applications should be containerized so that they can run on different OS versions. VMware ThinApp technologies support this notion very well, and Citrix/Microsoft also provide their own encapsulation technologies (e.g. App-V).
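
To make the decoupling idea concrete, here is a hypothetical 'composed desktop' description – the field names are invented for illustration and are not taken from ThinApp, App-V or any other product:

```python
# Hypothetical manifest for a lean, composed virtual desktop:
# base OS image, virtualized application packages and the
# user profile/data are managed as independent layers.
composed_desktop = {
    "base_image": {"os": "Windows 7 Enterprise x64", "version": "2010-08"},
    "app_layers": [
        {"name": "office-suite", "packaging": "application virtualization"},
        {"name": "line-of-business-app", "packaging": "application virtualization"},
    ],
    "user_layer": {"profile": "roaming", "data": "redirected to shared storage"},
}

# Upgrading the OS then means swapping one layer,
# not rebuilding thousands of individual PCs.
composed_desktop["base_image"] = {"os": "next desktop OS", "version": "tbd"}
print(composed_desktop["base_image"])
```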

Indeed the encapsulation technology providing the virtualization should itself be independent of the desktop OS and provide complete freedom to choose – that should allow organizations to break out of the straitjacket of the traditional desktop OS vendors. The more desktop OSs supported in the VDI solution, the better. Mainstream OS support should of course be available, but support for up-and-coming variants such as Ubuntu or the Apple offerings will allow an organization to rapidly re-engineer its IT to suit the needs of the organization – and negotiate tough discounts on the OS, the prime cost component currently in virtualization solutions.

It does not really matter that there is not a 100% match of VDI solutions to the functionality of a local desktop OS installation – there will always be some odd hardware/software that does not quite work out. That is why innovation streams are important – use a hybrid approach with the mass of desktops in the virtual environment.

Over time as more functionality becomes available in VDI solutions (and there is already a 98%+ match), the sheer number of features regarding delivery efficiency and data security will mandate this as the principal solution to deploy a Windows X or whatever desktop OS is your favorite.

Choice and control coupled with efficiency. Doesn't sound too bad! Welcome to the Private Cloud and Desktop-as-a-Service! Make the jump to VDI now and don't lose this golden opportunity!

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC.


Windows 7- To Virtualize or not to Virtualize - that is the question!

Imported from http://consultingblogs.emc.com/ published August 21 2010

Whether 'tis nobler to rollout a standard Windows 7 desktop,... OR to take arms against a sea of troubles,
And by virtualizing desktops end them?

 

Many of the current discussions we at EMC Consulting (Cloud & Virtual Data Center Practice) are having with IT Managers, CIOs, CTOs and Architects/Designers are typically focused on understanding the Cloud notion, its consumption and management models, and of course 'how to build one'. Frequently the 'what does it mean for us?' question pops up.

Depending on whom you're speaking with, the answer will vary in granularity. An administrator asks in terms of daily activities, an IT Manager in terms of service delivery and orientation, and the Cxx level is focused more on realizing sustainable competitive advantage through Business IT, amongst other themes.

With the current need to phase out Microsoft Windows XP on the CIO radar, engaging IT resources/personnel for the foreseeable future, and so many other areas of IT strategy still to realize, the move to Microsoft Windows 7 is rather significant. Many are taking the approach of a ‘simple’ desktop operating system (OS) upgrade. There are yet others utilizing the opportunity to replace parts of their desktop estate with long overdue PC/laptop replacements. These strategies are fine if the end result is simply to get rid of Windows XP and come back into the Microsoft ‘circle of trust’. Compounding the situation is the application stack (Ask-not-what-you-can-do-for-your-cloud-but-what-your-cloud-can-do-for-you) - and yet another migration.

Windows 7, different perhaps from the advent of Windows Vista in terms of its timing, comes at a turning point in the IT industry. The desire and interest to move away from traditional models of IT, resource consumption, and device form factors has never been so strong. Indeed the very notion of a desktop operating system is being challenged. We often hear this very same thought in envisioning workshops, along with the question of whether it can be done right now. Not an easy question to answer.

Don't get me wrong here. I am myself an ardent user of Windows 7, coming from Vista (yes, I installed that too – embarrassing, I know), and of course the venerable XP. The functionality is fine, and Microsoft have done a good job of creating something useful. However, it is not really Windows 7 that I use daily. It is the applications and the browser that I mainly use. Certainly then, the OS could perhaps be a bit leaner – or, as some virtualization vendors are doing, practically remove the need for an OS by creating bare-metal desktop hypervisors (Citrix and VMware initially).

Corporate IT Missing A Trick?

Based on the macro movement in the industry, the Cloud tsunami, Everything-as-a-Service and unprecedented levels of connectivity to the Internet, perhaps the idea of rolling out Windows 7 needs to be thought of in a different light.

We have discussed with many organizations embarking on virtual desktops as part of their desktop estate mix whether Windows 7 should not indeed be treated as an innovation stream. One stream of many that would herald the move to the 'digital-nirvana' user workspace end-state (which is of course different for every organization).

By treating Windows 7 as an innovation stream, a collection of features desirable for an organization to possess, we come closer to the idea of Windows 7 being a stepping stone on a path. The implication is that constant change will accompany the 'desktop' estate for all organizations – in that new features can be bundled and released incrementally rather than through a colossal OS upgrade.

The very term ‘desktop OS’ is starting to look tarnished and is in all probability a complete misnomer these days.

EMC Consulting has a very strong practice supporting the migration to Windows 7, and together with customers, a different product mix is being implemented. Large swathes of virtual desktops hosted in a private cloud are being rolled out, with some use-cases mandating a traditional local install approach in the interim. However, in most cases the applications are being virtualized to ease the move to delivery via Cloud technologies. Some applications have already moved lock-stock-and-barrel to Private/Public Clouds.

How does this pan out with the ‘desktop OS’ developers?

Well. Microsoft itself is planning to refresh desktop OS's more frequently than in the past (Windows x details were leaked onto the Internet this year). Microsoft is also starting the move to Cloud offerings in partial/full form through its Azure offerings, amongst others to come. Microsoft Exchange Server, long the province of corporate IT, is itself being considered for 'hand-over' to Microsoft in the form of Exchange Hosted Services (EHS). This of course leads to the question of whether there are other email/collaboration technologies that can be used. Microsoft is embracing this sea-change after a fashion. It does not really have a choice anymore!

It looks increasingly as if change is going to be the new norm. Change is good – and the ability to rapidly change and reconfigure resources is a fundamental competitive advantage in an ever more dynamic cyber-verse.

Essentially, the change to an innovation stream starts to focus organizations internally on features and capabilities they value - not which version of a desktop OS they are installing next. The capability set essentially underpinning their varied business needs is identified and pursued.

In the move to the virtual desktop, this starts to yield real benefits in a very lean composed desktop (separated user profiles, applications, base OS). Initially we had the first wave of this in the form of server based computing models simply shipping out a shared Windows desktop surface. This was inflexible and required great operational control to ensure adequate features for all users (e.g. Citrix MetaFrame/Presentation Server/XenApp, Microsoft Terminal Services/RDS). This model still has its place in organizations today.

Virtual desktops in comparison, being wholly independent of other users' workspaces, allow a greater level of flexibility: users can continue to be productive in traditional ways, innovate and indeed generate new methods of working. This wave seems to be making a home for itself in the Private Cloud. Offerings such as the Vblock support nearly 10,000 concurrent virtual desktops. This is unprecedented in a single offering. These desktops can be created for all users in seconds or minutes from scratch, and remain always patched, protected, and available 24 hours a day, accessible from anywhere! The level of control for Corporate IT and the level of freedom for users are a real boon in management terms.
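
To illustrate what 'created for all users in seconds or minutes from scratch' looks like, here is a minimal sketch against a hypothetical provisioning API – the VDIClient class and its methods are invented stand-ins, not the interface of VMware View, XenDesktop or any other broker:

```python
# Minimal sketch of pool-based desktop provisioning from a golden template.
# 'VDIClient' and its methods are hypothetical stand-ins for a real broker API.

class VDIClient:
    def clone_from_template(self, template: str, name: str) -> str:
        # A real implementation would call the VDI broker / hypervisor manager.
        return f"{name} cloned from {template}"

def provision_pool(client: VDIClient, template: str, users: list[str]) -> list[str]:
    # Every user gets an identical, freshly patched desktop derived from
    # one maintained golden image.
    return [client.clone_from_template(template, f"vd-{user}") for user in users]

if __name__ == "__main__":
    desktops = provision_pool(VDIClient(), "win7-golden-image",
                              ["alice", "bob", "carol"])
    print(desktops)
```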

We are seeing in parallel the rise of ‘Platforms and Applications as-a-service’ models in full swing on the Internet. Indeed it is possible to get a pre-purposed virtual desktop with the latest greatest Windows 7 (or Linux, Apple OS etc.) as a complete remote service.

Extend this further to the application stack above the OS, and we start to see exponential gains in manageability and long-term sustainability in terms of user-experience and operations. This is being felt in the wake of offerings such as Salesforce.com. This in turn is being extended to corporate applications being built on these platforms. There is choice here with Google, Microsoft, Amazon and others providing similar capabilities. The speed of building new business applications is remarkable in that the time-2-value has shrunk drastically! Good for consumers and definitely good for business!

We haven’t yet talked about how this desktop is consumed. Ever more capable devices are emerging (netbooks, tablets, iPad, iPhone, smart phones etc.) finding captive audiences initially using these virtual desktops for private purposes, and over time morphing to fully-fledged personal productivity assets equally capable of being plugged in at ‘work’!

This brave new world indicates a net movement away from stuffy, large desktop OS deployments on the narrow palette of PC/notebook hardware that organizations are typically still working with.

The consumer experience is driving the need for change within organizations. Organizations everywhere are waking to the clamor of their own users wanting a better experience in the digital workplace (after all they can easily afford a better experience as a consumer – so why can’t the firm do it!).

So why is all this important?

Well, if innovation is the lifeblood of an organization, then all the available means to 'spark innovation' should be exploited. By reframing the traditional desktop OS deployment approach, an organization may be able to fundamentally change the digital workplace.

There are plenty of examples of companies working to redesign office layouts, use more capable telephony-over-IP, and manipulate light and environmental conditions to put the brain 'in-a-better-state-of-mind' ;-). These approaches are working (back in 2007 this is how things were – Google Headquarter - Amazing Work Place 9/19/07)! Why would we not do the same for the 'desktop operating system'?

Thinking about the long-term transformation of an organization, every person has at least one good idea in them. That idea may be the one that drives your industry for the next decade. Well, is that not worth putting a little more thought into the Windows 7 migration?

Is it not worth thinking about virtualizing your applications? Is it not worth thinking about how the jump to the Cloud will be made for desktops? Does it not make sense to virtualize now to allow some/all of those benefits to stream into an organization?

Some careful thinking now – moving away from the traditional administrator/IT-group worldview of 'we're rolling out yet another desktop OS, the time is not right for Cloud, there's no other way' – and keeping your eye firmly on the 'big picture' will invariably be a sure bet!

Cloud is here today! The desktop is a prime candidate to consider for mass virtualization, and a complete rethink about ‘desktop+applications’ should be on the Corporate IT radar!

 


-------------------------------------------------------------------------------------------------

BTW - this was written in Germany, connected to a virtual desktop hosted in Ireland, through a home ADSL link, using virtualized applications located someplace in America, and finally posted on the blog which is located... it does work well :-)!

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC.


Mayday, Mayday, Cloud under attack!

Imported from http://consultingblogs.emc.com/ published August 1 2010

In several recent discussions with security groups and agencies within the private and public sectors, the issue of security has of course come up. There are several facets to security, but the specific issues raised were 'breaking out of the hypervisor' that facilitates virtualization, and zero-day attacks. These are certainly points to consider amongst the euphoria surrounding Cloud models.

Cases such as the Google attack in January 2010 reiterate the need for vigilance in the large-scale constructs known as Public Clouds. That Google was able to determine the attack vectors and attack surface indicates a high level of skill and diagnostic tool capability. However, note that this was during and after the attack, not before.

One of the reasons that many organizations would like to go Private Cloud, ahead of the expounded benefits of the Public Cloud model, is exactly the sense that their data and systems are secure behind their own perimeter networks. Make no mistake; Public Cloud providers also have perimeter networks, firewalls and DMZs (demilitarized zones) as thick as picket fences around their infrastructure. Nonetheless, much of this is rendered useless, or at least of limited value, when an innocuous eMail or PDF document is sent to a user, who unwittingly unleashes ‘the Mother of all Malware!’.

These types of attacks can subvert the ‘goodness’ of the Cloud to absolutely insidious uses – imagine millions of systems in a Cloud Provider environment attacked, subverted, and then infiltrating other Clouds and consumer PCs – a veritable storm of trouble!

It is important to note that the attacks themselves are not always sophisticated, but by using techniques commonly employed to protect environments and data, such as encryption, malware/botnets can also start to cover their traces and make themselves 'protected'. Panda Labs' analysis of the Mariposa botnet is an example of unsophisticated but effective botnet activity. It is difficult to differentiate a program as being good or bad upon sight alone. A botnet can also be a benign entity in the form of a distributed program running across many different systems.

So what is happening in the Cloud to protect against such attacks?

Well actually, a lot of investment and research is going into precisely preventing or at least limiting the reach of these attacks. RSA, an EMC division, has its enVision suite that automates security event analysis and remediation across large networks. The ability to ‘perform parallel processing, correlation and analysis’ allows the security framework itself to use in parallel the same Cloud resources that are potentially under attack.

It should be clear that only software/hardware-enhanced programs running at this mega scale allow us to proactively manage the environment from a security and risk perspective. The RSA enVision flyer says it all. The ability to analyse and draw in 'tens of thousands of device log' entries in near real-time is of paramount importance. To this is added the ability to use policies and advanced heuristics to determine 'suspicious' behavior.
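
As a simplified illustration of the kind of correlation rule such suites automate (this is plain Python, not the enVision API), a policy might flag any host that accumulates too many failed logins across all collected device logs within a window:

```python
# Toy correlation rule: flag hosts with a burst of failed logins across
# many device logs. Real SIEM suites do this at far larger scale, in
# near real-time, combining many such policies and heuristics.
from collections import defaultdict

FAILED_LOGIN_THRESHOLD = 50  # per host, per window (illustrative value)

def suspicious_hosts(events):
    """events: iterable of dicts like {'host': ..., 'type': ...}."""
    failures = defaultdict(int)
    for event in events:
        if event["type"] == "failed_login":
            failures[event["host"]] += 1
    return [host for host, count in failures.items()
            if count >= FAILED_LOGIN_THRESHOLD]

sample = [{"host": "vm-042", "type": "failed_login"}] * 60
print(suspicious_hosts(sample))  # ['vm-042']
```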

There are other products and vendors out there offering functionality targeted at finding the proverbial 'needle-in-a-haystack' that points to an attack, and being able to lock this down asap. Network providers such as Cisco are offering network-level awareness of attacks, and literally locking down an attack's ability to use the network to propagate. Cisco calls this MARS – the Cisco Security Monitoring, Analysis, and Response System. Again, a very powerful tool that is integrated at all levels of the computing environment.

Effectively, even if a hypervisor has been compromised, the network itself would potentially be able to lock that node out of the entire network. This blocks the ability to propagate the botnet/malware further. VMware, Microsoft, Citrix, Novell, RedHat, Oracle, together with the myriad other hypervisor manufacturers, are actually pretty careful to ensure that their code is rock solid. The fact that dedicated hypervisors are focused on just the activity of providing virtualization means that they can be well locked down. There are many additional published best practices to further harden these systems.

VMware provides the VMsafe interfaces that allow virus-scanning engines to interface with the hypervisor itself and prevent infection before it reaches the virtual machine. Large-scale protection paradigms are now beginning to materialize. This agent-free approach starts to link into the policy-based engines that drive the clouds of today, such that new protection schemes, rules and filters can be applied literally across thousands of machines to stop zero-day attacks from gaining any form of ascendancy in the Cloud infrastructure.

The use of software firewalls that can process huge amounts of information, thanks to the increasing core density in the underlying infrastructure ('Multi-Core Gymnastics in a Cloud World'), allows virtual machines to be protected on a per-VM basis. This actually provides higher protection than the physical constructs they replace. Policy-driven engines can update firewall rules across millions of virtual machines in a very short space of time.
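
A minimal sketch of the policy-driven idea (again with an invented PolicyEngine rather than any vendor's actual API): the rule change is expressed once and fanned out to every VM in the affected group.

```python
# Sketch of policy-driven per-VM firewall updates: define the rule once,
# apply it to every member of a policy group. The classes are illustrative,
# not a real vendor API.

class VirtualMachine:
    def __init__(self, name: str):
        self.name = name
        self.firewall_rules: list[str] = []

class PolicyEngine:
    def __init__(self, vms: list["VirtualMachine"]):
        self.vms = vms

    def push_rule(self, rule: str) -> int:
        # In a real cloud this fan-out is what lets one change reach
        # thousands (or millions) of VMs in a short space of time.
        for vm in self.vms:
            vm.firewall_rules.append(rule)
        return len(self.vms)

pool = [VirtualMachine(f"desktop-{i}") for i in range(10_000)]
updated = PolicyEngine(pool).push_rule("block tcp/445 from untrusted zones")
print(f"rule applied to {updated} virtual machines")
```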

Other operating system enhancements, silicon-enabled security baked into the chip, protocol enhancements in IPv6, and widespread, easily accessible encryption are paving the way for a myriad of ways to protect systems and data, prevent unauthorized access, and provide detailed audit trails (privacy is still an issue here) to follow attacks to their source.

In conclusion, there are many existing and new technologies being integrated directly into the fabric of virtualization. This in turn is making security analysts far more efficient at pinpointing and locking down attack vectors en masse.

The best thing of all is the unprecedented level of integration into the hypervisor itself that allows these multi-layered defense mechanisms to be easily deployed and managed. So yes, Clouds can be safe places to conduct business – but caution still needs to be applied. A lot of the good common sense and expertise built up in organizations over time is still valid, and good procedures, design, implementation and operations remain keystones of safety.

The Cloud paradigm itself is finding ways of protecting itself. The side-effects of cloud usage are themselves of benefit. With the move to virtual desktops and servers, the ability for an organization to patch its systems frequently, without scheduling changes over months, has allowed one of the principal attack vectors – the compromised PC – to be protected.

That protection is gradually shifting down to the physical PCs: free virus scanners (better than none at all – yes, that still exists ;-), a reduced application software footprint, and the use of SaaS offerings with frequently updated protection filters are slowing down the spread of infection.

The Cloud paradigm, and the evolving security eco-system, indicate that large-scale infrastructure can protect itself. The Private Cloud still presents a very strong case for privacy, security and compliance as it is felt to be inherently secure. Clouds still need to be able to protect themselves and other Clouds at the same time. This stops/slows the movement of infection/malware until appropriate identification and removal countermeasures can be deployed. Get your feet wet in the Private Cloud first, security-wise, and then consider the Public Cloud. This is a perfectly valid Journey-Route to the Cloud!

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC.


Hollywood & NASA Paving the Way: The Rise of the Billion-Node Machine Part 2

Imported from http://consultingblogs.emc.com/ published August 1 2010

In a previous blog 'Planetary Cloud: The Rise of the Billion-Node Machine', I described a possible evolution of Public Clouds into Planetary IT complexes, where billions of nodes are available planet-wide to work on some of the most challenging problems of our times – what I affectionately called the Billion-Node Machine.

For those of you that think this is far in the future or just plain nonsense, think again! While the private sector is working on using Cloud technologies to take advantage of scale economies and effective working models to underpin IT strategy, there are some institutions on this planet already taking the leap forward into these billion-node futures.

NASA, a well-known pioneer around the world, has been collaborating with Microsoft to bring us the Worldwide Telescope project. The aim was to bring ‘Mars to our planet’ in the form of high resolution imagery processed through Nebula, the NASA cloud. There is an interesting video of the project background containing interviews with the engineers and designers.

Such projects are difficult to accomplish with 'normal' scaled Clouds – we are talking here about processing hundreds of terabytes in near real-time, and that is just the tip of the iceberg. If that imagery needs to be rendered in different formats for all the devices/formats available on the Internet, then this could be many times the size of the original imagery beamed to Earth from the Mars mission. NASA says there are '15,000 tera-pixel photos that are then mosaic-ed into a half billion PNG images via the Nebula cloud'.
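
A rough, purely illustrative sizing exercise – the numbers below are assumptions for the sake of the argument, not NASA figures – shows how rendering one large imagery set into several device-specific formats multiplies the volume to be processed and served:

```python
# Purely illustrative sizing: assumed numbers, not NASA's actual figures.
original_imagery_tb = 200   # assumed size of the source imagery set
device_formats = 6          # assumed number of target device/image formats

total_tb = original_imagery_tb * device_formats
print(f"~{total_tb} TB to render and serve ({total_tb / 1024:.1f} PB-ish)")
# With 200 TB of source data and 6 output formats that is roughly 1.2 PB -
# 'many times the original imagery size', as noted above.
```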

Nebula is interesting in that the needs of NASA are so extreme that extreme solutions are required to solve problems that are quite literally 'out of this world'. NASA started out trying to copy the Amazon EC2 Cloud infrastructure, but found it too limiting to accommodate their innovation needs and lacking the unrestrained free hand to change things as needed; the Ubuntu Enterprise Cloud technology was then used. However, this too was not open or scalable enough to support NASA's extreme needs. NASA is now working with Rackspace and others to create the OpenStack platform.

Just so we understand the scale NASA has in mind: NASA is working to build an infrastructure cloud spanning over '1 million physical machines and 60 million virtual servers'. That is a far cry from the thousands of virtual machines organizations are currently running. Even the Google infrastructure is small against that backdrop, at least for the moment.

This convergence of the HPC (High Performance Computing/Grid) world, where systems with a single system image of over 10,000 cores work simultaneously on a problem, with the Public Cloud as we know it – handling the typical computing/storage workloads of companies and consumers alike – is creating a new breed of Cloud. The Billion-Node Machine starts to take shape!

To see how that is being used in real life, one needs to look outside of typical corporate boardrooms. Some famous names apart from NASA are already taking advantage of such elastic and massively scalable environments. DreamWorks, the creator of such wonders as Shrek and Madagascar, is taking advantage of supercomputer-class resources at the New Mexico Computing Applications Center (NMCAC). This may supersede its own impressive infrastructure farms consisting of thousands of servers.

Key enabling infrastructure includes the LambdaRail optical network and the supercomputer Encanto, based on more than 3,500 Intel quad-core processors, 28TB of memory and 172TB of storage. That is over 14,000 cores able to work simultaneously on a problem – or sliced and diced as needed in real-time to host virtual machines. This is not the largest or most powerful supercomputer resource. Nonetheless, it costs around 2 million dollars annually just to power and cool this system. Well beyond the reach of most mere mortals.

Just to put those studio resources into a frame of reference that everyone has access to, take the creation of some of the animations produced over the last half-decade. Computers used in the development of the 2006 Pixar film Cars were four times faster than those used in The Incredibles (2004) and 1,000 times faster than those used in Toy Story (1995). To build the cars in the movie, the animators utilized compute platforms that one would use in the design of real-world automobiles - convergence!

Avatar, the precursor of the 3D film wave we are currently experiencing, was produced in many different formats for viewing in 2D/3D, and literally translated human movement into a realistic digital experience (instead of trying to copy human movement). This film required extensive computer tooling; the exact mix is of course secret ;-) Such films were simply impossible to create even a few years ago.

The billion-node machine allows other ideas to see the light of day. It is an innovation catalyst, a tool to allow people to flex their minds on scales previously impossible (not unimaginable though). A chance to bring computing resources to every point on the globe, and have consumers/users literally only needing a connection and a display/input tool – even the need for powerful local CPUs is bypassed.

If the dreams that are films are being created, rendered and produced in multiple formats on the fly in such a way, then perhaps the future will belong to advanced digital technologies presented as life-like holograms, all powered by a billion nodes with real-time movement.

Perhaps with such resources, each household will have its own virtual datacenter that can grow/shrink as needed. Add to that advances in artificial intelligence and voice recognition, and perhaps multiple holographic images can be created that are able to interact in real time, with memos dictated in real-time, bypassing crude email.

This could be used as a full-featured telepresence system, projecting millions of people around the world into meetings without them needing to fly or drive to the location. That is a far cry from the crude telepresence systems we currently have. If those holograms could also move physically around buildings, and the imagery be sent back to the person being projected, then that is as good as being there (Emperor Palpatine as a hologram on a walking robot in Star Wars: The Phantom Menace – it was good enough for him).

Perhaps that is the 'killer app' for the billion-node machine. The ability to code/develop/share ideas literally face-2-face is an amazing competitive advantage, well beyond simple outsourcing. Perhaps the costs of using local talent are not as high as currently thought – outsourcing may disappear as a term in general use. We don't have the matter transporter yet, but this is pretty close to that experience!

Many have seen the holodecks in Star Trek: The Next Generation/Voyager (with Data enjoying a poker game with Albert Einstein, Sir Isaac Newton and Stephen Hawking), or the amazing utilization in Minority Report (2002), which had a live demo of some of its technologies in 2010! This is really cool, and great for understanding the critical role of user interfaces!

I used the word 'perhaps' very often as that is the space where innovation and technology cusps meet. Are these ideas really so far-fetched when that level of computing power starts to become available? The disruptive market effects are already being felt with NASA undertaking support of private sector joint initiatives with the Worldwide Telescope – looking 'out there' as it were.

What will happen when this machine is turned to looking inside – the atomic level of reality? What will happen in the gene technology worlds, and its complementary effects with nanotechnology? Could such technology have helped in cleanup efforts from the BP oil leak in the Gulf of Mexico?

The creation and adoption of billion-node constructs and planetary IT specifically need to be speeded up. The lack of processing power should no longer be an innovation inhibitor - and some problems require everyone's help. Let's get some global economies of scale in here.

EMC is doing its bit by stimulating discussions and idea submission of the practical uses of such constructs or their underlying technologies - what we call Innovation Showcase. It doesn't matter how wild they are or where they come from! The whole IT industry is in a frenzy of innovation where the vision of the Cloud is being translated into innovative and practical solutions across industries and borders - Global Innovation!

Things are starting to get seriously interesting. It is well worth watching what the pioneers, the early-adopters at this planetary scale, are doing.

It provides insight into how they are tackling the processing and handling of petabytes of information in near real-time, the distribution and visualization of such volumes of data to extract information, and how their actions are affecting the wider Cloud technology markets and vendor maneuvering.

Perhaps when space agencies start to cooperate at the level of their IT processing capabilities, then we may see the first billion-node machine emerge! The race is officially on!

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC.