Tera…well Everything Actually! (Part 2 - the Network – slower than the speed of light!)

Imported from http://consultingblogs.emc.com/, published September 18, 2010

In my last blog I described the fantastic things happening in the server world that are underpinning and driving consolidation to levels previously unseen. This technological push towards densities that can support millions of concurrent users will happen at every possible level of the infrastructure. In this post, I want to highlight a couple of areas of networking innovation related to virtualization.

What’s Been Happening with Light?

This is a difficult area to cover fully, so I will look at milestone breakthroughs that are indicative of the networking cusp we are on in relation to virtualization and the Cloud.

So back in 2003 the following was predicted for Ethernet evolution:

· 10 Megabit Ethernet 1990 (Invented)

· 100 Megabit Ethernet 1995

· 1 Gigabit Ethernet 1998

· 10 Gigabit Ethernet 2002

· 100 Gigabit Ethernet 2006 (Predicted)

· 1 Terabit Ethernet 2008 (Predicted)

· 10 Terabit Ethernet 2010 (Predicted)

Fast forward to 2007, and we start to see some news on extreme networking:

· 11.38 Pbit/s (Petabit per second) aggregate capacity in LucasFilm’s datacenter

· Hollywood, with its focus on speed and revenue, continues to push the ‘need for speed’ outer envelope.

· Intel develops 40Gbps photonics in silicon potentially providing massive networking capability at affordable prices for all!

Fast forward closer to our own time (2009-2010):

· Bell Labs (Sep 2009 - this time last year) with its proof of concept of over 100 Petabit per second.kilometer (a capacity-times-distance figure), potentially allowing the equivalent of 400 DVDs per second to be sent over 7,000 kilometers (a quick sanity check of these numbers follows this list)

· Intel getting to 50Gbps through combining wavelengths

o Now in 2010, these two inventions are coming together with silicon photonics. We’ve been talking for many years about research to ‘siliconize’ photonics, and until now all these breakthroughs have been at the device or component level. What we’ve announced today is that, for the first time, we have an integrated silicon photonics transmitter using hybrid silicon lasers that’s capable of sending data at 50 gigabits per second (Gbps) across an optical fiber to an integrated silicon photonics receiver chip, which converts the optical data back into electrical form.

· Intel goes further, stating that:

o CTO Justin Rattner identified the problem facing everyone today: the copper wire used in network cabling is pushing its maximum speed limit when it goes beyond 10 gigabits per second (Gbps), and the faster the data is transmitted, the shorter the distance it travels. 1 Gigabit Ethernet has a maximum distance over copper cable of about 100 meters, while 10GbE can travel a maximum of 55 meters.

· Converting light back into electricity, meanwhile, seems to be slowing our ability to create fully light-interconnected computing systems (instead of all those cables in a server!). Well, there is progress on that front as well:

o Researchers have been able to harness ‘slow light’ using non-exotic materials with a little help from nanotechnology (‘room temperature and no fancy cooling’). This paves the way for potential light computers that have no cables inside, using light end-to-end!

o ‘Laser-silicon chips’ may well be the future of the CPU, allowing chip-to-chip transfers within a server or even between servers in another geography!
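
A quick back-of-the-envelope check helps put the Bell Labs figures above into perspective. This is only a rough sketch: it assumes a 4.7 GB single-layer DVD and ignores protocol and coding overhead.

```python
# Sanity check of the "400 DVDs per second over 7,000 km" figure.
# Assumes a 4.7 GB single-layer DVD; protocol/coding overhead is ignored.

DVD_BYTES = 4.7e9        # single-layer DVD capacity in bytes (assumption)
DVDS_PER_SECOND = 400    # figure quoted in the announcement
DISTANCE_KM = 7_000      # transmission distance quoted

throughput_bps = DVDS_PER_SECOND * DVD_BYTES * 8       # bits per second
capacity_distance = throughput_bps * DISTANCE_KM       # bit/s x km product

print(f"Throughput        : {throughput_bps / 1e12:.1f} Tbit/s")
print(f"Capacity-distance : {capacity_distance / 1e15:.0f} Pbit/s*km")
# -> roughly 15 Tbit/s, i.e. on the order of 100 Petabit per second.kilometer
```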

The writing seems to be on the wall for copper where networking is concerned. However, there seem to be issues concerning the price of optics. Well, the industry is moving forward on that front as well. The main issues actually have to do with the handling of this rather fragile glass-based fiber optic material. So along comes some assistance in the form of ‘Plastic Optical Fiber’. This is not a panacea for everything, but it starts to make those ‘last mile’ connections easier to do, thereby bringing optical technologies into homes and businesses.

Indeed, European research is also strongly focused on bringing this promising technology into mainstream use to overhaul aging infrastructures. This in turn makes the ability to pump huge volumes of light-based traffic across these networks of paramount importance.

What does this all Mean for Virtualization and the Cloud?

We are in 2010 and most firms are just beginning to come to grips with 10GbE. The predictions above perhaps underestimated the massive upheavals in bandwidth needs driven by the Internet, globalization, virtualization technologies and the Cloud. As I indicated in my last blog, standard x86 servers are approaching 64+ cores and 2TB of RAM in a single machine. This will mandate a corresponding increase in the bandwidth available per server.

This bandwidth needs to be delivered in a protocol-independent and compact manner. Fifty cables per server is not tenable in Cloud environments. Nor is the need to have separate cables for storage, backup, management and separate LAN segments per server. This simply will not scale.

Copper-based networking is the interconnect of yesterday. It needs to be replaced with fiber optics. Virtualization really helps in this regard. Just as the need for many physical servers has been reduced, the virtualization hosts running the hypervisors need fewer cables to carry traffic to and from the virtual machines. Just two 10GbE cables per server will do!
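
To illustrate why two cables can be enough, here is a rough sketch comparing a traditional multi-cable server layout with a converged pair of 10GbE links. The legacy NIC and HBA counts below are purely illustrative assumptions, not a specific reference design.

```python
# Illustrative per-server cabling comparison before and after convergence.
# The legacy cable counts are hypothetical assumptions for illustration only.

legacy = {
    "1GbE LAN":        (4, 1),   # (number of cables, Gbit/s per cable)
    "1GbE backup":     (2, 1),
    "1GbE management": (1, 1),
    "4Gb FC storage":  (2, 4),
}

converged = {
    "10GbE carrying LAN + FCoE storage": (2, 10),
}

def summarise(layout):
    """Return total cable count and aggregate bandwidth in Gbit/s."""
    cables = sum(count for count, _ in layout.values())
    bandwidth = sum(count * speed for count, speed in layout.values())
    return cables, bandwidth

for name, layout in (("Legacy", legacy), ("Converged", converged)):
    cables, bandwidth = summarise(layout)
    print(f"{name:9s}: {cables} cables, ~{bandwidth} Gbit/s aggregate")

# Legacy   : 9 cables, ~15 Gbit/s aggregate
# Converged: 2 cables, ~20 Gbit/s aggregate
```

Fewer cables, more aggregate bandwidth and one converged fabric to manage, which is exactly the trade the FCoE discussion below is about.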

How is this need playing out? Well, we see the push to FCoE (Fibre Channel over Ethernet), which allows storage-oriented traffic to be sent over the same physical cable as TCP/IP and other protocols. The pipe is getting big enough to carry LAN and storage traffic together. This is the shape of things to come. If fiber optic cabling is used, an organization is going to be able to do a lot more with light than in the past, at acceptable cost levels. Those two cables start to become very important at this point.

The cabling problem is not just at the server end. Networking traffic needs to be shunted around, and this puts increased pressure on edge and core switches. 100GbE will allow a further reduction in the number of cables in Cloud datacenters. It is also fast enough to allow remote datacenters to be connected as if they were collocated. Juniper, Brocade and Cisco are all jumping on the 100GbE bandwagon. The need is there, and they are responding.

What we are seeing is the evolving ability to manipulate the fastest transmission media we know of, namely light, to support not just interconnections of networks, but also the ability to burrow into what we call the computer today to exponentially increase capabilities.

Why is this important for the virtualization industry and the CIO?

As virtualization capabilities increase, supporting ever more processor cores and RAM, the network has needed to catch up. It is doing so, and in certain networks there is now a real need to think about Petabit-scale switches and routers.

This increasing speed is also changing how we think about that ‘other datacenter over there on the other side of the country’. Where light is concerned, nothing on this planet is really far away! After all, it takes just over 8 minutes for light from the Sun to reach Earth, a distance of roughly 93 million miles, while the Earth is only about 25,000 miles around at the equator. We are talking about near-instant communication to any point on this planet.
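
To put some rough numbers on that, here is a small sketch of one-way propagation delay for light in optical fiber. It assumes light travels at roughly two-thirds of its vacuum speed in glass, and the route distances are approximate illustrative figures.

```python
# Rough one-way propagation delay for light in optical fiber.
# Assumes light travels at ~2/3 of c in glass; distances are approximate.

C_VACUUM_KM_S = 299_792                # speed of light in vacuum, km/s
C_FIBER_KM_S = C_VACUUM_KM_S * 2 / 3   # effective speed in fiber

routes_km = {
    "Across a metro area":          100,
    "London to Frankfurt":          640,
    "New York to Los Angeles":    3_940,
    "Halfway around the Earth":  20_000,
}

for route, km in routes_km.items():
    one_way_ms = km / C_FIBER_KM_S * 1_000
    print(f"{route:26s}: ~{one_way_ms:5.1f} ms one way in fiber")

# Even the longest hop comes in around a tenth of a second:
# tiny on a human timescale, though not zero for chatty protocols.
```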

So cable convergence, multiple protocols over the same physical media, massive bandwidth availability and ever more capable virtualization engines, coupled with massive deployments in Clouds, are the order of the day. That is happening now, and it pays to take these trends into account, informing the architectural decisions currently being made by IT staff in organizations. CIOs should be on the lookout not just for incremental consolidation and cost savings, but for massive consolidation, even where some cost is incurred to jump to that desired state. Don't settle for dated thinking on IT. Insist on consolidation!

So what can be done now? What, in my view, should be encouraged in the datacenters of all organizations, regardless of size? Well, looking at what the ICT market has imminently in its pockets:

- Move to achieving deep Server Virtualization of all workloads on low cost x86 platforms – a virtualization-first policy on x86 platforms will help here!

o As EMC’s VP Chuck Hollis says:

§ ‘Adopt a “virtualization first” policy. Don’t invest a single dollar in the physical legacy unless all alternatives have been exhausted. Make sure that every piece of technology that lands on the data center floor moves you closer to the goal, and not away from it. No U-turns.’

- Move to 10GbE in the Datacenter end-to-end, or at least ensure that virtualization servers are connected this way. (I can highly recommend Brad Hedlund’s excellent blog on all things networking – VMware 10GE QoS Design Deep Dive with Cisco UCS, Nexus – describing what Cisco, EMC and VMware are doing with 10GbE technologies.)

- Start to move to fewer cables – virtualization will support this, as amply shown in the VBlock, which uses 10GbE throughout with FCoE for storage. This in turn will automatically fuel the move to fewer datacenters and less power and space being needed. Those are side effects of virtualization and a BIG win for CIOs everywhere.

- Move to fiber optics. That is going to be the only medium that will support the Terabit and Petabit networking revolutions. In the datacenter environment there are costs associated with this move. However, needing less of everything should facilitate and justify the move to optics in at least the virtualization infrastructure.

- Simplify the networks. The traditional school of thought regarding separate cables per network needs to be replaced with the idea of trunks carrying all traffic over fewer physical cables. Software policies, virtual software-based firewalls (e.g. VMware vShield) and governance/compliance-enhanced networking policies will ensure that networking traffic is going where it should.

- For highly secure environments, virtualization can also add some special abilities allowing such environments to be hosted in a private cloud. However, this is a topic for another time and should not be the first environment dealt with - still just some thoughts on that topic:

o Regarding firewall security in a virtual environment, read the VMworld 2010 announcement of vShield Edge/App/Endpoint products, extending security to offloading of anti-virus scanning

o Bear in mind also that Intel recently announced the acquisition of McAfee. How long before malware detection and virus scanning are embedded in Silicon?

Get used to using the minimum number of cables right now. Bandwidth is not going to be an issue going forward, and some planning can be done today!

CIOs should be driving this agenda for IT! The business dynamics are accelerating, and IT needs to be able to get things done rapidly (hours, not days and weeks).

This is going to pay off in non-IT costs as well. Project managers can bring in business change more rapidly – no more wrangling with IT about what they can’t do and what is impossible. It’s all possible until proven otherwise.

Developers can start to evolve their ideas as soon as they dream them up. Environments can be erected and torn down as and when needed to foster innovation. Mergers and acquisitions of IT infrastructure should be possible at the press of a button. New platform versions, such as databases and messaging/collaboration environments, should be possible to roll out in weeks, not months or years!

The technology is out there right now. VMworld 2010 was a fantastic showcase for what organizations around the world are able to do with VMware virtualization products. That should be proof enough that all organizations should be making the 100% push to virtualization. The technology is mature and only getting better. All enterprise software stacks are supported. In the Private Cloud, security best practices can also be applied to ensure that organizational data remains private. This is being extended to Public Clouds also.

CIOs, as agents of business change, are ideally placed to set the vision for the corporate IT agenda and ensure that all pipeline projects and future initiatives capitalize on the rock-solid foundation of virtualization and the Private Cloud. This is the only way to achieve deep, sustainable business cost effectiveness while ensuring that future business innovation is not jeopardized.


The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC.

