
The Shape of Things to Come!

A lot of what I do involves talking with thought leaders from organizations keen to transform how they do business. In some cases they simply echo general industry lines or marketing; in others, real innovative thought is taking place. I firmly believe innovation starts with questioning the status quo.

We are bombarded on one side with Intel x86 as the ultimate commodity processor, offering everything one could possibly imagine, and on the other with public cloud as the doom of in-house IT. It is incumbent on all of us in this industry to think beyond even the cloud as we know it today.

Questioning "Datacenter Wisdom"

This blog entry is entitled The Shape of Things to Come with a clear series of ideas in mind:

  • Systems-on-a-Chip (SoCs) are getting very powerful indeed. At what point do they become so powerful that they represent the same order of magnitude of capability as an entire hyperscale datacenter from Google or Amazon with a million machines inside?

  • Why does in-house IT have to move out to the cloud? Why could hyperscale clouds not be built up from capacity that organizations are already putting in place? This would be akin to the electricity grid acting as the transport for capacity generated by multiple providers; capacity could be borrowed within an industry or across industries.

  • Why disaggregate all components at the physical datacenter level (CPU, RAM, storage, networking, etc.) rather than build assembly lines of appliances/constructs that are hyper-efficient at a particular task within the enterprise portfolio of services and applications?

  • Why are servers still built in the same form factor of compute, memory, networking and power supply? Indeed, why are racks still square, and why is datacenter space management almost a two-dimensional activity? When too many people live in a limited space, we tend to build upwards, with lifts and stairs to move them. Why not the same for the datacenter?

I'm not the only one asking these questions. Across the industry, the next wave of these concepts is slowly taking physical shape. I wanted to share some examples of industry insight to whet the appetite.

  • At Cornell University, a great whitepaper on cylindrical racks using 60 GHz wireless transceivers for intra-rack interconnects shows a massively efficient model for ultrascale computing.

  • Potentially, the server container would be based on a wheel, with servers as cake-slice wedges plugged into a central tube core, and the wheels stacked vertically. Although the authors suggest wireless connectivity, there is no reason why the central core of the tube could not also carry power, networking and indeed coolant. The entire tube could even be made to move up and down - think of tubes kept in fridge-like housings (as in the film Minority Report!).

  • One client suggested that CPUs should be placed into ultra-cooled trays, using the material of the rack itself as the conductor and transport to other trays full of RAM. We already do this with hard disks in enclosures, and indeed Intel already does 3D chip stacking!
    • Taking the Intel 22nm Xeons with 10 cores or indeed Oracle's own SPARC T5 at 28nm and 16 cores as building blocks
    • A 2U CPU tray would allow, say, 200 such processor packages - an enormous capability! For the SPARC T5 this would be 3,200 cores, 25,600 threads and roughly 11 THz of aggregate clock (see the sketch after this list).
    • Effectively, you could provide capacity on the side to Google!
    • A RAM tray would allow you to provide 20 TB+ depending on how it is implemented (based on current PCIe-based SSD cards).
  • Fit-for-purpose components for particular workloads, operated as general assembly lines within an organization, would fit in well with the mass-scale concepts that the industrial and, indeed, digital revolutions promoted.
    • If we know that we will be persisting structured data within some form of relational database, then why not use the best construct for that? Oracle's Engineered Systems paved the way for this approach.
    • Others are following with their own engineered stacks.
    • The key point is the tuning of all components and the software to a specific task that will be used for years to come!
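
To make the arithmetic behind the CPU tray bullet explicit, here is a back-of-the-envelope sketch. The 200-package 2U tray is purely an assumption from the bullet above; the per-chip figures (16 cores, 8 hardware threads per core, 3.6 GHz) are the published SPARC T5 numbers.

    // Back-of-the-envelope sketch of the hypothetical 2U CPU tray described above.
    // The 200-package density is an assumption; the per-chip figures are SPARC T5 specs.
    public class CpuTrayEstimate {
        public static void main(String[] args) {
            int packagesPerTray = 200;   // assumed packing density for a 2U tray
            int coresPerPackage = 16;    // SPARC T5 cores per chip
            int threadsPerCore  = 8;     // SPARC T5 hardware threads per core
            double clockGhz     = 3.6;   // SPARC T5 clock speed

            int totalCores      = packagesPerTray * coresPerPackage;  // 3,200 cores
            int totalThreads    = totalCores * threadsPerCore;        // 25,600 threads
            double aggregateThz = totalCores * clockGhz / 1000.0;     // ~11.5 THz

            System.out.printf("Cores: %,d  Threads: %,d  Aggregate clock: %.1f THz%n",
                    totalCores, totalThreads, aggregateThz);
        }
    }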

So the technical components of this radical shake-up of the datacenter are materializing. We haven't even started to talk about some of the work happening in materials science, promising unparalleled changes in CPUs (up to 300 GHz at room temperature) or non-volatile RAM totally replacing spinning disk and possibly SSD and DRAM.


Why is this important for the CIO, CTO & CFO?

Customers typically ask whether they should move everything out to cloud providers such as Google/Amazon or private cloud hosters such as CSC/ATOS/T-Systems. Looking at the nexus of technological change that is almost upon us, I would say it makes sense to evaluate the mix of on-premise and off-premise resources.

The cloud is effectively a delivery model - some applications, such as email, can clearly live in the public cloud, privacy issues notwithstanding. However, the capabilities an organization needs to thrive and exploit market forces, as expressed in its Enterprise Architecture, can be delivered in other ways.

  • Server virtualization relies on workloads not using all the resources of a physical server. You should be questioning why the software, the most expensive component, is not being used to its maximum. Solving server acquisition costs alone does not reduce your costs in a meaningful way.

  • Entertain the idea that, with acceleration at every level of the stack, information requests may be serviced in near-real time! The business should be asking what it would do with that capability. What would you do differently?

  • Datacenter infrastructure may change radically. It may well be that the entire datacenter is replaced by a vertically stacked tube that can do the job of today's football-field-sized facility. How can you exploit assembly-line strategies that are already starting to radically reduce the physical datacenter estate? Oracle's Engineered Systems are one such approach for certain workloads, replacing huge swathes of racks, storage arrays and network switches.

  • Verify whether the notion of the desktop is still valid. If everything is accessible with web-based technologies, including interactive applications such as Microsoft Office, then why not proactively make virtual desktops obsolete and simply provide viewing/input devices for those interactive web pages?

  • Middleware may well represent a vastly unexplored ecosystem for reducing physical datacenter footprints and drastically reducing costs.
    • Networking at 100+ Gbps already enables delivering your applications - effectively web-powered, interactive desktops - to users' viewing devices wherever they are.
    • Use intra-application constructs to insulate applications from the technical capability below. Java applications have this feature built in, being cross-platform by nature; this is a more relevant level of virtualization than virtualizing the entire physical server (see the sketch after this list).

  • Security should be enabled at all layers, not reliant on some magic from switch vendors in the form of firewalls. It should live in the middleware platforms, supporting application encapsulation techniques, as well as within the pools of data persistence (databases, filesystems, etc.).
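
As a minimal illustration of the intra-application insulation mentioned in the middleware bullet above (the class name is mine; the system properties are standard JDK ones): the same compiled bytecode runs unchanged on SPARC/Solaris, x86/Linux or anything else with a JVM, and only discovers at runtime what it happens to be sitting on.

    // Minimal illustration of application-level insulation: compile once, run the
    // same bytecode on any platform with a JVM; the JVM absorbs the differences.
    public class PlatformProbe {
        public static void main(String[] args) {
            System.out.println("OS:             " + System.getProperty("os.name"));
            System.out.println("Architecture:   " + System.getProperty("os.arch"));
            System.out.println("JVM:            " + System.getProperty("java.vm.name"));
            System.out.println("Available CPUs: " + Runtime.getRuntime().availableProcessors());
        }
    }
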
Enterprise architecture is fueling a new examination of how the business defines the IT capabilities it needs to thrive and power growth. It points to a greater reliance on data integration technologies, speed to market and, indeed, the need to persist greater volumes of data for longer periods of time.

It may well be incumbent on the CIO/CTO/CFO to pave the way for this brave new world! They need to be ensuring, already, that people understand that what is impossible now, technically or financially, will sort itself out. The business needs to be challenged on what it would do in a world without frontiers or computational/storage limitations.

If millions of users can be serviced per square - or rather round - meter of datacenter space using a cylindrical server tube of wedges/slices, why not do it? This is not the time for fanatics within the datacenter who railroad discussions towards whatever they are currently using, or who offer the universal answer of "server virtualization from VMware is the answer - now what was the question?".

Brave thinking is required. Be prepared to know what to do when the power is in your hands. The competitive challenges of our time require drastic changes. Witness what is happening in the financial services world, with traders being replaced by automated programs. This requires serious resources, and changes in technology will allow it to be performed effortlessly, with the entire stock market's data kept in memory and a billion risk simulations run per second (a small illustrative sketch follows).
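
As a small, purely illustrative sketch of that kind of workload - not anyone's production risk engine - here is a brute-force Monte Carlo value-at-risk estimate over data held entirely in memory. The portfolio value, return statistics and simulation count are invented for the example.

    import java.util.Arrays;
    import java.util.Random;

    // Illustrative only: a Monte Carlo one-day value-at-risk estimate computed over
    // in-memory data. All figures are invented; only the brute-force, in-memory
    // simulation technique reflects the point made above.
    public class InMemoryRiskSketch {
        public static void main(String[] args) {
            double portfolioValue  = 10_000_000.0; // assumed portfolio value
            double meanDailyReturn = 0.0005;       // assumed mean daily return
            double dailyVolatility = 0.02;         // assumed daily volatility
            int simulations        = 1_000_000;    // scale up as the hardware allows

            Random rng = new Random(42);
            double[] simulatedPnl = new double[simulations]; // kept entirely in memory
            for (int i = 0; i < simulations; i++) {
                double shock = meanDailyReturn + dailyVolatility * rng.nextGaussian();
                simulatedPnl[i] = portfolioValue * shock;
            }

            Arrays.sort(simulatedPnl);
            double var99 = -simulatedPnl[(int) (simulations * 0.01)]; // 99% one-day VaR
            System.out.printf("99%% one-day VaR: %.0f%n", var99);
        }
    }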

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by my current employer and does not necessarily reflect the views and opinions of my employer.
