Paradigm Shift

Datacenter Rack-scale Capability Architecture (RCA)

For the last couple of years a recurring theme has been cropping up regarding the disaggregation of components within the rack to support hyperscale datacenters. Back in 2013 Facebook, as a founder of the Open Compute Project, and Intel announced their collaboration on future data center rack technologies.

This architecture basically addresses the type of business that Facebook itself runs and as such is very component focused: compute, storage and network components are disaggregated across trays, with the trays interconnected by a silicon photonics rack-internal network fabric.

For hyperscale datacenters this has the advantage of modularity, allowing components such as CPUs to be swapped out individually as opposed to replacing the entire server construct. Intel gave an excellent presentation on the architecture at Interop 2013, outlining its various advantages. The architecture is indeed being used by Baidu, Alibaba Group, Tencent and China Telecom (according to Intel).

This in itself is not earth-shattering, but it seems to lack the added "magic sauce". As it stands this is simply a re-jigging of the form factor; in itself it does not do anything to really enhance workload density beyond consolidating large numbers of components in the physical rack footprint (i.e. more cores, RAM and network bandwidth).

Principally it is aimed at reducing cable and switch clutter and the associated power requirements while improving upgrade modularity; essentially it increases the compute surface per rack. These are fine advantages when dealing with hyperscale datacenters, as they represent considerable capital expenditure, as outlined in the Moor Insights & Strategy paper on the subject.

Tying into the themes in my previous blog regarding the "Future of the Datacenter", there is a densifying effect taking place that affects the current datacenter network architecture, as aptly shown in the Moor study.

Examining this architecture, the following points stand out:

  • Rack level architecture is essential in creating economies of scale for hyper-scale and private enterprise datacenters
  • East-West traffic is coming front-and-center whilst most datacenters are still continuing to invest in North-South networks with monolithic switch topologies
  • Simply increasing the number of cores and RAM within a rack does not itself increase the workload density (load/unit)
  • Workload consolidation is more complex than this architecture suggests, as workloads utilize multiple components at different times under different loading
  • Many approaches are already available using an aggregation architecture (HP Moonshot, Calxeda ARM Architecture, even SoCs)

There is a lot of added value to be derived for an enterprise datacenter using some of these "integrated-disaggregated" concepts, but competing with and surviving in spite of hyperscale datacenters requires additional innovative approaches to be taken by the enterprise.

Enterprises that have taken on board the "rack as a computer" paradigm have annealed this with capability architecture to drive density increases up to and exceeding 10x over simple consolidation within a more capable physical rack:

  • General purpose usage can be well serviced with integrated/(hyper)converged architectures (e.g. Oracle's Private Cloud Appliance, VCE Vblock, Tintri, Nutanix)
  • Big Data architectures use a similar architecture but have the magic sauce embedded in the way that the Hadoop cluster software itself works
  • Oracle's Engineered Systems further up the ante in that they start to add magic sauce to the hardware mix as well as the software smarts – hence engineered rather than simply integrated (other examples are available from Teradata, Microsoft and its appliance partners)
  • In particular, the entire rack needs to be thought of as a workload capability server:
    • If database capability is required then everything in that rack should be geared to that workload
    • In-platform capability (in the database engine itself) is used above general-purpose virtualization to drive hyper-tenancy
    • Private networking fabric (Infiniband in the case of Oracle Exadata and most high-end appliances)
    • Storage should be modular and intelligent, offloading not just storage block I/O but also being able to deal with part of the SQL Database Workload itself whilst providing the usual complement of thin/sparse-provisioning, deduplication and compression
    • The whole of database workload consolidation is many times the sum of parts in the rack
  • The datacenter becomes a grouping of these hyper-dense intelligent capability rack-scale servers
    • Intelligent provisioning is used to "throw" each workload type onto the best place for running it at scale and at the lowest overall cost, while still delivering world-class performance and security
    • Integrate into the overall information management architecture of the enterprise
    • Ensure that new paradigms related to Big Data analytics and the tsunami of information expected from the Internet of Things can be delivered in the rack-scale computer form, but with additional "smarts" to further increase the value delivered as well as provide agility to the business.

The enterprise datacenter can deliver value beyond a hyperscale datacenter by thinking about continuous consolidation in line with business needs, not just the IT needs that have to be delivered. Such platform and rack-scale capability architecture (RCA) has been proven to provide massive agility to organizations and indeed prepares them for new technologies, such that they can behave like "start-ups" with a fast-low-cost-to-fail mentality to power iterative innovation cycles.

Opportunities for the CIO

The CIO and senior team have a concrete opportunity here to steal a march on the Public Cloud vendors by providing hyper-efficient capability architectures for their business, re-thinking the datacenter rack through RCA paradigms.

Not only will this massively reduce footprint and costs in the existing premises, but it also focuses IT on how best to serve the business through augmentation with hybrid Cloud scenarios.

The industry has more or less spoken about the need for hybrid Cloud scenarios where a private on-premise cloud is augmented with public cloud capabilities. Further, today's announcement that the EU has effectively invalidated the "Safe Harbour" data treaty should put organizational IT on point about how to deal rapidly with such changes.

Industry thinking indicates that enterprise private datacenters will shrink, and the CIO team can already ensure they are "thinking" that way and taking concrete steps to realize compact ultra-dense datacenters.

A hyper-scale datacenter can't really move this quickly or be that agile, as its operating scale inhibits the nimble thinking that should be the hallmark of the CIO of the 2020s.

In the 2020s perhaps nano- and pico-datacenters may be of more interest to enterprises as a way of competing for business budgetary investment, as post-silicon graphene compute substrates running at 400GHz at room temperature become the new norm!

The Future of the Datacenter

The IT industry is characterized by loops of iterative innovation – hence such adages as "back to the mainframe again". However, recent reports regarding the potential future design of datacenters from Emerson Network Power (Data Center 2025: Exploring the Possibilities) really outline the state of the industry in relation to drivers from cloud technologies inside the business as well as externally from public cloud providers.

The report naturally highlights differing opinions; however, there are some rather radical thought paradigms in place that may well entail a whole new way of internal IT thinking on behalf of the CIO and infrastructure management:

  • 43% likelihood of Private Power Generation in Hyperscale Data Centers in 2025
  • Average power density of data center expected to reach 52kW/rack 
  • 67% of survey participants believe at least 60% of computing will be cloud-based by 2025
  • Growth of cloud will ultimately result in the decline of the 1-2MW enterprise data center 
  • 58% believe that data centers will be 50% smaller than current facilities 
  • 41% see a combination of air and liquid being used for thermal management 
  • Data centers will be highly automated – all devices auto-identify and are operated with minimal or no human intervention, with utilization rates of 60-80% expected
  • 50% expect server lifecycles to stay in the 3-6 year range (as today), with Asia Pacific projecting 7+ years


Figure 1: Overall Response from Participants

In short, some very serious implications for anyone that is running a private datacenter today. The relatively conservative pace of IT innovation within the datacenter is set to change based on new computational and storage needs with their implicit effect on networking and physical infrastructure.

In the same report, Andy Lawrence, vice president at 451 Research, summarizes nicely:

"The data center of 2025 certainly won't be one data center. The analogy I like to use is to transport. On the road, we see sports cars and family cars; we see buses and we see trucks. They have different kinds of engines, different types of seating and different characteristics in terms of energy consumptions and reliability. We are going to see something similar to that in the data center world. In fact that is already happening, and I expect it to continue."

Datacenter management needs to consider how such a metaphor can be instantiated within their own "bricks and mortar stores" to enable a bridge to the future right-sizing of datacenters taking place.

Opportunities for the CIO


Even within the confines of this report and its predictions, there are areas of innovation still to explore. The need for smaller, more ultra-dense datacenters provides the following advantages, letting enterprises utilize fast, nimble techniques that are simply not economical for a large public cloud provider.

  • Service quality excellence
    In a world where "public" public cloud downtime is announced within a fraction of a second over social networks by those relying on these constructs, service quality excellence and superior uptime remain the hallmark by which the business will measure the value of internal IT.


  • Vertical capability integration
    With much of the heavy network and storage traffic being East-West in a classic datacenter, vertical capability-focused integration (as opposed to simple infrastructure integration) will provide ultra-dense configurations handling superior numbers of concurrent workloads whilst maintaining service quality excellence. Typical workloads such as databases are able to take advantage of much faster networking (typically Infiniband at 40-56Gbps versus 10GbE in North-South directions) with local modular storage; this usually results in massive reductions in footprint, licensing and facility needs. Interlinkage with other capabilities (such as middleware or analytics) can be facilitated over the same high-speed fabric rather than inefficiently "hop-scotching" over the entire classic datacenter.


  • "Disposable" Private Cloud Datacenter Containers
    Ultra-dense capability computing will again be a workable idea using the state of the art in air/liquid cooling. These containers basically strip out much of the traditional cost of building out facilities and are simply more flexible in terms of location. Further, they can utilize low factor costs in geographies where a traditional datacenter can't be built without extreme cost being incurred. Full automation of the container allows a notion of "decentralized centralized IT" to still be put in place. The idea of a "disposable" container – literally swapping it in/out as newer technology merits – has also been bandied about as a way of extracting micro-location financial yields through asset sweating.


  • Urban Datacenter Grids & Locality
    Datacenter containers allow enterprises to physically locate "their" private clouds close to urban business concentrations allowing superior service quality, low latency and ability to differentiate service offerings at software and hardware level. Trends in creating datacenter container parks may well arise that allow a further sharing of common estate costs.


  • Intra-Inter Rack Form Factor Design
    Ultra-density imposes challenges on the form factors of traditional rack-and-stack configurations. Many initiatives are underway to further optimize within the rack itself using rack-level disaggregation combined with a rack-level component fabric. Intel's Rack Scale Architecture (RSA) is currently gaining the most media attention, but is by no means the first. Indeed Oracle Engineered Systems are architected in a similar manner, as are appliances from Teradata and others.


The CIO of today is constantly bombarded with messages from the business and the press about moving all to the "Cloud". However, the potential for enterprise IT innovation remains essentially quite strong.

Complete and realistic business case calculation of public cloud providers, including opex costs for storage, compute and, importantly, network, can be used to justify and finance an innovative cost-effective/cost-neutral enterprise datacenter strategy with much higher standards of security and service quality. The public cloud is providing a generational jump from IaaS in the form of PaaS and functional SaaS capabilities. However, these do not cater for all the security and uptime quality needs of all enterprises.

To differentiate and capitalize on existing talent and resources, the CIO may need to buck the trend in keeping a fleet of smaller enterprise datacenter containers that are used to extend the "commodity" functionality in the Public Cloud PaaS and SaaS to in/on premise IT as a means of innovating beyond the cycle of the Public Cloud industry.

After all – if everyone does the same thing – then we should expect a similar level of result albeit more efficient than today!

Expressions of Big Data: Big Data Infrastructure BUT Small is Beautiful !!

In an effort to streamline costs, increase efficiency, and basically get IT to focus on delivering real business value, a tremendous mind-shift is taking place. The battleground is the "build-your-own" approach versus pre-built, pre-tested, pre-integrated, built-for-purpose appliances using commodity hardware.

Big Data infrastructure is the next wave of advanced information infrastructure, focused on delivering competitive advantage and insight into pattern-based behaviours.

Virtually every hardware vendor in the market has an offering: Greenplum/EMC's (Pivotal) PivotalHD Hadoop distribution with VMware/Isilon underpinning infrastructure, HP's Haven and AppSystem ecosystem, IBM PureData, Teradata's Portfolio for Hadoop and indeed Oracle's Big Data Appliance / Exalytics.

Many share the Cloudera distribution (with Yahoo Hadoop founder Doug Cutting) or go directly to the roots with Apache Hadoop. Others are implementing with the MapR or Hortonworks (the spinoff of Yahoo's Hadoop team) distributions, which are highly performant.

Clearly credibility is important to customers, either directly by using Apache Hadoop or inherited through distributions such as Cloudera. Cluster management tools are critical differentiators when examining operations at scale - consequently these typically incur licensing costs.

Significant players such as Facebook and Yahoo are developing Hadoop further and feeding back into the core Apache Hadoop. This allows anyone using this distribution to take advantage.

Over the coming blogs I will take a quick peek at these Big Data Infrastructures, their approaches, key differentiators and integration with the larger information management architecture. The focus will be density, speed and smallest form factors for infrastructure.

Questioning "Accepted Wisdom"

Whilst it is nice to know that we can have clusters of thousands of nodes performing some mind-boggling Big Data processing, it is almost MORE interesting to know how this can be performed on much smaller infrastructures, at the same/better speed and simpler operational management.

With that in mind, we should be asking ourselves if Big Data can be processed in other ways to take advantage of some very important technology drivers:

  • Multi-core/Multi-GHz/Hyper-threaded processors
  • Multi-terabyte (TB) memory chips
  • Non-volatile RAM (PCIe SSD mainly today, but possibly Memristor or even carbon nanotube based NanoRAM (NRAM)) as a replacement for spinning disk storage and volatile RAM
  • Terabit Interconnects for system components or to external resources
  • In-Memory/In-Database Big Data capabilities to span information silos vs. recreating RDBMS systems again.

With systems today such as the recently released Oracle SPARC T5-8 already having 128 cores and 1024 threads at 3.6GHz in an 8RU form factor - the compute side of the Big Data equation seems to be shaping up nicely - 16 cores/RU or 128 threads/RU.
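
As a back-of-the-envelope check of those density figures, here is a minimal sketch; the T5-8 numbers come from the text above, and the five-systems-per-rack packing is my own assumption for illustration:

```python
# Density check for the SPARC T5-8 figures quoted above.
# The core/thread/RU numbers come from the text; systems_per_rack is an assumption.

cores = 128          # total cores in a T5-8 chassis
threads = 1024       # total hardware threads
rack_units = 8       # 8RU form factor

cores_per_ru = cores / rack_units      # 16 cores per rack unit
threads_per_ru = threads / rack_units  # 128 threads per rack unit

# Assumption: five such systems in a standard 42U rack (5 x 8RU = 40U, 2U left for switching).
systems_per_rack = 5
print(f"{cores_per_ru:.0f} cores/RU, {threads_per_ru:.0f} threads/RU")
print(f"~{systems_per_rack * cores} cores and {systems_per_rack * threads} threads per 42U rack")
```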

Still too small, as Hadoop clusters sport much greater processing power - at the cost of requiring more server nodes, of course.

With Infiniband running at 40Gbps (QDR) and faster, component interconnects are also shaping up nicely. Many vendors are now using Infiniband to really get that performance up compared to Ethernet or even fibre channel for storage elements. Indeed some are literally skipping SANs/NASs and just moving to server based storage.

Many database vendors are actively using adapters to send Big Data jobs to the infrastructure and pull results back into the database. It will be a matter of time before Big Data is sucked into the RDBMS itself, just as Java and Analytics have been.

However, the memory technology vector is the one that is absolutely critical. With the promise of non-volatile memory whose performance outstrips the fastest RAM chips out there, the very shape of infrastructure for Big Data becomes radically different!

Not only is small beautiful - but it is essential to continuous consolidation paradigms allowing much greater densification of data and processing thereof!

Why is this important for the CIO, CTO & CFO?

Big Data technologies are certainly revolutionizing the way decisions are being informed and providing lightning-fast insight into very dynamic landscapes.

Executive management should be aware that these technologies rely on scale-out paradigms. This would effectively reverse gains made through virtualization and workload optimized systems to reduce the datacenter estate footprint!

CIO/CFOs should be looking at investing in technology infrastructure minimizing the IT footprint, and yet still delivering revenue generating capabilities derived through innovative IT. In some cases this will be off-premise based clouds; in others competitive advantage will be derived from on-premise resources that are tightly integrated into the central information systems of the organization.

Action-Ideas such as:

  • Funding the change for smaller/denser Big Data Infrastructure will ensure server (physical or virtual) sprawl is avoided
  • Continuous consolidation paradigm funding by structuring business cases for faster ROI
  • Funding efficiency of operations. If Big Data can be done within a larger server with a single OS image and in-memory vs. 1000 OS images, this will be the option that makes operations simpler and more efficient. There may be a cost premium at the server and integration layers.
  • Advanced integration of the Big Data infrastructure and information management architectures to allow seamless capability across information silos (structured/unstructured)
  • Cloud capabilities for scaling preferably within the box/system using what Gartner calls fabric-based infrastructures should be the norm rather than building your own environment.

Continuous workload consolidation needs to take center stage again - think thousands of Big Data workloads per server, not just thousands of servers to run a single workload! Think In-Memory for workloads rather than in simple spinning-disk terms. And remember that commodity hardware/OS is not the only competitive advantage - it is merely the low-hanging fruit!

We'll take a closer look at Big Data infrastructure in the coming blogs, with a view to how the Cloud pertains while still ensuring deep efficiency from an information management infrastructure perspective using relevant KPIs.


The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by my current employer and does not necessarily reflect the views and opinions of my employer.

In Memory Computing (IMC) Cloud - so what about the Memory?

There's been a lot of talk in 2013 about in-memory computing (IMC), with Gartner indicating strategic significance in 2012. Very little has been said about the memory needed for IMC!

IMC is claimed to be "new", "radical", "never before done in the industry" etc. Much of this has come from SAP's HANA marketing, amongst others. The discussion is relevant to the whole IT industry and to all workloads, whether delivered locally or through IMC-enabled Clouds!

Larry Page proposed holding the Internet in memory at the Intel Developer Forum in 2000 and then moved forward with the idea - resulting in Google. He had only 2,400 computers in the Google datacenter then!

The industry has responded in kind - by stating that large amounts of memory have been available in platforms for nigh on a decade. Indeed, the latest Oracle Exadata X3-8 Engineered System has 4TB of RAM and 22TB of PCIe Flash - non-volatile RAM.


So IMC is not new in the sense SAP and others would have you believe. It is a natural evolution of economies of scale bringing price/GB down accompanied by technological speed & capacity innovations.

A purist approach based on DRAM alone (nanosecond access) carries a vast cost difference from NAND Flash (microseconds - 1,000x slower) and spinning disk (milliseconds) technologies today - whilst also being volatile: data is gone on power cycling! Economically speaking, a hybrid approach has to be taken as the road to the IMC-Cloud!
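
To put that latency gap in perspective, here is a minimal sketch using order-of-magnitude access times consistent with the figures above; these are illustrative approximations, not vendor benchmarks:

```python
# Order-of-magnitude access latencies for the memory/storage hierarchy discussed above.
# Figures are illustrative approximations, not measured vendor benchmarks.

latencies_ns = {
    "DRAM":          100,          # ~nanoseconds
    "NAND flash":    100_000,      # ~microseconds (roughly 1,000x slower than DRAM)
    "Spinning disk": 10_000_000,   # ~milliseconds
}

dram_ns = latencies_ns["DRAM"]
for medium, ns in latencies_ns.items():
    print(f"{medium:13s}: {ns:>12,} ns  ({ns / dram_ns:>9,.0f}x DRAM)")
```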

Amongst the characteristics facilitating wholesale transformation to full IMC (hardware, software, application architectures) are:

  1. Performance - nanosecond to microseconds as DRAM/Flash currently
  2. Capacity - Terabytes initially and then Petabytes as Flash/Disk currently
  3. Volatility - non-volatile on power cycling much in the same way as Flash/Disk today
  4. Locality - as close as possible to the CPU, but needs to be manageable at cloud scale!

Individually, each of these characteristics has been achieved. Combined, they are technically challenging. Many promising technologies are evolving that aim to solve this quandary and change the face of computing as we know it forever. Basically: massive, low-power, non-volatile RAM!

Advances are being made in all areas:

  • HMC - Hybrid Memory Cube, using stackable DRAM chips resulting in 320GB/s throughput (vs. DDR3 maxing out at 24GB/s) in 90% less space with 70% less energy. Still volatile though! (A quick comparison of these throughput figures follows this list.)
  • Phase-change memory (PCM/PRAM) producing non-volatile RAM. Micron already has this shipping in 1GB chips (in 2012). This does not require erase-before-rewriting cycles like Flash so potentially much faster. Current speeds are 400MB/s. 
  • HDIMM - Hybrid DIMMs - combine high-speed DRAM with non-volatile NAND storage (Flash). Micron (with DDR4) and Viking Technology (DDR3 NVDIMM) have these technologies with latencies of 25 nanoseconds.
  • NRAM - Carbon nanotube based non-volatile RAM. Nantero and Belgium's IMEC are working jointly to create this alternative to DRAM and scaling below 18nm sizes. Stackable like HMC and all non-volatile.
  • Graphene (single layer of carbon atoms) based non-volatile RAM such as efforts in 2013 in Lausanne, CH.
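
As a quick side-by-side of the throughput numbers quoted in the list above (a sketch; the figures come straight from the bullet points, only the ratios are computed):

```python
# Relative throughput of the memory technologies quoted above.
# The GB/s figures come from the bullet points; this only computes the ratios.

ddr3_gbps = 24    # DDR3 peak quoted above
hmc_gbps = 320    # Hybrid Memory Cube throughput quoted above
pcm_gbps = 0.4    # current phase-change memory speed quoted above (400 MB/s)

print(f"HMC vs DDR3: {hmc_gbps / ddr3_gbps:.1f}x")   # ~13.3x
print(f"PCM vs DDR3: {pcm_gbps / ddr3_gbps:.3f}x")   # still far behind on raw bandwidth
```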

Which of these will be driving future architectures remains to be seen in the sensitive price-capacity-performance markets. Post-silicon era options based on carbon/graphene nanotube technology would of course power the next wave of compute as well as memory structures.

Questioning "Accepted Wisdom"

So In-Memory Computing is coming - indeed it has been for the last decade or so. Are datacenter infrastructure and application architectures keeping pace, or at least preparing, for this "age of abundance" where non-volatile RAM is concerned?

In discussions with folks from IT and industry colleagues there is a clear focus on procurement at low price points, with IT simply saying "everything is commodity"! This is like saying two cars of identical make/model/engine with different chip management software are the same - one clearly performs better than the other! The software magic in these hardware and software stacks makes them anything but commodity.

Many IT shops still think a centralized array of storage is the only way to go. They basically change media within the array to change storage characteristics - 7.2/10/15K RPM spinning disks to SSD drives. That is where their thinking essentially stops!

This short-term thinking will effectively result in the next wave of infrastructure and application sprawl OR in revolution through IMC-Cloud enabled vectors turning IT on its collective head.

Such an approach would simply be too slow as a model for IMC Clouds. There are some clear trends emerging indicating how CIO/CTOs and CFOs can prepare for IMC based datacenters of the future to drastically increase capability while changing the procurement equations:

  • Modular storage containers located close to processor & RAM, driving the move away from islands of central, massive SAN infrastructures.
  • Internetworking needs to be way faster to leverage IMC capabilities. Think 2013 for 40Gbps Ethernet/Infiniband now. Think 2016 for PCIe4 at 512Gb/s (x16 lane duplex) - a quick bandwidth check follows this list. That speed is needed at least at the intersection points of compute/RAM/storage!
  • Engineered (hardware and software optimized) entire platforms. It is simply not worth focusing all IT effort on individual best-of-breed components when "the whole needs to be greater than the sum of parts!".
  • Backup architectures need to keep up. Tape remains a cost-effective media for inactive/data-backup data sets, particularly when in open source Linear Tape File System (LTFS) format. A great blog on that from Oracle-StorageTek's Rick Ramsay.
  • Application architectures need to move away from bottleneck-resolution thinking! Most developers don't know what to do with terabytes of RAM! Applications need to use massively parallel patterns where possible. Developers need to deliver data in real-time!
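
As a sanity check on the PCIe 4.0 figure quoted above, here is a minimal sketch of the raw-bandwidth arithmetic; the 16 GT/s signalling rate and 128b/130b encoding are standard PCIe 4.0 parameters, and protocol overheads are ignored:

```python
# Rough check of the "PCIe4 at 512Gb/s (x16 lane duplex)" figure quoted above.
# PCIe 4.0 signals at 16 GT/s per lane and uses 128b/130b line encoding.

gbps_per_lane = 16              # raw signalling rate per lane, per direction
lanes = 16
encoding_efficiency = 128 / 130 # 128b/130b line coding

raw_duplex_gbps = gbps_per_lane * lanes * 2                       # 512 Gb/s raw, both directions
usable_per_dir_gbps = gbps_per_lane * lanes * encoding_efficiency # ~252 Gb/s usable, one direction

print(f"raw duplex:           {raw_duplex_gbps} Gb/s")
print(f"usable per direction: ~{usable_per_dir_gbps:.0f} Gb/s (~{usable_per_dir_gbps / 8:.1f} GB/s)")
```
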
2013-2016 will see the strong rise of non-volatile memory technology and architectures. CIO/CTOs should be thinking about how they will leverage these capabilities. IT philosophers need to discuss and map out the implications before handing over to Enterprise Architects to enact.

Why is this important for the CIO, CTO & CFO?

Simple server consolidation has had its day! Most IT shops have used server virtualization in one form or another. The early fast returns are almost exhausted! Continuous workload consolidation needs to take center stage again - think thousands of workloads per server, not 20-40!

Private IMC-Clouds provide an ability for CIOs to keep in-house IT relevant to the business.

CTOs should be thinking about how IMC-Clouds can power the next wave of innovative applications-services-products in an increasingly interconnected always-on manner. Scaling, performance, resiliency to failure should be designed into application platforms - NOT applications themselves. Fast moving application development can then proceed without recreating these features in every app.

For the CFO, IMC-enabled Private Clouds represent a dramatic lowering of all costs associated with IT to the business. Consolidating massive chunks of datacenter infrastructure, decommissioning datacenters and simplifying the demands on CFO resources for more performance/capacity will allow CFOs to free trapped financial value that can be used directly by the business. Tech-refresh cycles may need to be shortened to bring this vision to fruition earlier!

IMC-enabled Clouds, combined with Intelligent Storage, will allow fundamental transformations to take place at a pace exceeding even that of hyperscale Cloud providers such as Amazon. Business IT can choose to transform.


The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by my current employer and does not necessarily reflect the views and opinions of my employer.

Storage Intelligence - about time!

I was recently reading an article about Backblaze releasing its storage designs. This is a 180TB NAS device in 4U! Absolutely huge! A 42U rack would be able to hold around 1.8 Petabytes.


When thinking about Petabytes, one thinks about the big players in storage, EMC/NetApp/HDS, selling tens of storage racks covering substantial parts of the datacenter floor space and offering a fraction of this capability.


Clearly, the storage profile offered by the large monolithic enterprise arrays is different. However, Backblaze highlights the ability to get conventional "dumb" storage easily and at low cost! Packing some flash cache or SSD in front would already bring these boxes to a similar I/O capacity ;-)

This makes the case that storage per se is not really a challenge anymore. However, making storage aid in the overall performance equation; making sure that storage helps in specific workload acceleration is going to be critical going forward. Basically Intelligent Storage!

Questioning "Accepted Wisdom"

Many IT shops still think of storage as a separate part of their estate. It should simply store data and provide it back rapidly when asked - politely. The continued stifling of innovation in datacenters, due to having a single answer for all questions - namely VMware/hypervisors and server virtualization - tends to stop any innovative thinking that might actually help an organisation accelerate those parts of the application landscape that generate revenue.

Some questions that came to mind and also echoed by clients are:

  • Disk is cheap now. SSD answers my performance needs for storage access. Is there something that together with software actually increases the efficiency of how I do things in the business?

  • For whole classes of typical applications, structured data persistence pools, web servers etc what would "intelligent" storage do for the physical estate and the business consumers of this resource?

  • How can enterprise architecture concepts be overlaid to intelligent storage? What will this mean to how future change programmes or business initiatives are structured and architected?

  • How can current concepts of intelligent storage be used in the current datacenter landscape?

We are seeing the first impact of this type of thinking in the structured data / database world. By combining the database workload with storage and through software enablement we get  intelligent acceleration of store/retrieval operations. This is very akin to having map-reduce concepts within the relational database world.

Further, combining storage processing with CPU/RAM/networking offload of workload-specific storage requests facilitates unprecedented scale-out, performance and data compression capabilities.
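
To make the map-reduce analogy concrete, here is a minimal conceptual sketch (plain Python with hypothetical data and function names, not any vendor's actual interface) of how filtering and partial aggregation can be pushed down to intelligent storage nodes, so that only a reduced result set travels back to the database tier:

```python
# Conceptual sketch of "intelligent storage" offload: each storage node filters and
# partially aggregates its own slice of the data (the "map" step), and the database
# tier only merges the partial results (the "reduce" step). Hypothetical data and
# function names; this is not any vendor's actual interface.

from collections import Counter

def storage_node_scan(rows, predicate, group_key):
    """Runs on the storage node: filter rows locally and return partial aggregates."""
    partial = Counter()
    for row in rows:
        if predicate(row):
            partial[row[group_key]] += 1
    return partial  # far smaller than the raw block I/O it replaces

def database_merge(partials):
    """Runs on the database tier: merge the partial aggregates from all storage nodes."""
    total = Counter()
    for p in partials:
        total.update(p)
    return total

# Hypothetical order rows spread across three storage nodes.
node_data = [
    [{"region": "EMEA", "amount": 120}, {"region": "APAC", "amount": 80}],
    [{"region": "EMEA", "amount": 300}, {"region": "AMER", "amount": 50}],
    [{"region": "APAC", "amount": 700}],
]

partials = [storage_node_scan(rows, lambda r: r["amount"] > 100, "region") for rows in node_data]
print(database_merge(partials))  # Counter({'EMEA': 2, 'APAC': 1})
```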

Oracle's Engineered Systems, the Exadata Database Machine in particular, represent this intelligent storage concept, amongst other innovations, for accelerating the Oracle database workload.

These workload specific constructs foster security of precious data assets, physically and logically. This is increasingly important when one considers that organisations are using shared "dumb" storage for virtual machines, general data assets and application working data sets.

In the general marketplace other vendors (IBM PureSystems + DB2, Teradata, SAP HANA etc.) are starting to use variations of these technologies for intelligent storage. The level of maturity varies dramatically, with Oracle having a substantial time advantage as first mover.

2013-2015 will see more workload focused solutions materializing, replacing substantial swathes of datacenter assets built using the traditional storage view.

Why is this important for the CIO, CTO & CFO?

Intelligent workload-focused storage solutions are allowing CIO/CTOs to do things that were not easily implemented within solutions based on server virtualization technology using shared monolithic storage arrays - dumb storage - such as in the VMware enabled VCE Vblock and HP CloudSystem Matrix - which are effectively only IaaS solutions.

Workload specific storage solutions are allowing much greater consolidation ratios. Forget the 20-30-40 Virtual Machines per physical server. Think 100s of workloads per intelligent construct! An improvement of 100s of percent over the current situation!
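
As a rough illustration of that claim (a sketch using the figures above; 200 is an assumed stand-in for "100s of workloads"):

```python
# Illustrative consolidation comparison using the figures quoted above.
# 200 is an assumed stand-in for "100s of workloads per intelligent construct".

vms_per_server_today = 30        # middle of the 20-40 VMs quoted above
workloads_per_construct = 200    # assumed value

improvement_pct = (workloads_per_construct - vms_per_server_today) / vms_per_server_today * 100
print(f"~{improvement_pct:.0f}% improvement over {vms_per_server_today} VMs per server")  # ~567%
```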

It is important to verify how intelligent storage solutions can be a part of the CIO/CTO's product mix to support the business aspirations as well as simplify the IT landscape. Financing options are also vastly simplified with a direct link between business performance and physical asset procurement/leasing:

  • Intelligent storage removes architectural storage bottlenecks and really shares the compute/IO/networking load more fully.

  • Intelligent storage ensures those workloads supporting the business revenue generating activities are accelerated. Acceleration is linked to the cost of underlying storage assets. As cost of NAND flash, SSDs and rotating disks drop, more is automatically brought into the storage mix to reduce overall costs without disrupting the IT landscape.

  • Greater volumes of historic data are accessible thanks to the huge level of context sensitive, workload-specific data compression technologies. Big data analytics can be powered from here, as well as enterprise datawarehouse needs. This goes beyond simple static storage tiering and deduplication technologies that are unaware of WHAT they are storing!
  • Workload-specific stacking supports much higher levels of consolidation than simple server virtualization. The positive side effects of technologies such as Exadata include the rationalization of datacenter workload estates in terms of variety; operating systems can be rationalized, leaving a net-net healthier estate. This means big savings for the CFO!

Intelligent storage within vertically engineered, workload-specific constructs - what Gartner calls Fabric Based Infrastructure - presents a more cogent vision of optimizing the organization's IT capability. It provides a higher level of understanding of how precious funding from CFOs is invested in those programmes necessary for the welfare of the concern.

CIO/CTOs still talking about x86 and server virtualization as the means to tackle every Business IT challenge would be well advised to keep an eye on this development.

Intelligent storage will be a fundamental part of the IT landscape allowing effective competition with hyperscale Cloud Providers such as Google/Amazon and curtailing the funding leakage from the business to external providers.


The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by my current employer and does not necessarily reflect the views and opinions of my employer.

Desktop of the Future? What Desktop? - Shake up of the Digital Workspace!

I have been saying for some time that CIO/CTOs should be taking a serious look at the "desktop" of the future: what types of applications are needed, where they are sourced from, the form factor, and the impact on workplace design.

Tablets are already firmly entrenched in consumer hands, and indeed the hands of professionals. However, they fail to make a dent against the estate of desktop/laptop devices and the associated workplace implications.

Google Glass perhaps represents the next evolution of the desktop->laptop->netbook->tablet saga. This not only challenges the existing notions of mobility, but adds to the rich experience that you and I are demanding.


This is not necessarily a new idea. We have been seeing this in sci-fi movies for decades. Indeed, Sony also had a similar ambition around a decade ago.

However, this design, with modern materials and the massive backing of Google may well succeed en masse compared to previous designs.


It has the ability to link into Cloud based backends and stream applications magically/wirelessly, and will surely come with input devices of corresponding smartness - maybe something tracking eye movements, as we see for disabled individuals.

The really interesting feature involved is augmented reality for everyone. Google has been investing heavily in technology for mapping the real world into the virtual in its 3D Google Maps. This investment is really going to pay off by providing a compelling lead.

Augmented reality

The practical application of information gathered all around us, and mapped onto our line of vision will increase our ability to navigate an increasingly complex world.

Such a concept could easily be translated to our business or indeed consumer worlds. Why can a stock trader not look around, suck in relevant information about world events, annotate verbally on the impact on stock price trends, and then have an automated trading engine supporting the actual trading desk?

A consumer could walk into a shopping center and automatically be presented with details of competitive offers in the area/online, trace the production chain to check an article's green credentials, and indeed order online from the same shop they are currently in - reducing the need to keep large quantities of stock at hand. And the effect on advertising.....well!

An office or knowledge worker would literally be able to grab data, manipulate it, enter data and essentially carry out any other task without the need for the traditional stuffy workplace environments we are surrounded by. You never know, perhaps we could end up with workplaces that look like....(film: Matrix Trilogy - Zion control room)

Questioning "Accepted Wisdom"

Traditional ideas of the workspace environment being provided for employees and partners should seriously be challenged. It is not necessarily about whether Windows 8 or Ubuntu will be rolled out. It is very much about providing a safe, secure working environment, protecting the data assets of an organization and increasing efficiency dramatically.

  • Challenge the idea of a desk! Find out what types of work environments are conducive for your staff, ranging from highly creative to highly task oriented individuals.

  • Understand how the walls of the organization can be made safely permeable. How can the technology enable an individual to be out there - physically, socially, online etc? What if voice recognition and real-time language translation capabilities are added?

  • What application classes are being used? Where are they being consumed, in which form, and are they sourced on-premise or from a SaaS/Public Cloud provider?

  • Understand the impact on the physical workspace, its constituent parts, and how work and play can be mixed. Cost control and the creation of creative workplaces can benefit dramatically from this mindshift!

The way the physical world and information are being represented as digital assets that can be manipulated without regard to time and space constraints represents a dramatic opportunity for organizations willing to challenge the current accepted wisdom.

Why is this important for the CIO, CTO & CFO?

CIO/CTOs are responsible for the overall digital enterprise architecture and workplace environment. The CFO has the fun work of ensuring sufficient funding whilst maintaining cash flow and a generally healthy financial posture. 

Rather than simply accepting what your IT staff are telling you regarding options for workspace computing based on a legacy view of software and physical computing devices, challenge what could be done by leaping forward.

  • How would this improve the financial situation regarding licensing, physical desktop estate, building space, furniture and employee productivity?

  • Understand what is needed from a human capital management perspective. Legislation and health may well be key drivers or blockers. What would such a workplace mean for talent management and making your organisation a 1st class address for potential graduates/employees?

  • From an enterprise architecture perspective (a middle up-down architectural approach), what would be needed to service such a workplace environment, and what are the implications for business stakeholder aspirations as they are currently expressed?

  • Security, physical and digital, needs to be more pervasive but also more invisible. Data assets may need to be stored in more secure ways in corporate datacenters. Databases are fine, but database firewalls, controlling who is doing what and why, would be required. Ditto for other areas powering the enterprise.
  • Consolidating database silos using workload stacking and virtualization makes sense. However, combining them into general compute environments together with other general data assets is probably not secure enough. A level of physical security should be ensured.
  • Understand if current investment in virtual desktop infrastructure (VDI) makes sense. Does it capture new digital realities? Understand new demands on mobility. Are you investing in something that literally will disappear?

Many organisations are experimenting at scale with full mobile (as we understand it today - usually tablets, smart phones, mobile video conferencing) technology.

They are experiencing the implications for the cost of providing IT capability, the impact on how employees are exploiting the technology, and the overall performance of the organisation. Subtle areas such as training, motivation and the social contact implications of working away from a "traditional office environment" need to be factored in.

This is a great chance to innovate and basically get the organization in top form for globalisation, where labour factor costs are continually driving employment further offshore (until it comes full circle - the earth is indeed round and not flat).


The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by my current employer and does not necessarily reflect the views and opinions of my employer.

The Shape of Things to Come!

A lot of what I do involves talking with thought leaders from organizations keen to transform how they do business. In some cases, they espouse thoughts moving along general industry lines or marketing. However, in some cases, there is real innovative thought taking place. I believe firmly innovation starts with questioning the current status quo.

We are bombarded with Intel x86 as the ultimate commodity processor offering everything one could possibly imagine on the one side, and the public cloud as the doom of in-house IT centers on the other. It is incumbent on all in this industry to think beyond even the cloud as we know it today.

Questioning "Datacenter Wisdom"

This blog entry is entitled "The Shape of Things to Come" with a clear series of ideas in mind:

  • System-on-a-Chip (SOC) are getting very powerful indeed. At what point are these so powerful that they represent the same order of magnitude as an entire hyperscale datacenter from Google or Amazon with a million machines inside?

  • Why does in-house IT have to move out to the cloud? Why could hyperscale clouds not be built up from capacity that organizations are already putting in place? This would be akin to the electricity grid as the transport for capacity created from multiple providers. Borrowing capacity could be done in-industry or across-industries.

  • Why is there disaggregation of all components at a physical datacenter level (CPU, RAM, storage, networking etc) rather than having assembly lines with appliances/constructs hyper-efficient at a particular task within the enterprise portfolios of services and applications?

  • Why are servers still in the same form factor of compute, memory, networking and power supply? Indeed why are racks still square and datacenter space management almost a 2-dimensional activity? When we have too many people living in a limited space we tend to build upwards, with lifts and stairs to transport people. Why not the same for the datacenter?

I'm not the only one asking these questions. Indeed, across the industry the next wave of physical manifestations of these new concepts is taking place, albeit slowly. I wanted to share some industry insight as examples to whet the appetite.

  • At Cornell University a great whitepaper on cylindrical racks using 60GHz wireless transceivers for interconnects within the rack shows a massively efficient model for ultrascale computing.

  • Potentially the server container would be based on a wheel with servers as cake slice wedges plugged into the central tube core. Wheels would be stacked vertically. Although they suggest wireless connectivity, there is no reason why the central core of the tube could not carry power, networking and indeed coolant. Indeed the entire tube could be made to move upwards and downwards - think tubes kept in fridge like housings (like in the film Minority Report!)

  • One client suggested that CPUs should be placed into ultracooled trays that can use the material of the racks as conductors and transport to other trays full of RAM. We do this with hard disks using enclosures. Indeed Intel does 3D chip stacking already!
    • Taking the Intel 22nm Xeons with 10 cores or indeed Oracle's own SPARC T5 at 28nm and 16 cores as building blocks
    • A 2U CPU tray would allow say 200 such processor packages. This is an enormous capability! For the SPARC T5 this would be 3200 cores, 25600 threads and around 11.5THz of aggregate clock speed (the arithmetic is sketched after this list)!
    • Effectively, you could provide capacity on the side to Google!
    • A RAM tray would basically allow you to provide 20TB+ depending on how it is implemented (based on current PCIe based SSD cards).
  • Fit-for-purpose components for particular workloads as general assembly lines within an organization would fit in well with the mass-scale concepts that the industrial and indeed digital revolutions promoted.
    • If we know that we will be persisting structured data within some form of relational database, then why not use the best construct for that. Oracle's Engineered Systems paved the way forward for this construct.
    • Others are following with their own engineered stacks.
    • The tuning of all components and the software to do a specific task that will be used for years to come is the key point!
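
A quick sketch of the hypothetical CPU-tray arithmetic from the list above; the 200-package tray is the thought experiment's assumption, while the per-chip figures are the SPARC T5 numbers quoted in the text:

```python
# Arithmetic behind the hypothetical 2U CPU tray described above.
# 200 packages per tray is the thought experiment's assumption; the per-chip figures
# (16 cores, 8 threads/core, 3.6 GHz) are the SPARC T5 numbers quoted in the text.

packages_per_tray = 200
cores_per_chip = 16
threads_per_core = 8
clock_ghz = 3.6

cores = packages_per_tray * cores_per_chip      # 3,200 cores
threads = cores * threads_per_core              # 25,600 threads
aggregate_thz = cores * clock_ghz / 1000        # ~11.5 THz of aggregate clock

print(f"{cores:,} cores, {threads:,} threads, ~{aggregate_thz:.1f} THz aggregate clock per 2U tray")
```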

So the technical components in this radical shake-up of the datacenter are materializing. We haven't even started to talk about some of the work happening in material science, providing unparalleled changes in CPUs (up to 300GHz at room temperature) or even non-volatile RAM totally replacing spinning disk and possibly SSD and DRAM.

Why is this important for the CIO, CTO & CFO?

Customers typically ask whether they should move everything out to cloud providers such as Google/Amazon or private cloud hosters such as CSC/ATOS/T-Systems. Well looking at the nexus of technological change that is almost upon us, I would say that at some level it might make sense to evaluate the mix of on-premise and off-premise resource.

The Cloud is effectively a delivery model - some applications such as email clearly can be in the public cloud - bearing in mind privacy issues. However the capabilities needed for an organization to thrive as expressed in Enterprise Architecture in order to exploit market forces can be expressed in other ways.

  • Server virtualization relies on workloads not taking all the resources of a physical server. You should be questioning why the software, the most expensive component, is not being used to its maximum. Solving server acquisition costs does not reduce costs for you in a meaningful way.

  • Entertain the idea that with acceleration at every level of the stack, information requests may be serviced in near-realtime! The business should be asking what it would do with that capability? What would you do differently?

  • Datacenter infrastructure may change radically. It may well be that the entire datacenter is replaced by a tube stacked vertically that can do the job of the current football field sized datacenter. How can you exploit assembly line strategies that will already start to radically reduce the physical datacenter estate? Oracle's Engineered Systems are one approach for this for certain workloads, replacing huge swathes of racks, storage arrays and network switches of equipment.

  • Verify if notions of desktops are still valid. If everything is accessible with web based technologies, including interactive applications such as Microsoft Office, then why not ensure that virtual desktops are proactively made obsolete, and simply provide viewing/input devices to those interactive web pages.

  • Middleware may well represent a vastly unexplored ecosystem for reducing physical datacenter footprints and drastically reducing costs.
    • Networking at 100+Gbps already enables bringing your applications/web powered effective desktops with interaction to the users' viewing devices wherever they are.
    • Use intra-application constructs to insulate from the technical capability below. Java applications have this feature built-in, being cross platform by nature. This is a more relevant level of virtualization than the entire physical server.

  • Security should be enabled at all layers, and not rely on some magic from switch vendors in the form of firewalls. It should be in the middleware platforms to support application encapsulation techniques, as well as within pools of data persistence (databases, filesystems etc).

Enterprise architecture is fueling a new examination of how business defines the IT capabilities it needs to thrive and power growth. This architecture is showing the greater reliance on data integration technologies, speed to market and indeed the need to persist greater volumes of data for longer periods of time.

It may well be incumbent on the CIO/CTO/CFO to pave the way for this brave new world! They need to be ensuring already that people understand that what is impossible now, technically or financially, will sort itself out. The business needs to be challenged on what it would do in a world without frontiers or computational/storage limitations.

If millions of users can be serviced per square round meter of datacenter space using a cylindrical server tube wedge/slice - why not do it? This is not the time for fanatics within the datacenter that are railroading discussions to what they are currently using - or to provide the universal answer "server virtualization from VMware is the answer, and what is the question?".

Brave thinking is required. Be prepared to know what to do when the power is in your hands. The competitive challenges of our time require drastic changes. Witness what is happening in the financial services world with traders being replaced by automated programs. This requires serious resources. Changes in technology will allow this to be performed effortlessly, with the entire stock market data kept in memory and a billion risk simulations run per second!


The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by my current employer and does not necessarily reflect the views and opinions of my employer.

A Resurgent SPARC platform for Enterprise Cloud Workloads (Part 2) - SPARC T5

Some time ago, I blogged about the resurgence of the SPARC platform. The then newly designed SPARC T4 was showing tremendous promise in its own right, able to take up its former mantle of innovation leader running extreme workloads with the Solaris 11 operating system.

Indeed, it was used as the driving engine of the SPARC Supercluster, providing not just massive acceleration of Oracle database workloads using the Exadata Storage Cell technology, but also the ability to combine firmware-embedded, near-zero-overhead virtualization concepts: electrically separate logical domains carving up the physical hardware, and Solaris Zones, which allow near-native "virtual machines" sharing an installed Solaris operating system.

Up to 128 virtual machines (zones) are supported on a system - a vast improvement over the 20-30 one typically gets under VMware-like hypervisors!

This welcome addition to the wider Oracle engineered systems family allowed the missing parts of the datacenter to be consolidated - these being typically glossed over or totally skipped when virtualization with VMware-like hypervisors was discussed. Customers were aware that their mission critical workloads were not always able to run with an x86 platform which was then further reduced in performance using a hypervisor to support large data set manipulation.

Well, the rumor mills have started in the run-up to Oracle OpenWorld 2012 at the end of September. One of the interesting areas is the "possible" announcement of the SPARC T5 processor. This is interesting in its own right, as we have steadily seen the SPARC T4 and now the T5 having ever greater embedded capability in silicon to drive database consolidation and indeed the entire WebLogic middleware stack, together with high-end vertical applications such as SAP, E-Business Suite, Siebel CRM and so on.

Speculating on the "rumors" and the public Oracle SPARC roadmap, I'd like to indicate where I see this new chip making inroads in those extreme cloud workload environments whilst maintaining the paradigm of continuous consolidation. This paradigm, which I outlined in a blog in 2010, is still very relevant - the SPARC T5 providing alternative avenues to simply following the crowd on x86.

Questioning "Datacenter Wisdom"

The new SPARC T5 will have, according to the roadmap the following features and technologies included:

  • Increasing System-on-a-Chip (SOC) orientation providing ever more enhanced silicon accelerators for offloading tasks that software typically struggles with at cloud scale. This combines cores, memory controllers, I/O ports, accelerators and network interface controllers providing a very utilitarian design.
  • 16 cores, up from the T4's 8 cores. This takes it right up to the top end in core terms.
  • 8 threads per core - giving 128 threads of execution per processor providing exceptional performance for threaded applications such as with Java and indeed the entire SOA environment
  • Core speeds of 3.6GHz providing exceptional single threaded performance as well as the intelligence to detect thread workloads dynamically (think chip level thread workload elasticity)
  • Move to 28nm from 40nm - continuous consolidation paradigm being applied at silicon level
  • Crossbar bandwidth of 1TB/s (twice that of the T4) providing exceptional straight line scaling for applications as well as supporting the glueless NUMA design of the T5
  • Move to PCIe Generation 3 and 1TB/s memory bandwidth using 1GHz DDR3 memory chips will start to provide the means of creating very large memory server configurations (think double-digit TB of RAM for all-in-memory workload processing)
  • QDR (40Gbps) Infiniband private networking
  • 10GbE Public networking
  • Database workload stacking becomes even more capable and effective than simple hypervisor based virtualization for datacenter estate consolidation at multiple levels (storage, server, network and licensed core levels)

This in itself at the processor level is really impressive, but the features on the roadmap aligned to the T5 are possibly the real crown jewels:

  • on-die crypto accelerators for encryption (RSA, DH, DSA, ECC, AES, DES, 3DES, Camellia, Kasumi) providing excellent performance through offloading. This is particularly relevant in multi-tenant Cloud based environments
  • on-die message digest and hashing accelerators (CRC32c, MD5, SHA-1, SHA-224, SHA-256, SHA-384, SHA-512) providing excellent security offloading. Again particularly relevant in multi-tenant environments
  • on-die accelerator for random number generation
  • PCIe Generation 3 opens the door to even faster Infiniband networking (56Gbps instead of the current 40Gbps - with active-active links being possible to drive at wire speed)
  • Hardware based compression which will seriously reduce the storage footprint of databases. This will provide further consolidation and optimization of database information architectures.
  • Columnar database acceleration and Oracle Number acceleration, providing extremely fast access to structured information. Combined with in-memory structures, the database will literally be roaring!
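
A minimal sketch of the kind of bulk hashing work these on-die accelerators target. On Solaris/SPARC such operations are normally offloaded through the platform crypto framework underneath standard libraries; whether a given Python/OpenSSL build actually picks up the hardware path depends on how it was compiled, so treat this purely as an illustration of the workload shape:

    import hashlib
    import os
    import time

    payload = os.urandom(64 * 1024 * 1024)   # 64 MB of random "tenant" data

    start = time.perf_counter()
    digest = hashlib.sha256(payload).hexdigest()
    elapsed = time.perf_counter() - start

    print(f"SHA-256 digest: {digest[:16]}...")
    print(f"Hashed {len(payload) / 1e6:.0f} MB in {elapsed:.3f} s "
          f"({len(payload) / 1e6 / elapsed:.0f} MB/s)")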

Indeed, when we consider that the Exadata storage cells will also be enhanced with new chip generations, greater flash density and other optimizations, the next SPARC SuperCluster (which embeds those Exadata storage cells) will be one of the best-performing database platforms on the planet!

To ignore the new SPARC T5 (whenever it arrives) is to really miss a trick. The embedded technology provides true, sticky competitive advantage to anyone running database workloads or indeed multi-threaded applications. As a Java, middleware and SOA platform, as well as a vertical application platform, it is an innovation from which the enterprise can seriously benefit.

Why is this important for the CIO & CFO?

CIOs and CFOs are constantly being bombarded with messages from IT that x86 is the only way to go, that Linux is the only way to go, that VMware is the only way to go. As most CFOs will have noted by now:

  • Financially speaking, x86 servers may be cheaper per unit, but the number of units needed to get the job done is so large that any financial advantage that might have been there has evaporated (see the illustrative arithmetic after this list)!
  • Overall end-to-end costs for the services the CIO/CFO signed off on are never really well calculated for the current environment.
  • Investment needs to be focused on activities that support revenue streams, and on technologies that will continue to do so for at least the next decade (with capacity upgrades, of course).
  • There must be other ways of doing things that make life easier and more predictable
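
To make the first observation concrete, here is a deliberately simplified cost sketch. Every figure is a hypothetical placeholder, not vendor pricing - the point is only that unit price times unit count is what matters:

    # Illustrative-only arithmetic behind the "cheaper per unit" observation.
    X86_UNIT_COST = 6_000              # hypothetical cost per commodity x86 server
    X86_UNITS_NEEDED = 40              # hypothetical count to carry the workload

    CONSOLIDATED_UNIT_COST = 150_000   # hypothetical cost of one consolidated system
    CONSOLIDATED_UNITS_NEEDED = 1

    x86_total = X86_UNIT_COST * X86_UNITS_NEEDED
    consolidated_total = CONSOLIDATED_UNIT_COST * CONSOLIDATED_UNITS_NEEDED

    print(f"x86 scale-out total:  {x86_total:>10,}")
    print(f"Consolidated total:   {consolidated_total:>10,}")
    # The per-unit "saving" shrinks further once power, space, licensing and
    # operations - which scale with unit count - are added on top.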

Well, Engineered Systems with the new SPARC T5 represent a way for the CIO/CFO to power the projects that need investment and that in turn drive revenue and value. The ability to literally roll in a SPARC SuperCluster or any other Engineered System is going to be instrumental in:

  • Shortening project cycles at the infrastructure level
    • Don't lose 6 months on a critical ERP/CRM/custom application project to provisioning hardware, unexpected billing for general infrastructure layers (such as networking) that have nothing to do with the project, IT trying to tune and assemble components, or getting stuck in multi-vendor contract and support negotiations.
    • That time can be literally worth millions - why lose that value?
  • Concentrating valuable and scarce investment literally down to the last square meter in the datacenter!
    • If the next project is a risk management platform, then IT should be able to specify, down to the last datacenter floor tile, the resources needed for that one project alone - and its cost.
    • Project-based or zero-based budgeting will allow projects to come online faster and more predictably, reusing existing platforms where they can take the load and supporting the continuous workload consolidation paradigm.
    • Financing enterprise architecture projects that put in place the enabling conditions for faster turnaround of critical revenue-focused/margin-increasing project activity.

Engineered Systems are already using the technologies that the rest of the industry is trying to re-package to meet the challenges customers are facing now and in the coming years. The lead is not just in the technology but also in the approach customers are demanding - specific investments balanced with specific revenue-generating, high-yield business returns.

As a CIO it is important to recognize the value that Engineered Systems and the SPARC platform bring, as part of an overall datacenter landscape, in addressing key business requirements and in simplifying the overall datacenter challenge and its large CAPEX requirements.

As Oracle and others proceed to acquire or organically develop new capabilities in customer-facing technologies and in managing exabyte-scale data sets, it becomes strategically important to understand how such workloads can be dealt with.

Hardware alone is not the answer. Operating systems, applications and the hardware all need to be able to deal with big thinking and big strategy. By creating balanced designs that can then scale out, a consistent and effective execution strategy can be managed at the CIO/CTO/CFO level, ensuring that the business is not hindered but encouraged to the maximum by removing barriers that IT may well have propagated when the state of the art was many years younger.

Engineered Systems enable and weaponize the datacenter to directly handle the real-time enterprise. High-end operating systems such as Solaris, and the SPARC processor roadmap, are dealing with the notions of terabyte datasets, millions of execution threads, and thousands of logical domains each hosting hundreds of zones (virtual machines) per purchased core.

Simply carving up a physical server's resources to make up for the deficiencies of operating systems and applications in dealing with workloads can't be an answer by itself. This is partly what is fueling Platform-as-a-Service strategies. The real question is how to get systems working cooperatively - handling more of the same workload (e.g. database access or web content for millions of users) or spreading different workloads across systems transparently!

High-performance computing has been doing just this with stunning results, albeit at extreme cost and for limited workloads. Engineered Systems are facilitating this thinking at scale, with relatively modest investment for the workloads being supported.

It is this big thinking from organizations such as Oracle, which are used to dealing with petabytes of data and millions of concurrent users, that can fulfill the requirements expressed by the CIO/CTO/CFO teams. If millions of users needing web/content/database/analytics/billing can be serviced per square meter of datacenter space, why not do it?


The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by Oracle and does not necessarily reflect the views and opinions of Oracle.

Datacenter Wisdom - Engineered Systems Must Be Doing Something Right! (Part 1 - Storage Layer)

Looking back over the last 2 years or so, we can start to see an emerging pattern of acquisitions and general IT industry maneuvering that suggests customer demand and the packaging of technological capability for specific workloads are more in alignment than ever.

I wanted to write a couple of blogs to capture this in the context of the datacenter and of wider Oracle Engineered Systems penetration.

I will start with the storage layer, as that seems to have seen tremendous change in the last 6 months alone, although the pattern was already carved out in the early Oracle Exadata release in April 2010 (nice blog on this from Kerry Osborne - Fun with Exadata v2) with its innovative bundling of commodity hardware with specialized software capabilities.

Questioning "Datacenter Wisdom"

As you may know, Oracle's Exadata v2 represents a sophisticated blend of balanced components for the tasks undertaken by the Oracle Database, whether it is being used for high-transaction OLTP or for long-running, query-intensive data warehousing. Technologies include:

  • Commodity x86 servers with large memory footprints or high core counts for database nodes
  • x86 servers / Oracle Enterprise Linux for Exadata storage servers
  • Combining simple server based storage in clusters to give enterprise storage array capabilities
  • QDR (40Gbps) Infiniband private networking
  • 10GbE Public networking
  • SAS or SATA interfaced disks for high performance or high capacity
  • PCIe Flash cards
  • Database workload stacking as a more effective means than simple hypervisor-based virtualization for datacenter estate consolidation at multiple levels (storage, server, network and licensed cores)

Binding this together are the Oracle 11gR2 enterprise database platform, Oracle RAC database cluster technology (allowing multiple servers to work in parallel on the same database) and the Exadata Storage Server (ESS) software, which supports the enhancements for intelligent caching of SQL result sets, offloading of queries to storage, and storage indexes. There is a great blog from Kevin Closson - Seven Fundamentals Everyone Should Know about Exadata - that covers this in more detail.
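
To make the offload idea tangible, here is a toy model in Python. It is purely conceptual - it in no way resembles the actual ESS implementation - but it shows the principle that filtering happens where the data lives, so only matching rows cross the interconnect:

    # Toy model of query offload: each "storage cell" filters locally and the
    # database tier only receives rows that survive the predicate.
    storage_cells = [
        [{"id": i, "region": "EMEA" if i % 3 else "APAC", "amount": i * 10}
         for i in range(cell * 1000, cell * 1000 + 1000)]
        for cell in range(3)
    ]

    def cell_scan(rows, predicate):
        """Each 'cell' applies the predicate locally (the offload step)."""
        return [r for r in rows if predicate(r)]

    matches = []
    for rows in storage_cells:
        matches.extend(cell_scan(rows, lambda r: r["region"] == "APAC" and r["amount"] > 5000))

    print(f"Rows returned to the database tier: {len(matches)} of "
          f"{sum(len(rows) for rows in storage_cells)} stored")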

Looking at the IT industry we see:

  • EMC/Isilon acquisition that marries multiple NAS server nodes to an Infiniband fabric for scale-out NAS - indicating that Infiniband has a significant role to play in binding loosely connected servers for massive scalability.
  • EMC/Data Domain and Spectra Logic showing that tape is not in fact dead, as many are predicting, and that it remains an extremely low-cost medium for petabyte-scale storage.
  • Embedding flash storage (SSD or PCIe-based) into servers, closer to the workload than going across the SAN/LAN wires to an enterprise storage array - showing that local flash across a distributed storage-node fabric is far more effective than SAN storage for many enterprise workloads.
  • EMC and NetApp using flash intelligently as a cache, rather than as a straight replacement for spinning disk, significantly enhancing certain workloads - as we see in EMC's VFCache implementation and NetApp's intelligent caching.
  • Monolithic SAN-attached arrays moving towards modular, scalable arrays - supporting the approach taken by Oracle's Pillar Axiom, which scales I/O, capacity and performance independently using smaller intelligent nodes. EMC is doing this with VMAX engines, NetApp with its GX (Spinnaker) architecture, and even IBM is going that way.

All these trends - and it is not so important in what chronological order they happened, or that I took examples from leaders in their fields - clearly indicate a convergence of technological threads.

I often hear from clients that Exadata is too new, uses strange Infiniband bits and has no link to a SAN array. Well, clearly the entire industry is moving that way. Customers are indicating with their voices what they would like to have - capability and simplicity for the workloads that drive their revenue.

Why is this important for the CIO?

CIOs are typically confronted with a range of technologies to solve a limited array of challenges. They are constantly asked by the business, and more recently by CFOs, to make sure that they are:

  • using future-proofed technologies,
  • simplifying vendor management,
  • focusing investment on activities that support revenue streams,
  • aligning IT with the business!

Well, Engineered Systems are exactly that. Oracle literally went back to the drawing board and questioned why certain things were done in certain ways in the past, and what direct benefit they provided to clients.

Engineered systems are already using the technologies that the rest of the industry is trying to re-package to meet the challenges customers are facing now and in the coming years.

Oracle, I believe, has at least a 2-year advantage in that it has:

  • learnt from the early stages of the market,
  • fine-tuned its offerings,
  • aligned support with the requirements of such dense capability blocks,
  • helped customers come to grips with such a cultural change,
  • continued to add to its "magic sauce", still engineering the best of commodity hardware to further increase the value-add of Engineered Systems.

The lead is not just in the technology but also in the approach customers are demanding - specific investments balanced with specific revenue-generating, high-yield business returns.

As a CIO it is important to recognise the value that Engineered Systems bring in addressing key business requirements and in simplifying the overall datacenter challenge and its large CAPEX requirements.

Engineered Systems provide the ability for IT to transform itself providing directly relevant Business Services.

This is not a general-purpose approach in which the IT organisation can merely hope for transformation - Engineered Systems enable and weaponise the datacenter to directly fulfill requirements expressed by the CIO team through intense, constant dialogue with business leaders!


The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by Oracle and does not necessarily reflect the views and opinions of Oracle.

Inflexibility of Datacenter Culture - 'The way we do things round here' & Engineered Systems

With a focus on large enterprise and service provider datacenter infrastructures, I get the chance to regularly meet with senior executives as well as rank-and-file administrators - a top-to-bottom view.

One of the things that has always struck me as rather strange is the relatively "inflexible" nature of datacenters and their management operations.

As an anecdotal example, I recall one organization with a heavy focus on cost cutting. At the same time, the datacenter management staff decided to standardize on all-grey racks from the firm Rittal. Nothing wrong there - a very respectable vendor.

The challenge arising was:

  • The selected Rittal racks at that time cost approximately 12,000 Euro each
  • The racks that came from suppliers such as Dell, Fujitsu, Sun etc. cost around 6,000 Euro each

See the problem? A 50% saving thrown literally out of the window because someone wanted all-grey racks. When we are talking about a couple of racks, that is no big deal. With, say, 1,000 racks we are looking at an overspend of 6 million Euro - before anything has even been put into those racks (the arithmetic is spelled out below)!
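
The arithmetic, stated explicitly (figures as quoted in the anecdote above):

    # Rack overspend from the anecdote - figures as quoted, per rack, in Euro.
    PREMIUM_RACK = 12_000   # standardized specialist enclosure
    VENDOR_RACK = 6_000     # rack bundled by the server vendors
    RACKS = 1_000

    overspend = (PREMIUM_RACK - VENDOR_RACK) * RACKS
    print(f"Overspend across {RACKS} racks: {overspend:,} Euro")   # 6,000,000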

Upon asking why it was necessary to standardize the racks, the answers I got were:

  • independence from server manufacturers
  • create rack and cabling rows before servers arrive to facilitate provisioning
  • simpler ordering
  • perceived "better-ness", as enclosures are a Rittal specialization

Sounds reasonable at first glance - until we see that racks are all engineered to support certain loads and are typically optimized for what they will eventually contain. Ordering was also not really simpler, as the server vendors had already made that a "no-brainer". The perception of better quality was never validated either - it was just a gut feel.

The heart of the problem, as came out later, was the belief that the datacenter would benefit from having everything homogeneous. Homogeneous = standardized, in the eyes of the datacenter staff.

The problem is that such datacenters are not flexible at all: the focus on homogeneity ultimately costs the business financing them a great deal of money.

In an age where flexibility and agility means the literal difference between life and death for an IT organization, it is incumbent on management to ensure that datacenter culture allows the rapid adoption of competitive technologies within the datacenter confines.

Standardized does not mean that everything needs to be physically the same. It means having processes for dealing with change in such a way that change can be effected quickly, easily and effectively to derive key value!

I have previously noted the recent trend of CIOs reporting to CFOs; that reporting line would have provided financial stewardship and accountability in this case - getting staff and managers to really examine their decision in the light of what was best for the organization.

Questioning "Datacenter Wisdom"

The focus on homogeneous environments has become so strong that every decision is reduced to a single one-size-fits-all answer. This happens in cycles in the industry: we had mainframes, then Unix, then Linux (which is basically Unix for x86), then Windows - and the latest is VMware vSphere on all-x86!

Don't get me wrong here - as a former EMC employee - I have the greatest respect for VMware and indeed the potential x86 cost savings.

What is a concern is when this is translated into "strategy". In this case the approach has been selected without understanding why! It is a patch to cover profligate past spending, in the hope that all possible issues will magically be solved.

After all, it is homogeneous. All x86, all VMware, all virtualized. Must be good - everyone says it is good!

See what I mean - the thinking and strategizing has been pushed to the side. Apparently there is no time to do that. That is really hard to believe, as this is one of those areas that fall squarely into the CIO/CTO/CFO's collective lap.

There are other approaches, and indeed they are not mutually exclusive. Rather they stem from understanding underlying challenges - and verifying if there are now solutions to tackle those challenges head-on.

Why is this important for the CIO?

At a time of crisis and oversight, it is incumbent on the CIO to question the approach put on his/her table for Datacenter Infrastructure transformation.

The CIO has the authority to question what is happening in his/her turf.

At countless organisations I have performed strategic analysis of macro challenges, mapped against the organization's IT infrastructure capability to deal with them. Time and again, in discussions with the techies and managers (who were from a technical background but seemed to struggle with strategy formulation itself), it turned out that the marginal differences between technologies were not enough to justify the additional expenditure - or that there were other approaches entirely.

Engineered Systems, in Oracle parlance, are one such challenge. They do not "fit" the datacenter culture. They can't be taken apart and distributed into whatever slots happen to be free in racks spread over the datacenter.

From a strategy perspective, a great opportunity is not being exploited here. Engineered Systems such as Exadata, Exalogic, SPARC SuperCluster, Exalytics and the Oracle Database Appliance represent a chance to change datacenter culture and indeed make the whole datacenter more flexible and agile.

They force a mindset change - that the datacenter is a housing environment containing multiple datacenters within it. Those mini-datacenters each represent a unique capability within the IT landscape. They just need to be brought in and cabled up to the network, power, cooling and space capabilities of the housing datacenter.

There are other assets like this in the datacenter already - enterprise tape libraries providing their unique capability to the entire datacenter. Nobody tries to take a tape drive or cartridge out and place it physically somewhere else!

Engineered Systems are like that too. Take Exadata as an example: it is clearly assembled and tuned to do database work with the Oracle Database 11gR2 platform, and to do it extremely well. It breaks down some of the traditional barriers between data warehouse and OLTP workloads and indeed allows database workloads to be "stacked".

Taking the idea of what a datacenter really should be (facilities for housing and running IT infrastructure) and being flexible about it, an Exadata should literally be placed on the floor and cabled to the main LAN and power conduits - and the database infrastructure platform is in place. After that, databases can be created in this mini-datacenter. The capability is available almost immediately.

Contrast this with creating rows of racks without knowing what will go into them, putting VMware everywhere, adding lots of SAN cabling because I/O will always be an issue - and then spending ages tuning performance to make sure it all works well.

The CIO should identify this as a waste of time and resources. These are clever people who should be doing clever things for the benefit of the organisation. It is the difference between buying a car to drive around in and having the techies buy all the components and try to assemble one themselves.

Forgoing the value inherent in Exadata by taking an x86/hypervisor route - creating many virtual machines, each running a separate database - makes no real sense in the case of databases (a rough illustration of the overhead follows).
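
A hedged sketch of why "one VM per database" carries overhead that stacked databases avoid. Every figure here is a hypothetical placeholder chosen only to illustrate the shape of the argument:

    # Hypothetical memory arithmetic: per-VM OS overhead vs. stacked databases.
    DATABASES = 50
    VM_OS_OVERHEAD_GB = 4    # hypothetical guest OS + hypervisor overhead per VM
    DB_MEMORY_GB = 16        # hypothetical memory each database instance needs

    vm_route = DATABASES * (VM_OS_OVERHEAD_GB + DB_MEMORY_GB)
    stacked_route = DATABASES * DB_MEMORY_GB   # instances share one OS image

    print(f"{DATABASES} databases as individual VMs:    {vm_route} GB RAM")
    print(f"{DATABASES} databases stacked on a platform: {stacked_route} GB RAM")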

The CIO can use valuable organizational knowledge gained over many years regarding the functions the business needs. If, as in this example, it is the ability to store/retrieve/manage structured information at scale - the answer should literally be to bring in that platform and leverage the cascading value it provides to the business.

Neither x86 nor a particular operating system is "strategically" relevant - what matters is the platform - and the normal DBAs can manage it using tools they already know. This mini-datacenter concept can be used in extremely effective ways and supports the notion of continuous consolidation.

CIOs can get very rapid quick wins for an organization in this way. Datacenter infrastructure management and strategy should be considered in terms of bringing in platforms that do their job well, with software and hardware tuned to run together. Further, such platforms should reduce the "other" assets and software that are needed.

Exadata does exactly that by not needing a SAN switch/cabling infrastructure - it encapsulates the paradigms of virtualization, cloud and continuous consolidation. This will drive deep savings and allow value to be derived rapidly.

Challenge datacenter ideas and culture in particular. Agility requires being prepared to change things and being equipped to absorb change quickly!


The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by Oracle and does not necessarily reflect the views and opinions of Oracle.