Exadata

In-Memory Computing (IMC) Cloud - so what about the Memory?

There's been a lot of talk in 2013 about in-memory computing (IMC), with Gartner indicating strategic significance in 2012. Very little has been said about the memory needed for IMC!

IMC is claimed to be "new", "radical", "never before done in the industry" etc. Much of this has come from SAP's HANA marketing, amongst others. The discussion is relevant to the whole IT industry and to all workloads, whether delivered locally or through IMC-enabled Clouds!

Larry Page proposed holding the entire Internet in memory back in 2000 at the Intel Developer Forum, and Google went on to pursue exactly that idea - with only around 2,400 computers in its datacenter at the time!

The industry has responded in kind - by pointing out that large amounts of memory have been available in platforms for nigh on a decade. Indeed, the latest Oracle Exadata X3-8 Engineered System has 4TB of RAM and 22TB of PCIe flash - non-volatile memory.

[Image: Exadata architecture]

So IMC is not new in the sense SAP and others would have you believe. It is a natural evolution: economies of scale bringing the price per GB down, accompanied by innovations in speed and capacity.

A purist approach based on DRAM alone (nanosecond access) carries a vast cost premium today over NAND flash (microseconds - roughly 1,000x slower) and spinning disk (milliseconds) technologies - and DRAM is volatile, so the data is gone on a power cycle! Economically speaking, a hybrid approach has to be taken on the road to the IMC-Cloud!
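To make that economic argument concrete, here is a back-of-the-envelope sketch in Java. All prices and latencies are rough, assumed figures for illustration only, not vendor quotes:

```java
// Illustrative arithmetic only - all figures are assumptions, not quotes.
public class MemoryTierCost {
    public static void main(String[] args) {
        double datasetGb = 10_000;                                  // a 10TB working set

        // Rough price per GB and access latency per tier (assumed, 2013-era).
        double dramUsdPerGb  = 8.00,  dramLatencyNs  = 100;         // ~100ns
        double flashUsdPerGb = 1.00,  flashLatencyNs = 100_000;     // ~100us
        double diskUsdPerGb  = 0.05,  diskLatencyNs  = 5_000_000;   // ~5ms

        System.out.printf("All-DRAM:     $%,.0f%n", datasetGb * dramUsdPerGb);
        System.out.printf("All-flash:    $%,.0f%n", datasetGb * flashUsdPerGb);
        System.out.printf("All-disk:     $%,.0f%n", datasetGb * diskUsdPerGb);

        // Hybrid tiering: keep the hot 10 percent in DRAM, the rest on flash.
        double hybrid = datasetGb * 0.10 * dramUsdPerGb + datasetGb * 0.90 * flashUsdPerGb;
        System.out.printf("Hybrid 10/90: $%,.0f%n", hybrid);

        System.out.printf("Flash is ~%.0fx slower than DRAM, disk ~%.0fx slower.%n",
                flashLatencyNs / dramLatencyNs, diskLatencyNs / dramLatencyNs);
    }
}
```

The point is not the exact numbers but the orders of magnitude: the hybrid tier costs a fraction of all-DRAM while keeping the hot data at DRAM speed.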

Amongst the characteristics facilitating wholesale transformation to full IMC (hardware, software, application architectures) are:

  1. Performance - nanosecond to microseconds as DRAM/Flash currently
  2. Capacity - Terabytes initially and then Petabytes, as Flash/Disk currently
  3. Volatility - non-volatile on power cycling much in the same way as Flash/Disk today
  4. Locality - as close as possible to the CPU, but needs to be manageable at cloud scale!

Individually, each of these characteristics has been achieved. Combined, they are technically challenging. Many promising technologies are evolving that aim to solve this quandary and change the face of computing as we know it forever. Basically: massive, low-power, non-volatile RAM!

Advances are being made in all areas:

  • HMC - Hybrid Memory Cube, using stackable DRAM chips to deliver 320GB/s of throughput (vs. DDR3 maxing out at 24GB/s) in 90% less space with 70% less energy. Still volatile though!
    [Image: HMC stacked DRAM layers]
  • Phase-change memory (PCM/PRAM), producing non-volatile RAM. Micron already had this shipping in 1Gb chips in 2012. It does not require the erase-before-rewrite cycles of Flash, so it is potentially much faster; current speeds are around 400MB/s.
  • HDIMM - Hybrid DIMMs, combining high-speed DRAM with non-volatile NAND (Flash) storage. Micron (with DDR4) and Viking Technology (DDR3 NVDIMM) have these technologies, with latencies of 25 nanoseconds.
    [Image: Hybrid DIMM]
  • NRAM - carbon nanotube based non-volatile RAM. Nantero and Belgium's IMEC are working jointly to create this alternative to DRAM, scaling below 18nm feature sizes. Stackable like HMC, and non-volatile.
    [Image: NRAM carbon nanotube memory]
  • Graphene (a single layer of carbon atoms) based non-volatile RAM, such as the 2013 efforts in Lausanne, Switzerland.
    [Image: MoS2/graphene non-volatile memory cell]

Which of these will drive future architectures remains to be seen in price/capacity/performance-sensitive markets. Post-silicon options based on carbon nanotube or graphene technology would of course power the next wave of compute as well as memory structures.

Questioning "Accepted Wisdom"

So In-Memory Computing is coming - it has been coming for the last decade or so. Are datacenter infrastructure and application architectures keeping pace, or at least preparing, for this "age of abundance" in non-volatile RAM?

In discussions with IT folks and industry colleagues there is a clear focus on procurement at low price points, with IT simply saying "everything is commodity"! This is like saying two cars of identical make/model/engine with different engine-management software are the same - one clearly performs better than the other! The software magic in these hardware and software stacks makes them anything but commodity.

Many IT shops still think a centralized array of storage is the only way to go. They basically change media within the array to change storage characteristics - 7.2/10/15K RPM spinning disks to SSD drives. That is where their thinking essentially stops!

This short-term thinking will effectively result in either the next wave of infrastructure and application sprawl OR a revolution through IMC-Cloud-enabled vectors that turns IT on its collective head.

The traditional model would simply be too slow for IMC Clouds. Some clear trends are emerging that indicate how CIOs, CTOs and CFOs can prepare for the IMC-based datacenters of the future, drastically increasing capability while changing the procurement equation:

  • Modular storage containers located close to processor & RAM driving the move away from islands of central/massive/SAN infrastructures.
  • Internetworking needs to be far faster to leverage IMC capabilities. Think 2013 for 40Gbps Ethernet/Infiniband now. Think 2016 for PCIe 4.0 at 512Gb/s (x16 lanes, duplex). That speed is needed at least at the intersection points of compute/RAM/storage!
  • Entire platforms engineered (hardware and software optimized) together. It is simply not worth focusing all IT effort on individual best-of-breed components when "the whole needs to be greater than the sum of its parts"!
  • Backup architectures need to keep up. Tape remains a cost-effective medium for inactive and backup data sets, particularly when in the open source Linear Tape File System (LTFS) format. There is a great blog on that from Oracle-StorageTek's Rick Ramsay.
  • Application architectures need to move away from bottleneck-resolution thinking! Most developers do not yet know what to do with terabytes of RAM. Applications need to adopt massively parallel patterns where possible so data can be delivered in real time - see the sketch after this list.
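
As a minimal illustration of the parallel-pattern point above, the sketch below fans a scan of an in-memory data set out across all available hardware threads instead of funnelling it through one. It is a generic Java sketch, not tied to any particular product:

```java
import java.util.concurrent.*;

// Minimal sketch: spread work over all hardware threads against an in-memory
// data set, rather than processing it on a single bottleneck thread.
public class ParallelAggregate {
    public static void main(String[] args) throws Exception {
        int threads = Runtime.getRuntime().availableProcessors();  // e.g. 128 on a big SMP
        long[] data = new long[10_000_000];                        // stand-in for an in-memory table
        for (int i = 0; i < data.length; i++) data[i] = i % 1_000;

        ExecutorService pool = Executors.newFixedThreadPool(threads);
        CompletionService<Long> results = new ExecutorCompletionService<>(pool);
        int chunk = data.length / threads;

        for (int t = 0; t < threads; t++) {                        // one slice per thread
            final int from = t * chunk;
            final int to = (t == threads - 1) ? data.length : from + chunk;
            results.submit(() -> {
                long sum = 0;
                for (int i = from; i < to; i++) sum += data[i];
                return sum;
            });
        }

        long total = 0;
        for (int t = 0; t < threads; t++) total += results.take().get();
        pool.shutdown();
        System.out.println("total = " + total + " using " + threads + " threads");
    }
}
```

The same pattern - partition the data, work on every partition concurrently, merge the partial results - is what lets terabyte-scale in-memory data sets actually be consumed in real time.
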
2013-2016 will see the strong rise of non-volatile memory technologies and architectures. CIOs and CTOs should be thinking about how they will leverage these capabilities. IT philosophers need to discuss and map out the implications before handing over to Enterprise Architects to enact.

Why is this important for the CIO, CTO & CFO?

Simple server consolidation has had its day! Most IT shops have used server virtualization in one form or another, and the early fast returns are almost exhausted. Continuous workload consolidation needs to take center stage again - think thousands of workloads per server, not 20-40!

Private IMC-Clouds provide an ability for CIOs to keep in-house IT relevant to the business.

CTOs should be thinking about how IMC-Clouds can power the next wave of innovative applications-services-products in an increasingly interconnected always-on manner. Scaling, performance, resiliency to failure should be designed into application platforms - NOT applications themselves. Fast moving application development can then proceed without recreating these features in every app.

For the CFO, IMC-enabled Private Clouds represent a dramatic lowering of all costs associated with IT to the business. Consolidating massive chunks of datacenter infrastructure, decommissioning datacenters and simplifying the constant demands for more performance/capacity will allow CFOs to free trapped financial value for direct use by the business. Tech-refresh cycles may need to be shortened to bring this vision to fruition earlier!

IMC-enabled Clouds, combined with Intelligent Storage, will allow fundamental transformations to take place at a pace exceeding even that of hyper-Cloud providers such as Amazon. Business IT can choose to transform.

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by my current employer and does not necessarily reflect the views and opinions of my employer.

Storage Intelligence - about time!

I was recently reading an article about Backblaze releasing its storage pod design - a 180TB NAS device in 4U! Absolutely huge! A 42U rack would hold around 1.8 petabytes.

[Image: Backblaze storage pod]

When thinking about Petabytes, one thinks about the big players in storage, EMC/NetApp/HDS, selling tens of storage racks covering substantial parts of the datacenter floor space and offering a fraction of this capability.

[Images: EMC VMAX and NetApp FAS arrays]

Clearly, the storage profile offered by the large monolithic enterprise arrays is different. However, Backblaze clearly highlights how easily and cheaply conventional "dumb" storage can be had! Packing some flash cache or SSD in front would already bring these boxes to a similar I/O capability ;-)

This makes the case that storage per se is not really a challenge anymore. However, making storage contribute to the overall performance equation - making sure that storage accelerates specific workloads - is going to be critical going forward. Basically: Intelligent Storage!

Questioning "Accepted Wisdom"

Many IT shops still think of storage as a separate part of their estate: it should simply store data and hand it back rapidly when asked - politely. The continued stifling of innovation in datacenters, caused by having a single answer to every question - namely VMware/hypervisors and server virtualization - stops the innovative thinking that could actually help an organisation accelerate the parts of the application landscape that generate revenue.

Some questions that came to mind, and that are echoed by clients, are:

  • Disk is cheap now. SSD answers my performance needs for storage access. Is there something that together with software actually increases the efficiency of how I do things in the business?

  • For whole classes of typical applications - structured data persistence pools, web servers etc. - what would "intelligent" storage do for the physical estate and for the business consumers of this resource?

  • How can enterprise architecture concepts be overlaid to intelligent storage? What will this mean to how future change programmes or business initiatives are structured and architected?

  • How can current concepts of intelligent storage be used in the current datacenter landscape?

We are seeing the first impact of this type of thinking in the structured data / database world. By combining the database workload with storage, and through software enablement, we get intelligent acceleration of store/retrieve operations. This is very akin to having map-reduce concepts within the relational database world.

Further combining storage processing with CPU/RAM/networking offload of workload-specific storage requests facilitates unprecedented scale-out, performance and data compression capabilities.
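
As a conceptual illustration of that offload idea (a sketch only, with invented class names - this is not the Exadata Storage Server implementation), each storage node below evaluates the predicate where the data lives and returns only the matching rows to the database node:

```java
import java.util.*;
import java.util.concurrent.*;
import java.util.function.LongPredicate;

// Conceptual sketch of query offload: each storage node filters locally and
// ships back only matching rows, instead of returning every block to the
// database node to be filtered there. Illustrative names and structure only.
public class OffloadSketch {

    static class StorageCell {
        final long[] column;                          // one column of one table slice
        StorageCell(long[] column) { this.column = column; }

        long[] scan(LongPredicate predicate) {        // filter where the data lives
            return Arrays.stream(column).filter(predicate).toArray();
        }
    }

    public static void main(String[] args) throws Exception {
        Random rnd = new Random(42);
        List<StorageCell> cells = new ArrayList<>();
        for (int c = 0; c < 3; c++) {                 // three storage nodes
            cells.add(new StorageCell(rnd.longs(1_000_000, 0, 10_000).toArray()));
        }

        LongPredicate hot = v -> v > 9_990;           // e.g. WHERE amount > 9990

        ExecutorService pool = Executors.newFixedThreadPool(cells.size());
        List<Future<long[]>> results = new ArrayList<>();
        for (StorageCell cell : cells) {
            results.add(pool.submit(() -> cell.scan(hot)));
        }

        long returned = 0;
        for (Future<long[]> f : results) returned += f.get().length;
        pool.shutdown();

        System.out.printf("storage layer scanned %,d rows, returned %,d to the database node%n",
                cells.size() * 1_000_000L, returned);
    }
}
```

Only a tiny fraction of the rows ever crosses the interconnect - that, conceptually, is where the scale-out and bandwidth savings come from.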

Oracle's Engineered Systems - the Exadata Database Machine in particular - represent this intelligent storage concept, amongst other innovations, for accelerating Oracle database workloads.

These workload specific constructs foster security of precious data assets, physically and logically. This is increasingly important when one considers that organisations are using shared "dumb" storage for virtual machines, general data assets and application working data sets.

In the general marketplace, other vendors (IBM PureSystems + DB2, Teradata, SAP HANA etc.) are starting to use variations of these technologies for intelligent storage. The level of maturity varies dramatically, with Oracle having a substantial time advantage as first mover.

2013-2015 will see more workload focused solutions materializing, replacing substantial swathes of datacenter assets built using the traditional storage view.

Why is this important for the CIO, CTO & CFO?

Intelligent, workload-focused storage solutions are allowing CIOs/CTOs to do things that were not easily implemented with solutions based on server virtualization technology and shared monolithic storage arrays - dumb storage - such as the VMware-enabled VCE Vblock and HP CloudSystem Matrix, which are effectively IaaS-only solutions.

Workload specific storage solutions are allowing much greater consolidation ratios. Forget the 20-30-40 Virtual Machines per physical server. Think 100s of workloads per intelligent construct! An improvement of 100s of percent over the current situation!

It is important to verify how intelligent storage solutions can be a part of the CIO/CTO's product mix to support the business aspirations as well as simplify the IT landscape. Financing options are also vastly simplified with a direct link between business performance and physical asset procurement/leasing:

  • Intelligent storage removes architectural storage bottlenecks and really shares the compute/IO/networking load more fully.

  • Intelligent storage ensures those workloads supporting the business revenue generating activities are accelerated. Acceleration is linked to the cost of underlying storage assets. As cost of NAND flash, SSDs and rotating disks drop, more is automatically brought into the storage mix to reduce overall costs without disrupting the IT landscape.

  • Greater volumes of historic data are accessible thanks to context-sensitive, workload-specific data compression. Big data analytics can be powered from here, as well as enterprise data warehouse needs. This goes beyond simple static storage tiering and deduplication technologies that are unaware of WHAT they are storing (see the sketch after this list)!
  • Workload-specific stacking supports much higher levels of consolidation than simple server virtualization. The positive side effects of technologies such as Exadata include rationalization of the datacenter workload estate in terms of variety and operating systems, leaving a net-net healthier estate overall. This means big savings for the CFO!
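
To illustrate why storage that knows WHAT it is storing compresses so much better than generic, content-blind techniques, here is a tiny sketch with hypothetical data and names: a low-cardinality column, held column-wise and clustered, collapses into a handful of value/run-length pairs.

```java
import java.util.*;

// A low-cardinality column stored column-wise: once sorted (or clustered),
// run-length encoding reduces a million values to a handful of runs.
// Block-level deduplication of row-formatted data never sees these runs.
public class ColumnarCompressionSketch {
    public static void main(String[] args) {
        String[] codes = {"CH", "DE", "FR", "UK", "US"};   // assumed country codes
        int rows = 1_000_000;
        String[] column = new String[rows];
        Random rnd = new Random(1);
        for (int i = 0; i < rows; i++) column[i] = codes[rnd.nextInt(codes.length)];

        // Workload-aware path: cluster the column, then run-length encode it.
        Arrays.sort(column);
        List<String> runs = new ArrayList<>();
        int runLength = 1;
        for (int i = 1; i <= rows; i++) {
            if (i < rows && column[i].equals(column[i - 1])) {
                runLength++;
            } else {
                runs.add(column[i - 1] + " x " + runLength);
                runLength = 1;
            }
        }

        System.out.println(rows + " column values stored as " + runs.size() + " runs: " + runs);
    }
}
```

A real engine applies far more sophisticated, context-sensitive encodings, but the principle is the same: understanding the data is what unlocks the compression ratio.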

Intelligent storage within vertically engineered, workload-specific constructs - what Gartner calls Fabric-Based Infrastructure - presents a more cogent vision of optimizing the organization's IT capability. It provides a clearer understanding of how precious funding from CFOs is invested in the programmes necessary for the welfare of the concern.

CIO/CTOs still talking about x86 and server virtualization as the means to tackle every Business IT challenge would be well advised to keep an eye on this development.

Intelligent storage will be a fundamental part of the IT landscape allowing effective competition with hyperscale Cloud Providers such as Google/Amazon and curtailing the funding leakage from the business to external providers.

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by my current employer and does not necessarily reflect the views and opinions of my employer.

A Resurgent SPARC platform for Enterprise Cloud Workloads (Part 2) - SPARC T5

Some time ago, I blogged about the resurgence of the SPARC platform. The then newly designed SPARC T4 was showing tremendous promise in its own right, able to take up the platform's former mantle of innovation leader for extreme workloads running the Solaris 11 operating system.

Indeed, it was used as the driving engine of the SPARC SuperCluster: not just massive acceleration of Oracle database workloads using the Exadata Storage Cell technology, but also the ability to combine firmware-embedded, near-zero-overhead virtualization - electrically separate logical domains carving up the physical hardware - with Solaris Zones, which provide near-native "virtual machines" sharing an installed Solaris operating system.

Up to 128 virtual machines (zones) are supported on a system - a vast improvement over the 20-30 one typically gets under VMware-like hypervisors!

This welcome addition to the wider Oracle Engineered Systems family allowed the missing parts of the datacenter to be consolidated - parts typically glossed over or skipped entirely when virtualization with VMware-like hypervisors was discussed. Customers were aware that their mission-critical workloads could not always run on an x86 platform, whose performance was then reduced further by a hypervisor, when large data sets had to be manipulated.

Well, the rumor mills have started in the run-up to Oracle OpenWorld 2012 at the end of September. One of the interesting areas is the "possible" announcement of the SPARC T5 processor. This is interesting in its own right, as we have steadily seen the SPARC T4, and now the T5, gain ever greater capability embedded in silicon to drive database consolidation, the entire WebLogic middleware stack, and high-end vertical applications such as SAP, E-Business Suite, Siebel CRM and so on.

Speculating on these "rumors" and the public Oracle SPARC roadmap, I'd like to indicate where I see this new chip making inroads into extreme cloud workload environments whilst maintaining the paradigm of continuous consolidation. This paradigm, which I outlined in a blog in 2010, is still very relevant - the SPARC T5 provides alternative avenues to simply following the crowd onto x86.

Questioning "Datacenter Wisdom"

According to the roadmap, the new SPARC T5 will include the following features and technologies:

  • Increasing System-on-a-Chip (SOC) orientation providing ever more enhanced silicon accelerators for offloading tasks that software typically struggles with at cloud scale. This combines cores, memory controllers, I/O ports, accelerators and network interface controllers providing a very utilitarian design.
  • 16 cores, up from the T4's 8. This takes the T5 right to the top end in core-count terms.
  • 8 threads per core - giving 128 threads of execution per processor and providing exceptional performance for threaded applications such as Java and indeed the entire SOA environment
  • Core speeds of 3.6GHz providing exceptional single threaded performance as well as the intelligence to detect thread workloads dynamically (think chip level thread workload elasticity)
  • Move to 28nm from 40nm - continuous consolidation paradigm being applied at silicon level
  • Crossbar bandwidth of 1TB/s (twice that of the T4) providing exceptional straight line scaling for applications as well as supporting the glueless NUMA design of the T5
  • Move to PCIe Generation 3 and 1TB/s of memory bandwidth using 1GHz DDR3 memory chips will start to provide the means to create very large-memory server configurations (think double-digit TB of RAM for all-in-memory workload processing)
  • QDR (40Gbps) Infiniband private networking
  • 10GbE Public networking
  • Database workload stacking becomes even more capable and effective than simple hypervisor based virtualization for datacenter estate consolidation at multiple levels (storage, server, network and licensed core levels)

This is really impressive at the processor level in itself, but the features on the roadmap aligned with the T5 are possibly the real crown jewels:

  • on-die crypto accelerators for encryption (RSA, DH, DSA, ECC, AES, DES, 3DES, Camellia, Kasumi) providing excellent performance through offloading. This is particularly relevant in multi-tenant Cloud-based environments (see the sketch after this list)
  • on-die message digest and hashing accelerators (CRC32c, MD5, SHA-1, SHA-224, SHA-256, SHA-384, SHA-512) providing excellent security offloading. Again particularly relevant in multi-tenant environments
  • on-die accelerator for random number generation
  • PCIe Generation 3 opens the door to even faster Infiniband networking (56Gbps instead of the current 40Gbps - with active-active links being possible to drive at wire speed)
  • Hardware based compression which will seriously reduce the storage footprint of databases. This will provide further consolidation and optimization of database information architectures.
  • Columnar database acceleration and Oracle Number acceleration will provide extremely fast access to structured information. Further, when combined with in-memory structures, the database will literally be roaring!
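
From the application side, such offload is meant to be transparent: code written against the standard Java JCE APIs stays the same, and whether SHA-256 or AES actually lands on an on-die accelerator depends on the JCE providers configured on the platform (for example a Solaris PKCS#11 provider) - a deployment assumption, not something the code below controls:

```java
import java.security.MessageDigest;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

// The application-side view of crypto offload: standard JCE calls, no special
// code. Whether these run on an on-chip accelerator is decided by the JCE
// provider configuration of the platform (assumed, not shown here).
public class CryptoSketch {
    public static void main(String[] args) throws Exception {
        byte[] payload = new byte[1 << 20];                   // 1MB of sample data

        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        byte[] digest = sha256.digest(payload);
        System.out.println("SHA-256 digest length: " + digest.length + " bytes");

        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(128);
        SecretKey key = keyGen.generateKey();

        Cipher aes = Cipher.getInstance("AES/CBC/PKCS5Padding");
        aes.init(Cipher.ENCRYPT_MODE, key);
        byte[] ciphertext = aes.doFinal(payload);
        System.out.println("AES ciphertext length: " + ciphertext.length + " bytes");
    }
}
```

The appeal in multi-tenant environments is that bulk encryption and hashing then stop being a tax on the application threads.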

Indeed, when we consider that the Exadata Storage Cells will also be enhanced to support new chip generations, higher flash density and other optimizations, the next SPARC SuperCluster (which embeds Exadata Storage Cells) will literally be one of the best-performing database platforms on the planet!

To ignore the new SPARC T5 (whenever it arrives) is to really miss a trick. The embedded technology provides true, sticky competitive advantage to anyone running a database workload or indeed multi-threaded applications. As a Java, middleware and SOA platform, as well as a vertical application platform, it is something the enterprise can seriously benefit from.

Why is this important for the CIO & CFO?

CIOs and CFOs are constantly being bombarded with messages from IT that x86 is the only way to go, that Linux is the only way to go, that VMware is the only way to go. As most CFOs will have noted by now:

  • Financially speaking, x86 servers may have been cheaper per unit, but the number of units needed to get the job done is so large that any financial advantage that might have been there has evaporated!
  • Overall end-to-end costs for the services that the CIO/CFO signed off on are never really well calculated for the current environment.
  • Investment should be focused on the activities that support revenue streams, and on technologies that will continue to do so for at least the next decade (with capacity upgrades, of course).
  • There must be other ways of doing things that make life easier and more predictable

Well, Engineered Systems with the new SPARC T5 represent a way for the CIO/CFO to power the projects that need investment and that in turn drive revenue and value. The ability to literally roll in a SPARC SuperCluster, or any other Engineered System, is going to be instrumental in:

  • Shortening project cycles at the infrastructure level
    • don't lose 6 months on a critical ERP/CRM/Custom application project in provisioning hardware, getting unexpected billing for general infrastructure layers such as networking that have nothing to do with this project, IT trying to tune and assemble, getting stuck in multi-vendor contract and support negotiations etc.
    • That time can be literally worth millions - why lose that value?
  • Concentrate valuable and sparse investment strategies literally to the last square meter in the datacenter!
    • If the next project is a risk management platform, then IT should be able to give that one project, down to the last datacenter floor tile, exactly the resources it needs - and account for the cost accordingly
    • Project-based or zero-based budgeting will allow projects to come online faster and more predictably, reusing existing platforms to handle the load, as well as supporting continuous workload consolidation paradigms
    • Finance enterprise architecture projects that put in the enabling conditions to support faster turnaround for critical revenue focused/margin increasing project activity

Engineered Systems are already using the technologies that the rest of the industry is trying to re-package to meet the challenges customers are facing now and in the coming years. The lead is not just in technology but also in the approach customers are demanding - specific investments balanced with specific revenue-generating, high-yield business returns.

As a CIO it is important to recognize the value that Engineered Systems and the SPARC platform, as part of an overall datacenter landscape, bring in addressing key business requirements and ensure an overall simplification of the Datacenter challenge and large CAPEX requirements in general.

As Oracle and others proceed to acquire or organically develop new capabilities in customer-facing technologies and in managing exabyte-scale data sets, it becomes strategically important to understand how these can be dealt with.

Hardware alone is not the answer. Operating systems need to be able to deal with big thinking and big strategy, as do applications. By creating balanced designs that can then scale out, a consistent and effective execution strategy can be managed at the CIO/CTO/CFO level, ensuring the business is not hindered but encouraged to the maximum by removing barriers that IT may well have put in place when the state of the art was many years younger.

Engineered Systems enable and weaponize the datacenter to directly handle the real-time enterprise. High-end operating systems such as Solaris, and the SPARC processor roadmap, are dealing with the notions of terabyte datasets, millions of execution threads, and thousands of logical domains with hundreds of zones (virtual machines) per purchased core.

Simply carving up a physical server's resources to make up for the deficiencies of the operating system or application in dealing with workloads cannot be an answer by itself. This is partly what is fueling Platform-as-a-Service strategies. The real question is how to get systems working cooperatively to deal with more of the same workload (e.g. database access or web content for millions of users), or indeed with different workloads spread transparently across systems!

High-performance computing has been doing just this with stunning results, albeit at extreme cost and for limited classes of workload. Engineered Systems facilitate this thinking at scale, with relatively modest investment for the workloads being supported.

It is this big thinking from organizations such as Oracle and others, used to dealing with petabytes of data and millions of concurrent users, that can fulfill the requirements expressed by the CIO/CTO/CFO teams. If millions of users needing web/content/database/analytics/billing can be serviced per square meter of datacenter space - why not do it?

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by Oracle and does not necessarily reflect the views and opinions of Oracle.

Datacenter Wisdom - Engineered Systems Must be Doing Something right! (Part 1 - Storage Layer)

Looking back over the last two years or so, we can see an emerging pattern of acquisitions and general IT industry manoeuvring that suggests customer demand and the packaging of technological capability for specific workloads are more closely aligned than ever.

I wanted to write a couple of blogs to capture this in the context of the datacenter and of wider Oracle Engineered Systems penetration.

I will start with the storage layer, as that seems to have seen tremendous change in the last 6 months alone, although the pattern was already carved out in the early Oracle Exadata release in April 2010 (nice blog on this from Kerry Osborne - Fun with Exadata v2) with its innovative bundling of commodity hardware with specialized software capabilities.

Questioning "Datacenter Wisdom"

As you may know, Oracle's Exadata v2 represents a sophisticated blend of balanced components for the tasks undertaken by the Oracle Database, whether it is being used for high-transaction OLTP or for long-running, query-intensive data warehousing. Technologies include:

  • Commodity x86 servers with large memory footprints or high core counts for database nodes
  • x86 servers / Oracle Enterprise Linux for Exadata storage servers
  • Combining simple server based storage in clusters to give enterprise storage array capabilities
  • QDR (40Gbps) Infiniband private networking
  • 10GbE Public networking
  • SAS or SATA interfaced disks for high performance or high capacity
  • PCIe Flash cards
  • Database workload stacking as a more effective means than simple hypervisor based virtualization for datacenter estate consolidation at multiple levels (storage, server, network and licensed core levels)

Binding this together are the Oracle 11gR2 enterprise database platform, Oracle RAC database cluster technology (allowing multiple servers to work in parallel on the same database) and the Exadata Storage Server (ESS) software, which provides the enhancements for intelligent caching of SQL result sets, offloading of queries and storage indexes. There is a great blog from Kevin Closson - Seven Fundamentals Everyone Should Know about Exadata - that covers this in more detail.
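
To give a feel for one of those ideas, the storage index, here is a conceptual sketch (invented names, illustrative data - not the ESS implementation): a tiny min/max summary kept per storage region lets whole regions be skipped when they cannot possibly satisfy the predicate.

```java
import java.util.Random;

// Conceptual sketch of a storage index: keep min/max metadata per storage
// region and skip regions that cannot match the predicate, so much of the
// data is never read at all. Illustrative only.
public class StorageIndexSketch {

    static class Region {
        final long[] values;
        final long min, max;                     // tiny in-memory summary per region
        Region(long[] values) {
            this.values = values;
            long lo = Long.MAX_VALUE, hi = Long.MIN_VALUE;
            for (long v : values) { lo = Math.min(lo, v); hi = Math.max(hi, v); }
            this.min = lo; this.max = hi;
        }
    }

    public static void main(String[] args) {
        Random rnd = new Random(7);
        Region[] regions = new Region[1_000];
        for (int r = 0; r < regions.length; r++) {
            long base = r * 1_000L;              // data roughly clustered by region
            long[] vals = new long[10_000];
            for (int i = 0; i < vals.length; i++) vals[i] = base + rnd.nextInt(1_000);
            regions[r] = new Region(vals);
        }

        long lowBound = 950_000;                 // e.g. WHERE order_id > 950000
        int scanned = 0; long matches = 0;
        for (Region region : regions) {
            if (region.max <= lowBound) continue; // storage index says: skip this region
            scanned++;
            for (long v : region.values) if (v > lowBound) matches++;
        }
        System.out.println("regions scanned: " + scanned + " of " + regions.length
                + ", matching rows: " + matches);
    }
}
```

In this toy run only a few percent of the regions are actually scanned; the rest are eliminated by metadata that fits comfortably in memory.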

Looking at the IT industry we see:

  • EMC/Isilon acquisition that marries multiple NAS server nodes to an Infiniband fabric for scale-out NAS - indicating that Infiniband has a significant role to play in binding loosely connected servers for massive scalability.
  • EMC/Data Domain + Spectra Logic showing that tape is not in fact dead, as many are predicting, and that it remains an extremely low-cost medium for petabyte-scale storage.
  • Flash storage (SSD or PCIe based) being embedded into servers, closer to the workload than a trip across the SAN/LAN wires to an enterprise storage array, showing that local flash across a distributed storage-node fabric is far more effective than SAN storage for enterprise workloads.
  • EMC and NetApp using flash intelligently, rather than as a straight replacement for spinning disk, to significantly enhance certain workloads - as we see in EMC's VFCache implementation and NetApp's intelligent caching.
  • Monolithic SAN-attached arrays moving towards modular, scalable arrays - supporting the approach taken by Oracle's Pillar Axiom, which scales I/O, storage capacity and performance independently using smaller intelligent nodes. EMC is doing this with VMAX engines, NetApp with its GX (Spinnaker) architecture, and even IBM is going the same way.

All these trends - and it is not really important in what chronological order they happened, or that I took examples from leaders in their fields - clearly indicate a convergence of technological threads.

I often hear from clients that Exadata is too new, uses strange Infiniband bits and has no link to a SAN array. Well clearly the entire industry is moving that way. Customers are indicating with their voices what they would like to have - capability and simplicity for the workloads that drive their revenue.

Why is this important for the CIO?

CIOs are typically confronted with a range of technologies to solve a limited array of challenges. They are constantly asked by the business and more recently CFOs to make sure that they are:

  • using future-proofed technologies,
  • simplifying vendor management,
  • focusing investment on the activities that support revenue streams,
  • aligning IT with the business!

Well, Engineered Systems are exactly that. Oracle literally went back to the drawing board and questioned why certain things had been done in certain ways in the past, and what direct benefit they provided to clients.

Engineered systems are already using the technologies that the rest of the industry is trying to re-package to meet the challenges customers are facing now and in the coming years.

Oracle, I believe, has at least a 2-year advantage, in that they:

  • learnt from the early stages in the market,
  • fine-tuned their offerings,
  • aligned with support requirements of such dense capability blocks,
  • helped customers come to grips with such a cultural change, and
  • are continuing to add to their "magic sauce", still engineering the best of commodity hardware to further increase the value-add of Engineered Systems.

The lead is not just in technology but also the approach that customers are demanding - specific investments balanced with specific revenue generating high-yield business challenges.

As a CIO it is important to recognise the value that Engineered Systems bring in addressing key business requirements and ensure an overall simplification of the Datacenter challenge and large CAPEX requirements in general.

Engineered Systems provide the ability for IT to transform itself providing directly relevant Business Services.

It is not a general-purpose approach where the IT organisation can merely hope for transformation - Engineered Systems enable and weaponise the datacenter to directly fulfill requirements expressed by the CIO team through intense, constant dialogue with business leaders!

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by Oracle and does not necessarily reflect the views and opinions of Oracle.

The Practical Cloud - Road to Cloud Value Derivation (CVD)

After a change of residence and a change of job - it is high time to write another blog!

In talks with many customers, and indeed in feedback on this blog site, I get a strong indication that the "current Cloud" the pre-2011 industry was marketing simply tackled the infrastructure side of things. Much of the focus was on consolidating the x86 server estate and delivering features such as live migration of virtual machines across physical servers.

While this is fine as an initial step towards deriving some form of value, it is typically too little. Many business leaders and IT managers say: "we have bought into the Cloud, we have virtualized and we can even offer a level of VM provisioning automation" - "so where is this taking us, and where can I highlight the ongoing value to the business?"

This is a very valid line of questioning. The client has millions of $$bucks$$ of equipment sitting on the floor, they have attended training and have done everything they were told to do. The result - they can create a virtual machine with software in minutes as opposed to hours or days. Cool!

That is not a lot to show the business for all that investment and evangelism. This is typically (and incorrectly) lauded as a solution and a great win for all!

IaaS alone, in my opinion, was always of too little value. Simply putting together servers, storage and network with a thin veneer of hypervisor magic has limited value in itself. This was, incidentally, the main haunt of mainstream hypervisor purveyors until 2011.

This type of datacenter transformation, using pre-assembled hardware for the sole purpose of consolidating x86, is too simple and, let's face it, too dumb. Clients are cleverer than that. Clients have persisted in following the virtualization wave, and that is good. They have somewhat resisted the Cloud marketing until now, as it was simply focused on replacing their existing hardware and hypervisor stack.

Towards the tail end of 2011 we started seeing a stronger focus on provisioning enterprise software and environments - Database as a Service (DBaaS), which was often nothing more than installing a database instance on a virtual machine through a browser provisioning page. Well, that is better - but it still does not smack of value! Indeed, if you wanted many big database instances with, say, 64 virtual CPUs per VM, you were out of luck! AND yes, there are customers that do this!

In 2011, we started to see the emergence of the appliance: an entire hardware and software stack that was factory installed. In some cases, such as the EMC Greenplum appliance, this was built from components with functional tuning for the task. Others, such as Oracle with the Exadata Database Machine (which has been around since 2008, incidentally, and first used Sun intellectual property acquired in 2010), not only took the idea of virtualization but actually embedded it into every component in the stack.

Through innovation, integration, best-of-breed technology and the simple idea that a system should do what it is designed for to the best of its ability, Exadata represents, in my opinion, a new approach to transformation that makes real business impact.

I am sure that during 2012 we will see a move away from generalized Cloud stacks such as the VCE Vblock, Dell prepackaged servers with VMware installed, and something similar from HP VirtualSystem for VMware. These systems are all focused on helping the hypervisor - in this case VMware vSphere - perform its job well. However, the hypervisor only lets you manage virtual machines! It does not do anything else!

That is also why I see a move away from expensive hypervisor software solutions towards open source alternatives, or towards systems that embed the hypervisor as a functional technology supporting an enterprise software stack - with no $$ for the hypervisor per se.

The Race to Business Value

One of the issues that has held back business value derivation through Cloud technologies is the absence of the business as a driving stakeholder. Business should be driving the IT roadmap for an organisation. Business defines what it wants from developers in the form of functionality - why not the same for IT infrastructure?

You see, the value of the business is that it thinks differently. Business tends to think holistically, at the level of enterprise architecture, as a driver and motor for business value generation! It thinks in frameworks, and it thinks (with developers and architects) in terms of enabling software platforms upon which to further its unique selling points.

The real Cloud value to be derived in that case is based on the software Cloud platforms leveraged to facilitate global service/application delivery with quality of service baked in. These platforms in turn are used to create further value!  

The real business case for the Cloud comes in the form of Platform-as-a-Service (PaaS). I think Exadata hits this nail on the head. I don't just want to be able to set up a virtual machine with a database running inside it, I want the functionality of the database itself! Exadata delivers just that through a clever blend of components!

Why is this important for the CIO?

CIOs set the agenda for Cloud in 2010-2011. They have seen that it has an effect on the delivery of IT services - but not necessarily a direct impact on the culture of the business, or indeed on the value the business derives. The early gains have been achieved, and it is time to move on to business-focused IT.

CIOs look beyond the mainstream hype. They verify through intensive research and peer-level networking the effect of IT strategies on business value. The CIO pioneers and sets the agenda for deep intelligent consolidation. Not just doing more with less - BUT gaining greater business insight and leverage with fewer more effective resources!

Exadata, and engineered systems of that ilk, with embedded technology are paving the way for scale-up/scale-out with extremely high performance, gathering in the benefits and innovations of the IT industry over recent years, e.g. unified networking with Infiniband, high-performance SSD storage, deduplication, compression, tiered value-oriented storage, big-data-capable file systems and indeed open source.

That is a very potent mix, and Oracle customers are actively leveraging this. They have been using Linux and Oracle Solaris 11 to support those enterprise workloads needing that level of reliability and speed. They have been consolidating hundreds of database and middleware servers - yes - hardware, mixed OSs, non-x86 systems, licenses, management tools, script frameworks and so forth. This is real consolidation!

Further, they have used the well-respected, enterprise-capable Oracle 11g platform to power their Java applications, drive the back end of their middleware platforms, and create new value by delivering applications through the Exadata platform to the mobile space (iPads, Android devices, browsers, OS-independent applications).

Indeed, if the Java virtual machine (JVM) is one of the ultimate forms of virtualization, it makes perfect sense that as a business which has elected to use that technology you create the underlying infrastructure AND platform ecosystem to support those efforts at scale.

The Corporate Cloud Strategy can be dramatically refreshed and aligned with the ability to deal with all data needs in a single, well-managed platform. In this case, Exadata provides the ability to deal with all the database needs an organisation has, from the smallest to the largest. It provides significant, direct front-end value.

Other Exasystems have started to arrive to deal with specific challenges such as big data and middleware. These use the same magic sauce as the Exadata Database Machine, but are tuned and enhanced for their specific functions. Deep, lasting transformation can be achieved, and the very nature of these Exasystems means the business must be included as a principal stakeholder - it can truly see what extracting a business insight means in hard $$ terms!

Look out for these paradigms that directly affect business value and allow new business insight to be gained by easily manipulating petabytes of information in near-real-time! They enable the business to come to market rapidly with new products, directly support application developers, and are built on industry-proven technologies - and best of all, they retain the key know-how of your developers and DBAs, who will be up and running with little change to their operational routine!

 

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by Oracle and does not necessarily reflect the views and opinions of Oracle.