Storage Intelligence - about time!
Expressions of Big Data: Big Data Infrastructure BUT Small is Beautiful!!

In-Memory Computing (IMC) Cloud - so what about the Memory?

There's been a lot of talk in 2013 about in-memory computing (IMC), which Gartner flagged as strategically significant back in 2012. Very little has been said about the memory needed for IMC!

IMC is claimed to be "new", "radical", "never before done in the industry" etc. Much of this has come from SAP's HANA marketing, amongst others. The discussion is relevant to the whole IT industry and to all workloads, whether delivered locally or through IMC-enabled Clouds!

After proposing at the Intel Developer Forum in 2000 that the Internet could be held in memory, Larry Page pursued that idea at Google - which at the time had only around 2,400 computers in its datacenter!

The industry has responded in kind - pointing out that large amounts of memory have been available in platforms for nigh on a decade. Indeed, the latest Oracle Exadata X3-8 Engineered System has 4TB of RAM and 22TB of PCIe Flash - non-volatile RAM.

[Image: Oracle Exadata architecture]

So IMC is not new in the sense SAP and others would have you believe. It is a natural evolution: economies of scale bringing price/GB down, accompanied by technological innovations in speed and capacity.

A purist approach based on DRAM alone (nanosecond access) carries a vast cost premium over NAND Flash (microseconds - roughly 1,000x slower) and spinning disk (milliseconds) today - and DRAM is volatile, so the data is gone on a power cycle! Economically speaking, a hybrid approach has to be the road to the IMC-Cloud!
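
To put that gap in perspective, here is a minimal back-of-the-envelope sketch. The latency orders of magnitude follow the paragraph above; the price-per-GB figures and the hybrid mix are rough illustrative assumptions, not vendor pricing.

```python
# Back-of-the-envelope comparison of today's memory/storage tiers.
# Latencies follow the text above; the $/GB figures are rough 2013-era
# assumptions for illustration only.
tiers = {
    #                (access_time_s, usd_per_gb, volatile)
    "DRAM":          (100e-9,        8.00,       True),
    "NAND Flash":    (100e-6,        0.80,       False),
    "Spinning disk": (10e-3,         0.05,       False),
}

dram_latency, dram_cost, _ = tiers["DRAM"]
for name, (latency, cost, volatile) in tiers.items():
    print(f"{name:14s} ~{latency / dram_latency:>9,.0f}x DRAM latency, "
          f"~{cost / dram_cost:.3f}x DRAM cost/GB, "
          f"{'volatile' if volatile else 'non-volatile'}")

# A hybrid mix (say 5% DRAM, 25% Flash, 70% disk by capacity) costs a
# fraction of an all-DRAM design - the economic case for a hybrid road
# to the IMC-Cloud.
mix = {"DRAM": 0.05, "NAND Flash": 0.25, "Spinning disk": 0.70}
blended = sum(tiers[t][1] * share for t, share in mix.items())
print(f"Blended hybrid cost: ~${blended:.2f}/GB vs ${dram_cost:.2f}/GB all-DRAM")
```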

Amongst the characteristics needed to facilitate a wholesale transformation to full IMC (hardware, software and application architectures) are:

  1. Performance - nanosecond to microsecond access, as DRAM/Flash provide today
  2. Capacity - Terabytes initially and then Petabytes, as Flash/Disk provide today
  3. Volatility - non-volatile on power cycling, much in the same way as Flash/Disk today
  4. Locality - as close as possible to the CPU, yet manageable at cloud scale!

Individually, each of these characteristics has been achieved. Combined, they are technically challenging. Many promising technologies are evolving that aim to solve this quandary and change the face of computing as we know it forever - basically, massive, low-power, non-volatile RAM!

Advances are being made in all areas (a rough comparison of the headline numbers follows the list):

  • HMC - Hybrid Memory Cube, using stackable DRAM chips to deliver 320GB/s of throughput (vs. DDR3 maxing out at around 24GB/s) in 90% less space and with 70% less energy. Still volatile though!
    [Image: stacked DRAM layers in a Hybrid Memory Cube]
  • Phase-change memory (PCM/PRAM) producing non-volatile RAM. Micron began shipping 1Gb parts in 2012. PCM does not require the erase-before-rewrite cycles of Flash, so it is potentially much faster; current parts reach around 400MB/s.
  • HDIMM - Hybrid DIMMs, combining high-speed DRAM with non-volatile NAND (Flash) storage. Micron (with DDR4) and Viking Technology (DDR3 NVDIMM) offer these technologies, with latencies around 25 nanoseconds.
    [Image: hybrid DIMM]
  • NRAM - carbon-nanotube-based non-volatile RAM. Nantero and Belgium's IMEC are working jointly on this alternative to DRAM and on scaling it below 18nm. Stackable like HMC, and entirely non-volatile.
    [Image: NRAM cell]
  • Graphene-based non-volatile RAM (graphene being a single layer of carbon atoms), such as the 2013 work in Lausanne, Switzerland.
    [Image: MoS2/graphene non-volatile memory cell]
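
For a rough sense of scale, the sketch below compares the headline numbers quoted in the list above. Treat them as ballpark figures from 2012/2013 announcements rather than benchmarks; the spinning-disk seek time is an added assumption for comparison.

```python
# Rough ratios between the headline figures quoted in the list above.
# All values are approximations, not measured benchmarks.
ddr3_peak_gb_s = 24        # GB/s - DDR3 ceiling quoted above
hmc_gb_s = 320             # GB/s - Hybrid Memory Cube throughput
pcm_mb_s = 400             # MB/s - current phase-change memory parts
hdimm_latency_ns = 25      # ns   - hybrid DIMM latency quoted above
disk_seek_ns = 5_000_000   # ~5 ms seek on a fast spinning disk (assumption)

print(f"HMC vs DDR3 bandwidth:       ~{hmc_gb_s / ddr3_peak_gb_s:.0f}x")
print(f"PCM vs DDR3 bandwidth:       ~{(pcm_mb_s / 1024) / ddr3_peak_gb_s:.3f}x")
print(f"Hybrid DIMM vs disk latency: ~{disk_seek_ns / hdimm_latency_ns:,.0f}x lower")
```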

Which of these technologies will drive future architectures remains to be seen in markets highly sensitive to price, capacity and performance. Post-silicon options based on carbon nanotube and graphene technology would of course power the next wave of compute as well as memory structures.

Questioning "Accepted Wisdom"

So In-Memory Computing is coming - it has been coming for the last decade or so. Are datacenter infrastructure and application architectures keeping pace, or at least preparing for this "age of abundance" in non-volatile RAM?

In discussions with IT folks and industry colleagues there is a clear focus on procurement at low price points, with IT simply saying "everything is commodity"! That is like saying two cars of identical make/model/engine with different engine-management software are the same - one clearly performs better than the other! The software magic in these hardware and software stacks makes them anything but commodity.

Many IT shops still think a centralized storage array is the only way to go. They simply change the media within the array to change its characteristics - from 7.2K/10K/15K RPM spinning disks to SSDs. That is essentially where the thinking stops!

This short-term thinking will result either in the next wave of infrastructure and application sprawl, OR in a revolution through IMC-Cloud-enabled vectors that turns IT on its collective head.

The media-swap model would simply be too slow for IMC Clouds. Clear trends are emerging that show how CIOs, CTOs and CFOs can prepare for the IMC-based datacenters of the future, drastically increasing capability while changing the procurement equations:

  • Modular storage containers located close to the processor and RAM, driving the move away from islands of massive, centralized SAN infrastructure.
  • Internetworking needs to be far faster to leverage IMC capabilities. Think 40Gbps Ethernet/InfiniBand in 2013, and PCIe 4.0 at 512Gb/s (x16 lanes, duplex) around 2016. That speed is needed at least at the intersection points of compute, RAM and storage!
  • Engineered platforms - hardware and software optimized end to end. It is simply not worth focusing all IT effort on individual best-of-breed components when the whole needs to be greater than the sum of its parts.
  • Backup architectures need to keep up. Tape remains a cost-effective medium for inactive and backup data sets, particularly in the open Linear Tape File System (LTFS) format. A great blog on that from Oracle-StorageTek's Rick Ramsay.
  • Application architectures need to move away from bottleneck-resolution thinking! Most developers don't know what to do with terabytes of RAM! Applications need to adopt massively parallel patterns where possible, and developers need to deliver data in real time (a small sketch of the idea follows this list)!
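
To make that last point concrete, here is a minimal, hypothetical Python sketch of the "partition in memory, process in parallel" pattern. The dataset and the aggregate() function are invented for illustration; the point is the shape of the code, not the workload.

```python
# Minimal sketch: once the working set fits in RAM, the pattern shifts from
# "fetch from storage, then process" to "partition in memory, process in
# parallel". The data and per-partition work below are purely illustrative.
from concurrent.futures import ProcessPoolExecutor
import os

def aggregate(chunk):
    # Stand-in for real per-partition work (scoring, filtering, joining...)
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(4_000_000))        # pretend this is a huge in-RAM dataset
    workers = os.cpu_count() or 4
    step = len(data) // workers + 1
    chunks = [data[i:i + step] for i in range(0, len(data), step)]

    with ProcessPoolExecutor(max_workers=workers) as pool:
        total = sum(pool.map(aggregate, chunks))
    print(f"{workers} workers, result = {total}")
```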

2013-2016 will see the strong rise of non-volatile memory technologies and architectures. CIOs and CTOs should be thinking about how they will leverage these capabilities. IT philosophers need to discuss and map out the implications before handing over to Enterprise Architects to enact.

Why is this important for the CIO, CTO & CFO?

Simple server consolidation has had its day! Most IT shops have used server virtualization in one form or another, and the early fast returns are almost exhausted! Continuous workload consolidation needs to take center stage again - think thousands of workloads per server, not 20-40!

Private IMC-Clouds provide an ability for CIOs to keep in-house IT relevant to the business.

CTOs should be thinking about how IMC-Clouds can power the next wave of innovative applications, services and products in an increasingly interconnected, always-on world. Scaling, performance and resiliency to failure should be designed into application platforms - NOT into the applications themselves. Fast-moving application development can then proceed without recreating these features in every app.

For the CFO, IMC-enabled Private Clouds represent a dramatic lowering of all the costs IT imposes on the business. Consolidating massive chunks of datacenter infrastructure, decommissioning datacenters and simplifying the constant demands for more performance and capacity will free trapped financial value that can be used directly by the business. Tech-refresh cycles may need to be shortened to bring this vision to fruition earlier!

IMC-enabled Clouds, combined with Intelligent Storage, will allow fundamental transformations to take place at a pace exceeding even that of hyper-Cloud providers such as Amazon. Business IT can choose to transform.

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by my current employer and does not necessarily reflect the views and opinions of my employer.
