
August 2012

Datacenter Wisdom - Engineered Systems Must be Doing Something right! (Part 1 - Storage Layer)

Looking back over the last two years or so, we can see an emerging pattern of acquisitions and general IT industry maneuvering, suggesting that customer demand and the packaging of technological capability for specific workloads are more closely aligned than ever.

I wanted to write a couple of blog posts to capture this in the context of the datacenter and of wider Oracle engineered systems penetration.

I will start with the storage layer, as it has seen tremendous change in the last six months alone, although the pattern was already carved out in the early Oracle Exadata release in April 2010 (there is a nice blog on this from Kerry Osborne - Fun with Exadata v2) with its innovative bundling of commodity hardware with specialized software capabilities.

Questioning "Datacenter Wisdom"

As you may know, Oracle's Exadata v2 represents a sophisticated blend of components balanced for the tasks undertaken by the Oracle Database, whether it is being used for high-transaction OLTP or as a long-running, query-intensive data warehouse. Technologies include:

  • Commodity x86 servers with large memory footprints or high core counts for database nodes
  • x86 servers / Oracle Enterprise Linux for Exadata storage servers
  • Combining simple server-based storage in clusters to give enterprise storage array capabilities
  • QDR (40Gbps) Infiniband private networking
  • 10GbE public networking
  • SAS- or SATA-interfaced disks for high performance or high capacity
  • PCIe flash cards
  • Database workload stacking as a more effective means than simple hypervisor-based virtualization for datacenter estate consolidation at multiple levels (storage, server, network and licensed cores)

Binding this together are the Oracle 11gR2 enterprise database platform, Oracle RAC database cluster technology, which allows multiple servers to work in parallel on the same database, and the Exadata Storage Server (ESS) software, whose enhancements facilitate intelligent caching of SQL result sets, query offloading and storage indices. There is a great blog from Kevin Closson - Seven Fundamentals Everyone Should Know about Exadata - that covers this in more detail.
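The storage-index idea is worth a moment's pause, since it generalizes well: keep a min/max summary per storage region so that a predicate can skip regions that cannot possibly contain matching rows. Here is a toy Python sketch of the principle only - it is illustrative, not the actual ESS implementation, whose region sizes and internals differ:

```python
# Toy illustration of the storage-index concept: each fixed-size region
# of a column keeps its (min, max), letting a scan skip regions that
# cannot satisfy an equality predicate. NOT the real ESS implementation.

REGION_SIZE = 4  # hypothetical region size for this sketch

def build_region_index(values, region_size=REGION_SIZE):
    """Record (min, max) for each fixed-size region of the column."""
    index = []
    for start in range(0, len(values), region_size):
        region = values[start:start + region_size]
        index.append((min(region), max(region)))
    return index

def scan_with_index(values, index, target, region_size=REGION_SIZE):
    """Return matching row offsets, skipping regions the index rules out."""
    hits, regions_read = [], 0
    for r, (lo, hi) in enumerate(index):
        if not (lo <= target <= hi):
            continue  # region cannot contain the target: no I/O needed
        regions_read += 1
        start = r * region_size
        for offset, value in enumerate(values[start:start + region_size]):
            if value == target:
                hits.append(start + offset)
    return hits, regions_read

col = [5, 7, 6, 8, 40, 40, 41, 42, 9, 9, 7, 5]
idx = build_region_index(col)
hits, regions_read = scan_with_index(col, idx, 41)
print(hits, regions_read)  # only one of the three regions is read
```

The pay-off is that the "index" is maintained as a side effect of writing the data and costs almost nothing to consult, which is why it suits brute-force scan workloads so well.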

Looking at the IT industry we see:

  • The EMC/Isilon acquisition, which marries multiple NAS server nodes to an Infiniband fabric for scale-out NAS - indicating that Infiniband has a significant role to play in binding loosely coupled servers for massive scalability.
  • EMC/Data Domain plus Spectra Logic, showing that tape is not in fact dead, as many were predicting, and that it remains an extremely low-cost medium for petabyte-scale storage.
  • Flash storage (SSD or PCIe based) being embedded into servers, closer to the workload, rather than simply going across the SAN/LAN wires to an enterprise storage array - showing that local flash across a distributed storage node fabric can be far more effective than SAN storage for enterprise workloads.
  • EMC and NetApp using flash intelligently as a cache rather than as a replacement for spinning disk, significantly enhancing certain workloads, as we see in EMC's VFCache implementation and NetApp's Intelligent Caching.
  • Monolithic SAN-attached arrays moving towards modular, scalable arrays - supporting the approach taken by Oracle's Pillar Axiom, which scales I/O, storage capacity and performance independently using smaller intelligent nodes. EMC is doing this with VMAX engines, NetApp with its GX (Spinnaker) architecture, and even IBM is going the same way.

All these trends - and it is not really important in what chronological order they happened, or that I took my examples from leaders in their fields - clearly indicate a convergence of technological threads.

I often hear from clients that Exadata is too new, uses strange Infiniband bits and has no link to a SAN array. Well, clearly the entire industry is moving that way. Customers are voting with their voices for what they would like to have - capability and simplicity for the workloads that drive their revenue.

Why is this important for the CIO?

CIOs are typically confronted with a range of technologies to solve a limited array of challenges. The business, and more recently CFOs, constantly ask them to:

  • use future-proofed technologies,
  • simplify vendor management,
  • focus investment on those activities that support revenue streams,
  • align IT with the business!

Well, Engineered Systems are exactly that. Oracle literally went back to the drawing board and questioned why certain things were done in certain ways in the past, and what direct benefit that provided clients.

Engineered systems are already using the technologies that the rest of the industry is trying to re-package to meet the challenges customers are facing now and in the coming years.

Oracle, I believe, has at least a 2 year advantage in that they:

  • learnt from the early stages in the market,
  • fine-tuned their offerings,
  • aligned with the support requirements of such dense capability blocks,
  • helped customers come to grips with such a cultural change,
  • continued to add to their "magic sauce", still engineering the best of commodity hardware to further increase the value add of Engineered Systems.

The lead is not just in technology but also in the approach that customers are demanding - specific investments balanced against specific revenue-generating, high-yield business challenges.

As a CIO it is important to recognise the value that Engineered Systems bring in addressing key business requirements, ensuring an overall simplification of the datacenter challenge and of large CAPEX requirements in general.

Engineered Systems give IT the ability to transform itself by providing directly relevant Business Services.

It is not a general-purpose approach where the IT organisation can merely hope for transformation - Engineered Systems enable and weaponise the datacenter to directly fulfill requirements expressed by the CIO team through intense, constant dialogue with business leaders!


The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by Oracle and does not necessarily reflect the views and opinions of Oracle.

Inflexibility of Datacenter Culture - 'The way we do things round here' & Engineered Systems

With a focus on large enterprise and service provider datacenter infrastructures, I regularly get the chance to meet with senior executives as well as rank-and-file administrators - a top-to-bottom view.

One of the things that has always struck me as rather strange is the relatively "inflexible" nature of datacenters and their management operations.

As an anecdotal example, I recall one organization with a heavy focus on cost cutting. At the same time the datacenter management staff decided that they would standardize on all grey racks from the firm Rittal. Nothing wrong here - a very respectable vendor.

The challenge arising was:

  • The selected Rittal racks at that time were around 12,000 Euro each
  • The racks that came from suppliers such as Dell, Fujitsu, Sun etc. were around 6,000 Euro each

See the problem? A 50% saving literally thrown out of the window because someone wanted all-grey racks. When we are talking about a couple of racks, that is no big deal. With, say, 1,000 racks we are looking at an overspend of 6 million Euro - before anything has been put into those racks!
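The arithmetic is simple enough to sketch (per-rack prices as quoted above; the 1,000-rack count is the hypothetical used in the text):

```python
# Rack overspend calculation using the figures from the anecdote above.
standard_rack = 12_000   # selected Rittal rack price (Euro), at the time
vendor_rack = 6_000      # rack as bundled by Dell, Fujitsu, Sun, etc. (Euro)
racks = 1000             # hypothetical estate size used in the text

overspend = (standard_rack - vendor_rack) * racks
savings_pct = (standard_rack - vendor_rack) / standard_rack * 100
print(f"Overspend: {overspend:,} Euro ({savings_pct:.0f}% potential saving)")
# → Overspend: 6,000,000 Euro (50% potential saving)
```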

Upon asking why it was necessary to standardize the racks, the answers I got were:

  • independence from server manufacturers
  • create rack and cabling rows before servers arrive to facilitate provisioning
  • simpler ordering
  • perceived "better-ness", as enclosures are a Rittal specialization

Sounds reasonable at first glance - until we see that racks are all engineered to support certain loads, and typically optimized for what they will eventually contain. Ordering was also not really simpler, as the server vendors had made that a "no brainer". The perception of quality was not validated either - just a gut feel.

The heart of the problem, as came out later, was the belief that the datacenter would benefit from having everything homogeneous. Homogeneous = standardized, in the eyes of datacenter staff.

The problem with this is that such datacenters are not flexible at all; they focus on homogeneity and ultimately cost the business financing them a lot of money.

In an age where flexibility and agility means the literal difference between life and death for an IT organization, it is incumbent on management to ensure that datacenter culture allows the rapid adoption of competitive technologies within the datacenter confines.

Standardization does not mean that everything needs to be physically the same. It involves processes for dealing with change in such a way that it can be effected quickly, easily and effectively to derive key value!

I mentioned the recent trend of CIOs reporting to CFOs; that would have provided financial stewardship and accountability in this case - getting staff and managers to really examine their decision in the light of what was best for the organization.

Questioning "Datacenter Wisdom"

The focus on homogeneous environments has become so strong that everything is treated as a zero-sum game. This happens in cycles in the industry. We had mainframes in the past, then Unix, Linux (which is basically Unix for x86), Windows - and the latest is VMware vSphere and all-x86!

Don't get me wrong here - as a former EMC employee - I have the greatest respect for VMware and indeed the potential x86 cost savings.

What concerns me is when this is translated into "strategy". In that case the approach has been selected without understanding why! It is a patch to cover profligate past spending, in the hope that magically all possible issues will be solved.

After all - it is homogeneous. All x86, all VMware, all virtualized. Must be good - everyone says it is good!

See what I mean - the thinking and strategizing have been pushed to the side. Apparently there is no time to do that. That is really hard to believe, as this is one of those areas that fall squarely into the CIO/CTO/CFO's collective lap.

There are other approaches, and indeed they are not mutually exclusive. Rather they stem from understanding underlying challenges - and verifying if there are now solutions to tackle those challenges head-on.

Why is this important for the CIO?

At a time of crisis and oversight, it is incumbent on the CIO to question the approach put on his/her table for Datacenter Infrastructure transformation.

The CIO has the authority to question what is happening in his/her turf.

At countless organisations, I have performed strategic analysis of macro challenges mapped against the IT infrastructure's capability to deal with them. Time and again, in discussions with the techies and managers (who came from a technical background but seemed to struggle with strategy formulation itself), it was shown that the marginal differences between technologies were not enough to justify the additional expenditure - or that there were other approaches.

Engineered Systems, in Oracle parlance, are one such challenge. They do not "fit" the datacenter culture. They cannot be taken apart and distributed into whatever slots happen to be free in racks spread over the datacenter.

From a strategy perspective, a great opportunity is not being exploited here. Engineered systems such as Exadata, Exalogic, SPARC SuperCluster, Exalytics and the Oracle Database Appliance represent the chance to change the datacenter culture and indeed make the whole datacenter more flexible and agile.

They force a mindset change - the datacenter becomes a housing environment that contains multiple mini-datacenters within it. Those mini-datacenters each represent unique capabilities within the IT landscape. They just need to be brought in and cabled up to the network, power, cooling and space capabilities of the housing datacenter.

There are other assets like this in the datacenter already - enterprise tape libraries providing their unique capability to the entire datacenter. Nobody tries to take a tape drive or cartridge out and place it physically somewhere else!

Engineered Systems are like that too. Take Exadata as an example: it is clearly assembled and tuned to do database work with the Oracle Database 11gR2 platform, and it does that extremely well. It breaks down some of the traditional barriers between data warehouse and OLTP workloads and indeed allows database workloads to be "stacked".

Taking the idea of what a datacenter really should be (facilities for storing and running IT infrastructure) and being flexible - Exadata can literally be placed on the floor and cabled to the main LAN and power conduits, and the database infrastructure platform is in place. After that, databases can be created in this mini-datacenter. The capability is literally available immediately.

Contrast this with creating lots of racks in rows without knowing what will go in them, putting VMware everywhere, adding lots of SAN cabling because I/O will always be an issue - and then spending ages tuning performance to make sure it all works well.

The CIO should identify this as a waste of time and resources. These are clever people who should be doing clever things for the benefit of the organisation. It would be like buying a car to drive around in versus getting the techies to buy all the components of the car and trying to assemble it themselves.

Taking an x86/hypervisor route - creating many virtual machines, each running a separate database - forfeits the value inherent in Exadata and makes no real sense in the case of databases.

The CIO can use valuable organizational knowledge gained over many years regarding the functions the business needs. If, as in this example, it is the ability to store/retrieve/manage structured information at scale - the answer should literally be to bring in that platform and leverage the cascading value it provides to the business.

Neither x86 nor a certain operating system is "strategically" relevant in itself - Exadata is a platform, and normal DBAs can manage it using tools they already know. This mini-datacenter concept can be used in extremely effective ways and supports the notion of continuous consolidation.

CIOs can get very rapid quick wins for an organization in this way. Datacenter infrastructure management and strategy should be considered in terms of bringing in platforms that do their job well, with software and hardware tuned to run together. Further, such platforms reduce the "other" assets and software that are needed.

Exadata does this by not needing a SAN switch/cabling infrastructure - it encapsulates the paradigms of virtualization, cloud and continuous consolidation. This will drive deep savings and allow value to be derived rapidly.

Challenge datacenter ideas and culture in particular. Agility requires being prepared to change things and being equipped to absorb change quickly!
