Private Cloud

Datacenter Rack-scale Capability Architecture (RCA)

For the last couple of years there has been a resurgent theme cropping up regarding the disaggregation of components within a rack to support hyperscale datacenters. Back in 2013 Facebook, as founder of the Open Compute Project, and Intel announced their collaboration on future datacenter rack technologies.

This architecture deals essentially with the type of business Facebook itself runs and is as such very component-focused: compute, storage and network components are disaggregated across trays, with the trays interconnected by a silicon photonics fabric internal to the rack.

This gives hyperscale datacenters the advantage of modularity, allowing components such as CPUs to be swapped out individually rather than replacing the entire server construct. Intel gave an excellent presentation on the architecture at Interop 2013, outlining the various advantages, and according to Intel the architecture is already in use at Baidu, Alibaba Group, Tencent and China Telecom.

This in itself is not earth-shattering, and seems to lack the added "magic sauce". As it stands it is simply a re-jigging of the form factor; by itself it does nothing to really enhance workload density beyond consolidating large numbers of components into the physical rack footprint (i.e. more cores, RAM and network bandwidth).

Principally it is aimed at reducing cable and switch clutter and the associated power requirements, while adding upgrade modularity; essentially it increases the compute surface per rack. These are fine advantages for hyperscale datacenters, which represent considerable capital expenditure, as outlined in the Moor Insights & Strategy paper on the subject.

Tying into the themes of my previous blog on the "Future of the Datacenter", there is a densifying effect taking place in current datacenter network architecture, as aptly shown in the Moor study.

Examining this architecture, the following points stand out:

  • Rack level architecture is essential in creating economies of scale for hyper-scale and private enterprise datacenters
  • East-West traffic is moving front-and-center whilst most datacenters are still continuing North-South network investment with monolithic switch topologies
  • Simply increasing the number of cores and RAM within a rack does not itself increase the workload density (load/unit)
  • Workload consolidation is more complex than this architecture indicates, utilizing multiple components at different times under different loading
  • Many approaches are already available using an aggregation architecture (HP Moonshot, Calxeda ARM Architecture, even SoCs)

There is a lot of added value to be derived for an enterprise datacenter using some of these "integrated-disaggregated" concepts, but competing with and surviving in spite of hyperscale datacenters requires additional innovative approaches to be taken by the enterprise.

Enterprises that have taken on board the "rack as a computer" paradigm have annealed this with capability architecture to drive density increases up to and exceeding 10x over simple consolidation within a more capable physical rack:

  • General purpose usage can be well serviced with integrated/(hyper)converged architectures (e.g. Oracle's Private Cloud Appliance, VCE Vblock, Tintri, Nutanix)
  • Big Data architectures use a similar architecture but have the magic sauce embedded in the way that the Hadoop cluster software itself works
  • Oracle's Engineered Systems further up the ante in that they start to add magic sauce to both the hardware mix and the software smarts – hence engineered rather than simply integrated. Other examples are available from Teradata, and from Microsoft and its appliance partners
  • In particular, the entire rack needs to be thought of as a workload capability server:
    • If database capability is required then everything in that rack should be geared to that workload
    • In-Platform capability (in the database engine itself) is used above general-purpose virtualization to drive hyper-tenancy
    • Private networking fabric (Infiniband in the case of Oracle Exadata and most high-end appliances)
    • Storage should be modular and intelligent, offloading not just storage block I/O but also being able to deal with part of the SQL Database Workload itself whilst providing the usual complement of thin/sparse-provisioning, deduplication and compression
    • The whole of database workload consolidation is many times the sum of parts in the rack
  • The datacenter becomes a grouping of these hyper-dense intelligent capability rack-scale servers
    • Intelligent provisioning is used to "throw" the workload type onto the best place to run it at scale and at the lowest overall cost, while still delivering world-class performance and security
    • Integrate into the overall information management architecture of the enterprise
    • Ensure that new paradigms related to Big Data analytics and the tsunami of information expected from the Internet of Things can be delivered in the rack-scale computer form, but with additional "smarts" to further increase the value delivered as well as provide agility to the business.

The enterprise datacenter can deliver value beyond a hyper-scale datacenter by thinking about continuous consolidation in line with business needs, not just IT needs. Such platform and rack-scale capability architecture (RCA) has been proven to provide massive agility to organizations and indeed prepares them for new technologies, such that they can behave like "start-ups" with a fast-low-cost-to-fail mentality to power iterative innovation cycles.

Opportunities for the CIO

The CIO and senior team have a concrete opportunity here to steal a march on the Public Cloud vendors by providing hyper-efficient capability architectures for their business, re-thinking the datacenter rack through RCA paradigms.

Not only will this massively reduce footprint and cost in the existing premises, it also focuses IT on how best to serve the business through augmentation with hybrid Cloud scenarios.

The industry has more or less spoken about the need for hybrid Cloud scenarios where a private on-premise cloud is augmented with public cloud capabilities. Further, today's announcement that the EU has effectively invalidated the "Safe Harbour" data treaty should put organizational IT on point about how to deal rapidly with such changes.

Industry thinking indicates that enterprise private datacenters will shrink, and the CIO team can already ensure they are "thinking" that way and taking concrete steps to realize compact, ultra-dense datacenters.

A hyper-scale datacenter can't really move this quickly or be that agile, as its operating scale inhibits the nimble thinking that should be the hallmark of the CIO of the 2020s.

In the 2020s perhaps nano- and pico-datacenters may be of more interest to enterprises as a way of competing for business budgetary investment, as post-silicon graphene compute substrates running at 400GHz at room temperature become the new norm!


Storage Intelligence - about time!

I was recently reading an article about Backblaze releasing its storage pod designs. This is a 180TB NAS device in 4U! Absolutely huge! A 42U rack would be able to hold around 1.8 petabytes.
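As a quick sanity check on that figure, here is a back-of-the-envelope sketch. The 4U/180TB pod figures are from the article; the assumption that the odd rack units left over go to switching/PDUs is mine:

```python
# Back-of-the-envelope: raw capacity of a 42U rack filled with 4U/180TB pods.
# Pod figures are from the Backblaze article; losing the leftover 2U to
# switching/PDUs is an assumption for illustration.

RACK_UNITS = 42
POD_UNITS = 4
POD_CAPACITY_TB = 180

pods_per_rack = RACK_UNITS // POD_UNITS              # 10 pods, 2U left over
raw_capacity_tb = pods_per_rack * POD_CAPACITY_TB    # 1800 TB

print(f"{pods_per_rack} pods per rack, {raw_capacity_tb} TB raw (~{raw_capacity_tb / 1000:.1f} PB)")
```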


When thinking about Petabytes, one thinks about the big players in storage, EMC/NetApp/HDS, selling tens of storage racks covering substantial parts of the datacenter floor space and offering a fraction of this capability.


Clearly, the storage profile of what the large monolithic enterprise arrays offer is different. However, Backblaze clearly highlights the ability to get conventional "dumb" storage easily and at low cost! Packing some flash cache or SSD in front would already bring these boxes to a comparable I/O capability ;-)

This makes the case that storage per se is not really a challenge anymore. However, making storage contribute to the overall performance equation - making sure that storage helps accelerate specific workloads - is going to be critical going forward. Basically, intelligent storage!

Questioning "Accepted Wisdom"

Many IT shops still think of storage as a separate part of their estate. It should simply store data and provide it back rapidly when asked - politely. The continued stifling of innovation in datacenters, due to having a single answer for all questions - namely VMware/hypervisors and server virtualization - stops the innovative thinking that might actually help an organisation accelerate those parts of the application landscape that drive revenue.

Some questions that came to mind, and that are also echoed by clients, are:

  • Disk is cheap now. SSD answers my performance needs for storage access. Is there something that together with software actually increases the efficiency of how I do things in the business?

  • For whole classes of typical applications - structured data persistence pools, web servers etc. - what would "intelligent" storage do for the physical estate and the business consumers of this resource?

  • How can enterprise architecture concepts be overlaid to intelligent storage? What will this mean to how future change programmes or business initiatives are structured and architected?

  • How can current concepts of intelligent storage be used in the current datacenter landscape?

We are seeing the first impact of this type of thinking in the structured data / database world. By combining the database workload with storage, and through software enablement, we get intelligent acceleration of store/retrieval operations. This is very akin to having map-reduce concepts within the relational database world.

Further, combining storage processing with CPU/RAM/networking offload of workload-specific storage requests facilitates unprecedented scale-out, performance and data compression capabilities.
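To make the offload idea concrete, here is a deliberately simplified sketch of the principle: push part of the query (a filter predicate) down to the storage tier so that only matching rows travel back to the database tier. This is a conceptual illustration only, not how Exadata Storage Cells or any vendor's product is actually programmed; all class and field names are invented:

```python
# Conceptual sketch of predicate push-down to an "intelligent" storage tier.
# All names are invented for illustration; this is not a vendor API.

class DumbStorage:
    """Returns every row; all filtering happens back on the database server."""
    def __init__(self, rows):
        self.rows = rows

    def scan(self):
        return list(self.rows)                      # ships ALL rows over the wire


class IntelligentStorage(DumbStorage):
    """Evaluates the predicate locally and ships only the matching rows."""
    def scan(self, predicate=None):
        if predicate is None:
            return list(self.rows)
        return [r for r in self.rows if predicate(r)]   # offloaded filter


# A synthetic table of 100,000 order rows, roughly 25% in region "APAC".
orders = [{"id": i, "region": "APAC" if i % 4 == 0 else "EMEA", "value": i * 10}
          for i in range(100_000)]

dumb = DumbStorage(orders)
smart = IntelligentStorage(orders)

# SELECT * FROM orders WHERE region = 'APAC'
rows_moved_dumb = len(dumb.scan())                                    # 100,000 rows moved
rows_moved_smart = len(smart.scan(lambda r: r["region"] == "APAC"))   # 25,000 rows moved
print(rows_moved_dumb, rows_moved_smart)
```

The point of the sketch is the traffic difference: the "intelligent" tier returns a quarter of the rows for the same logical query, which is where the scale-out and acceleration claims come from.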

Oracle's Engineered Systems, the Exadata Database Machine in particular, represents this intelligent storage concept, amongst other innovations, for accelerating the Oracle database workload.

These workload specific constructs foster security of precious data assets, physically and logically. This is increasingly important when one considers that organisations are using shared "dumb" storage for virtual machines, general data assets and application working data sets.

In the general marketplace other vendors (IBM PureSystems + DB2, Teradata, SAP HANA etc) are starting to use variations of these technologies for intelligent storage. The level of maturity varies dramatically, with Oracle having a substantial time advantage as first mover.

2013-2015 will see more workload focused solutions materializing, replacing substantial swathes of datacenter assets built using the traditional storage view.

Why is this important for the CIO, CTO & CFO?

Intelligent workload-focused storage solutions are allowing CIO/CTOs to do things that were not easily implemented within solutions based on server virtualization technology using shared monolithic storage arrays - dumb storage - such as in the VMware enabled VCE Vblock and HP CloudSystem Matrix - which are effectively only IaaS solutions.

Workload specific storage solutions are allowing much greater consolidation ratios. Forget the 20-30-40 Virtual Machines per physical server. Think 100s of workloads per intelligent construct! An improvement of 100s of percent over the current situation!
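To put "100s of percent" into numbers, a simple comparison sketch follows. The per-server VM counts are those quoted above; the 200-workload figure for an intelligent construct is an illustrative assumption, not a benchmark:

```python
# Illustrative consolidation arithmetic only - not a benchmark.
vms_per_x86_server = 30            # mid-point of the 20-30-40 VMs quoted above
workloads_per_construct = 200      # assumed figure for a workload-specific construct

ratio = workloads_per_construct / vms_per_x86_server
print(f"{ratio:.1f}x the workloads, i.e. roughly {(ratio - 1) * 100:.0f}% improvement")
# -> 6.7x the workloads, i.e. roughly 567% improvement
```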

It is important to verify how intelligent storage solutions can be a part of the CIO/CTO's product mix to support the business aspirations as well as simplify the IT landscape. Financing options are also vastly simplified with a direct link between business performance and physical asset procurement/leasing:

  • Intelligent storage removes architectural storage bottlenecks and really shares the compute/IO/networking load more fully.

  • Intelligent storage ensures those workloads supporting the business revenue generating activities are accelerated. Acceleration is linked to the cost of underlying storage assets. As cost of NAND flash, SSDs and rotating disks drop, more is automatically brought into the storage mix to reduce overall costs without disrupting the IT landscape.

  • Greater volumes of historic data are accessible thanks to the huge level of context sensitive, workload-specific data compression technologies. Big data analytics can be powered from here, as well as enterprise datawarehouse needs. This goes beyond simple static storage tiering and deduplication technologies that are unaware of WHAT they are storing!
  • Workload-specific stacking supports much higher levels of consolidation than simple server virtualization. The positive side effects of technologies such as Exadata include the rationalization of datacenter workload estates in terms of variety; operating systems can be rationalized, leaving a net-net healthier estate. This means big savings for the CFO!

Intelligent storage within vertically engineered, workload-specific constructs - what Gartner calls Fabric Based Infrastructure - presents a more cogent vision of optimizing the organization's IT capability. It provides a higher-level understanding of how precious funding from CFOs is invested in those programmes necessary for the welfare of the concern.

CIO/CTOs still talking about x86 and server virtualization as the means to tackle every Business IT challenge would be well advised to keep an eye on this development.

Intelligent storage will be a fundamental part of the IT landscape allowing effective competition with hyperscale Cloud Providers such as Google/Amazon and curtailing the funding leakage from the business to external providers.

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by my current employer and does not necessarily reflect the views and opinions of my employer.

The Shape of Things to Come!

A lot of what I do involves talking with thought leaders from organizations keen to transform how they do business. In some cases, they espouse thoughts moving along general industry lines or marketing. In others, there is real innovative thought taking place. I firmly believe innovation starts with questioning the status quo.

We are bombarded with Intel x86 as the ultimate commodity processor offering everything one could possibly imagine on the one hand, and with the public cloud as the doom of in-house IT centers on the other. It is incumbent on all in this industry to think beyond even the cloud as we know it today.

Questioning "Datacenter Wisdom"

This blog entry is entitled "The Shape of Things to Come" with a clear series of ideas in mind:

  • Systems-on-a-Chip (SoCs) are getting very powerful indeed. At what point will these be so powerful that they represent the same order of magnitude as an entire hyperscale datacenter from Google or Amazon with a million machines inside?

  • Why does in-house IT have to move out to the cloud? Why could hyperscale clouds not be built up from capacity that organizations are already putting in place? This would be akin to the electricity grid as the transport for capacity created from multiple providers. Borrowing capacity could be done in-industry or across-industries.

  • Why is there disaggregation of all components at a physical datacenter level (CPU, RAM, storage, networking etc) rather than having assembly lines with appliances/constructs hyper-efficient at a particular task within the enterprise portfolios of services and applications?

  • Why are servers still in the same form factor of compute, memory, networking and power supply? Indeed why are racks still square and datacenter space management almost a 2-dimensional activity? When we have too many people living in a limited space we tend to build upwards, with lifts and stairs to transport people. Why not the same for the datacenter?

I'm not the only one asking these questions. Indeed, in the industry the next wave of physical manifestation of new concepts is taking place, albeit slowly. I wanted to share some industry insight as examples to whet the appetite.

  • At Cornell University a great whitepaper on cylindrical racks, using 60GHz wireless transceivers for interconnects within the rack, shows a massively efficient model for ultrascale computing.

  • Potentially the server container would be based on a wheel with servers as cake slice wedges plugged into the central tube core. Wheels would be stacked vertically. Although they suggest wireless connectivity, there is no reason why the central core of the tube could not carry power, networking and indeed coolant. Indeed the entire tube could be made to move upwards and downwards - think tubes kept in fridge like housings (like in the film Minority Report!)

  • One client suggested that CPUs should be placed into ultracooled trays that can use the material of the racks as conductors and transport to other trays full of RAM. We do this with hard disks using enclosures. Indeed Intel does 3D chip stacking already!
    • Taking the Intel 22nm Xeons with 10 cores or indeed Oracle's own SPARC T5 at 28nm and 16 cores as building blocks
    • A 2U CPU tray would allow, say, 200 such processor packages. This is an enormous capability! For the SPARC T5 this would be 3,200 cores, 25,600 threads and over 11THz of aggregate clock speed (see the back-of-the-envelope sketch after this list)!
    • Effectively, you could provide capacity on the side to Google!
    • A RAM tray would basically allow you to provide 20TB+ depending on how it is implemented (based on current PCIe based SSD cards).
  • Fit-for-purpose components for particular workloads as general assembly lines within an organization would fit in well with the mass-scale concepts that the industrial and indeed digital revolutions promoted.
    • If we know that we will be persisting structured data within some form of relational database, then why not use the best construct for that. Oracle's Engineered Systems paved the way forward for this construct.
    • Others are following with their own engineered stacks.
    • The tuning of all components and the software to do a specific task that will be used for years to come is the key point!
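Here is the back-of-the-envelope check behind the tray numbers above. The 200-packages-per-2U-tray density is the speculative assumption from the discussion; the per-chip figures are the SPARC T5's published core count, thread count and clock speed:

```python
# Back-of-the-envelope for the hypothetical 2U SPARC T5 "CPU tray".
# 200 packages per tray is the speculative assumption; cores, threads
# per core and clock are the T5's published figures.

PACKAGES_PER_TRAY = 200
CORES_PER_CHIP = 16
THREADS_PER_CORE = 8
CLOCK_GHZ = 3.6

cores = PACKAGES_PER_TRAY * CORES_PER_CHIP          # 3,200 cores
threads = cores * THREADS_PER_CORE                  # 25,600 hardware threads
aggregate_thz = cores * CLOCK_GHZ / 1000            # ~11.5 THz of aggregate clock

print(cores, threads, round(aggregate_thz, 2))      # 3200 25600 11.52
```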

So the technical components in this radical shake-up of the datacenter are materializing. We haven't even started to talk about some of the work happening in material science providing unparalleled changes in CPUs (up to 300GHz at room temperature) or even non-volatile RAM totally replacing spinning disk and possibly SSD and DRAM.


Why is this important for the CIO, CTO & CFO?

Customers typically ask whether they should move everything out to cloud providers such as Google/Amazon or private cloud hosters such as CSC/ATOS/T-Systems. Well, looking at the nexus of technological change that is almost upon us, I would say that at some level it might make sense to evaluate the mix of on-premise and off-premise resources.

The Cloud is effectively a delivery model - some applications such as email can clearly live in the public cloud, bearing in mind privacy issues. However, the capabilities an organization needs to thrive and exploit market forces, as expressed in its Enterprise Architecture, can be delivered in other ways.

  • Server virtualization relies on workloads not taking all the resources of a physical server. You should be questioning why the software, the most expensive component, is not being used to its maximum. Solving server acquisition costs alone does not reduce your costs in a meaningful way.

  • Entertain the idea that with acceleration at every level of the stack, information requests may be serviced in near-realtime! The business should be asking what it would do with that capability. What would you do differently?

  • Datacenter infrastructure may change radically. It may well be that the entire datacenter is replaced by a vertically stacked tube that can do the job of the current football-field-sized datacenter. How can you exploit assembly line strategies that will already start to radically reduce the physical datacenter estate? Oracle's Engineered Systems are one approach to this for certain workloads, replacing huge swathes of racks of equipment, storage arrays and network switches.

  • Verify if notions of desktops are still valid. If everything is accessible with web based technologies, including interactive applications such as Microsoft Office, then why not ensure that virtual desktops are proactively made obsolete, and simply provide viewing/input devices to those interactive web pages.

  • Middleware may well represent a vastly unexplored ecosystem for reducing physical datacenter footprints and drastically reducing costs.
    • Networking at 100+Gbps already enables bringing your applications/web powered effective desktops with interaction to the users' viewing devices wherever they are.
    • Use intra-application constructs to insulate from the technical capability below. Java applications have this feature built-in, being cross platform by nature. This is a more relevant level of virtualization than the entire physical server.

  • Security should be enabled at all layers, and not rely on some magic from switch vendors in the form of firewalls. It should be in the middleware platforms to support application encapsulation techniques, as well as within pools of data persistence (databases, filesystems etc).

Enterprise architecture is fueling a new examination of how business defines the IT capabilities it needs to thrive and power growth. This architecture is showing a greater reliance on data integration technologies, speed to market and indeed the need to persist greater volumes of data for longer periods of time.

It may well be incumbent on the CIO/CTO/CFO to pave the way for this brave new world! They need to be ensuring already that people understand that what is impossible now, technically or financially, will sort itself out. The business needs to be challenged on what it would do in a world without frontiers or computational/storage limitations.

If millions of users can be serviced per square meter of datacenter space using a cylindrical server tube wedge/slice - why not do it? This is not the time for fanatics within the datacenter who railroad discussions back to what they are currently using - or who provide the universal answer "server virtualization from VMware is the answer, and what is the question?".

Brave thinking is required. Be prepared to know what to do when the power is in your hands. The competitive challenges of our time require drastic changes. Witness what is happening in the financial services world with traders being replaced by automated programs. This requires serious resources. Changes in technology will allow this to be performed effortlessly, with the entire stock market data kept in memory and a billion risk simulations run per second!

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by my current employer and does not necessarily reflect the views and opinions of my employer.

A Resurgent SPARC platform for Enterprise Cloud Workloads (Part 2) - SPARC T5

Some time ago, I blogged about the resurgence of the SPARC platform. The then newly designed SPARC T4 was showing tremendous promise in its own right, able to take up the platform's former mantle of innovation leader running extreme workloads with the Solaris 11 operating system.

Indeed, it was used as the driving engine of the SPARC SuperCluster, dealing not just with massive acceleration of Oracle database workloads using the Exadata Storage Cell technology, but also combining firmware-embedded, near-zero-overhead virtualization concepts: electrically separate logical domains carving up the physical hardware, and Solaris zones which allow near-native "virtual machines" sharing an installed Solaris operating system.

Up to 128 virtual machines (zones) are supported on a system - a vast improvement over the 20-30 one typically gets under VMware-like hypervisors!

This welcome addition to the wider Oracle engineered systems family allowed the missing parts of the datacenter to be consolidated - these being typically glossed over or totally skipped when virtualization with VMware-like hypervisors was discussed. Customers were aware that their mission critical workloads were not always able to run with an x86 platform which was then further reduced in performance using a hypervisor to support large data set manipulation.

Well, the rumor mills have started in the run-up to Oracle OpenWorld 2012 at the end of September. One of the interesting areas is the "possible" announcement of the SPARC T5 processor. This is interesting in its own right, as we have steadily been seeing the SPARC T4, and now the T5, gain ever greater embedded capability in silicon to drive database consolidation and indeed the entire WebLogic middleware stack, together with high-end vertical applications such as SAP, E-Business Suite, Siebel CRM and so on.

Speculating on these "rumors" and on the public Oracle SPARC roadmap, I'd like to indicate where I see this new chip making inroads into those extreme cloud workload environments whilst maintaining the paradigm of continuous consolidation. This paradigm, which I outlined in a blog in 2010, is still very relevant - the SPARC T5 provides alternative avenues to simply following the crowd on x86.

Questioning "Datacenter Wisdom"

The new SPARC T5 will have, according to the roadmap, the following features and technologies included:

  • Increasing System-on-a-Chip (SoC) orientation, providing ever more enhanced silicon accelerators for offloading tasks that software typically struggles with at cloud scale. This combines cores, memory controllers, I/O ports, accelerators and network interface controllers in a very utilitarian design.
  • 16 cores, up from the T4's 8. This takes the T5 right up to the top end in core terms.
  • 8 threads per core - giving 128 threads of execution per processor and providing exceptional performance for threaded applications such as Java and indeed the entire SOA environment (see the quick sketch after this list)
  • Core speeds of 3.6GHz providing exceptional single threaded performance as well as the intelligence to detect thread workloads dynamically (think chip level thread workload elasticity)
  • Move to 28nm from 40nm - continuous consolidation paradigm being applied at silicon level
  • Crossbar bandwidth of 1TB/s (twice that of the T4) providing exceptional straight line scaling for applications as well as supporting the glueless NUMA design of the T5
  • Move to PCIe Generation 3 and 1TB/s memory bandwidth using 1GHz DDR3 memory chips, which will start to provide the means of creating very large memory server configurations (think double-digit TB of RAM for all-in-memory workload processing)
  • QDR (40Gbps) Infiniband private networking
  • 10GbE Public networking
  • Database workload stacking becomes even more capable and effective than simple hypervisor based virtualization for datacenter estate consolidation at multiple levels (storage, server, network and licensed core levels)
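Pulling a few of those roadmap numbers together, here is a short sketch of what they imply per socket and for a hypothetical multi-socket, glueless-NUMA server. The 8-socket configuration is my own assumption for illustration, not an announced product:

```python
# What the roadmap figures above imply per socket, and for a hypothetical
# 8-socket glueless-NUMA server (the socket count is an assumption).

CORES_PER_SOCKET = 16
THREADS_PER_CORE = 8
CLOCK_GHZ = 3.6
SOCKETS = 8

threads_per_socket = CORES_PER_SOCKET * THREADS_PER_CORE   # 128 threads of execution
server_cores = CORES_PER_SOCKET * SOCKETS                  # 128 cores
server_threads = threads_per_socket * SOCKETS              # 1,024 hardware threads
aggregate_ghz = server_cores * CLOCK_GHZ                   # ~460 GHz of aggregate clock

print(threads_per_socket, server_cores, server_threads, aggregate_ghz)
```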

This in itself, at the processor level, is really impressive, but the features on the roadmap aligned to the T5 are possibly the real crown jewels:

  • On-die crypto accelerators for encryption (RSA, DH, DSA, ECC, AES, DES, 3DES, Camellia, Kasumi) providing excellent performance through offloading. This is particularly relevant in multi-tenant Cloud based environments
  • On-die message digest and hashing accelerators (CRC32c, MD5, SHA-1, SHA-224, SHA-256, SHA-384, SHA-512) providing excellent security offloading. Again particularly relevant in multi-tenant environments
  • On-die accelerator for random number generation
  • PCIe Generation 3 opens the door to even faster Infiniband networking (56Gbps instead of the current 40Gbps - with active-active links being possible to drive at wire speed)
  • Hardware based compression which will seriously reduce the storage footprint of databases. This will provide further consolidation and optimization of database information architectures.
  • Columnar database acceleration and Oracle number acceleration will provide extremely fast access to structured information. Further, when combined with in-memory structures, the database will literally be roaring!

Indeed when we think that the Exadata Storage cells will also be enhanced to support new chip generations, flash density as well as other optimizations, the next SPARC Supercluster (which has the embedded Exadata storage cells) will literally be one of the best performing database platforms on the planet!

To ignore the new SPARC T5 (whenever it arrives) is to really miss a trick. The embedded technology provides true, sticky competitive advantage to anyone running a database workload or indeed multi-threaded applications. As a Java, middleware and SOA platform, as well as a vertical application platform, it can seriously benefit the enterprise.

Why is this important for the CIO & CFO?

CIOs and CFOs are constantly being bombarded with messages from IT that x86 is the only way to go, that Linux is the only way to go, that VMware is the only way to go. As most CFOs will have noted by now:

  • Financially speaking - the x86 servers may have been cheaper per unit, but the number of units needed to get the job done is so large that any financial advantage that might have been there has evaporated!
  • Overall end-2-end costs for those services that the CIO/CFO signed off on are never really well calculated for the current environment.
  • Focused investment on those activities that support revenue streams and those technologies that will continue to do that for at least the next decade with capacity upgrades of course
  • There must be other ways of doing things that make life easier and more predictable

Well, Engineered Systems with the new SPARC T5 represent a way for the CIO/CFO to power those projects that need investment and which in turn drive revenue and value. The ability to literally roll in the SPARC SuperCluster or any other Engineered System is going to be instrumental in:

  • Shortening project cycles at the infrastructure level
    • don't lose 6 months on a critical ERP/CRM/Custom application project in provisioning hardware, getting unexpected billing for general infrastructure layers such as networking that have nothing to do with this project, IT trying to tune and assemble, getting stuck in multi-vendor contract and support negotiations etc.
    • That time can be literally worth millions - why lose that value?
  • Concentrate valuable and sparse investment strategies literally to the last square meter in the datacenter!
    • If that next project is a risk management platform, then IT should be able to allocate, down to the last datacenter floor tile, exactly the resources needed for that one project alone, and the associated cost
    • Project-based or zero-based budgeting will allow projects to come online faster and predictably, reusing existing platforms to deal with the load as well as supporting continuous workload consolidation paradigms
    • Finance enterprise architecture projects that put in the enabling conditions to support faster turnaround for critical revenue-focused/margin-increasing project activity

Engineered systems are already using the technologies that the rest of the industry is trying to re-package to meet the challenges customers are facing now and in the coming years. The lead is not just in technology but also in the approach that customers are demanding - specific investments balanced with specific revenue-generating, high-yield business returns.

As a CIO it is important to recognize the value that Engineered Systems and the SPARC platform, as part of an overall datacenter landscape, bring in addressing key business requirements and ensure an overall simplification of the Datacenter challenge and large CAPEX requirements in general.

As Oracle and others proceed in acquiring or organically developing new capabilities in customer-facing technologies and in managing exabyte data sets, it becomes strategically important to understand how that can be dealt with.

Hardware alone is not the only answer. Operating systems need to be able to deal with big thinking and big strategy, as do applications and the hardware. By creating balanced designs that can then scale out, a consistent and effective execution strategy can be managed at the CIO/CTO/CFO level to ensure that the business is not hindered but encouraged to the maximum, through removing barriers that IT may well have propagated with the state of the art of many years ago.

Engineered Systems enable and weaponize the datacenter to directly handle the real-time enterprise. High-end operating systems such as Solaris and the SPARC processor roadmap are dealing with the notions of having terabyte datasets, millions of execution threads and thousands of logical domains with hundreds of zones (virtual machines) each per purchased core.

Simply carving up a physical server's resources to make up for the deficiencies of the operating system/application in dealing with workloads can't be an answer by itself. This is also partly what is fueling Platform-as-a-Service strategies. How to get systems working cooperatively together to deal with more of the same workload (e.g. database access/web server content for millions of users), or indeed different workloads spread across systems transparently, is the question!

High performance computing fields have been doing just this with stunning results albeit at extreme cost conditions and limited workloads. Engineered systems are facilitating this thinking at scale with relatively modest investment for the workloads being supported.

It is this big thinking from organizations such as Oracle and others, who are used to dealing with petabytes of data and millions of concurrent users, that can fulfill the requirements expressed by the CIO/CTO/CFO teams. If millions of users needing web/content/database/analytics/billing can be serviced per square meter of datacenter space - why not do it?

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by Oracle and does not necessarily reflect the views and opinions of Oracle.

The Practical Cloud - Road to Cloud Value Derivation (CVD)

After a change of residence and a change of job - it is high time to write another blog!

In talks with many customers, and indeed from feedback on this blog site, I receive a lot of indications that the "current Cloud" the pre-2011 industry was marketing was simply tackling the infrastructure side of things. Much of the focus was on consolidating the x86 server estate and delivering features such as migration of live virtual machines across physical servers.

While this is fine as an initial step towards deriving some form of value - it is typically too little. Many business leaders and IT managers indicate that "we have bought into the Cloud, we have virtualized and we can even offer a level of VM automation in terms of provisioning" - "so where is this taking us and where can I highlight the ongoing value to the business?".

This is a very valid line of questioning. The client has millions of $$bucks$$ of equipment sitting on the floor, they have attended training and have done everything they were told to do. The result - they can create a virtual machine with software in minutes as opposed to hours or days. Cool!

That is not a lot to show the business for all that investment and evangelism. This is typically (and incorrectly) lauded as a solution and a great win for all!

IaaS alone, in my opinion, always delivered too little value. The approach of simply putting together servers, storage and network with a thin veneer of hypervisor magic has limited value in itself. This was, incidentally, the main haunt of mainstream hypervisor purveyors until 2011.

This type of datacenter transformation using pre-assembled hardware for the sole purpose of consolidating x86 is too simple and let's face it - too dumb. Clients are cleverer than that. Clients have persisted in following the virtualization wave, and that is good. They have somewhat resisted the Cloud marketing till now as it was simply focused on replacement of their existing hardware and hypervisor stack.

Towards the tail end of 2011 we started seeing a stronger focus on provisioning enterprise software and environments - DB as a Service (DBaaS), which was nothing more than installing a database instance on a virtual machine through a browser provisioning page. Well, that is better - but it still does not smack of value! Indeed, if you wanted many big database instances with, say, 64 virtual CPUs per VM, you were out of luck! AND yes, there are customers that do this!

In 2011, we started to see the emergence of the appliance. This was an entire hardware and software stack that was factory installed. In some cases, such as the EMC GreenPlum appliance, this was built using the components with functional tuning to undertake the task. Others such as Oracle with Exadata Database Machine (which has been around since 2008 incidentally - but first used Sun intellectual property acquired in 2010) not only took the idea of virtualization but actually embedded it into all the components in the stack.

Through innovation, integration, best-of-breed technology and the simple idea that a system should do what it is designed for to the best of its ability, Exadata represents, in my opinion, a new approach to transformation that makes real business impact.

I am sure that during 2012 we will see a move away from the generalized Cloud stacks, such as VCE Vblock, Dell prepackaged servers with VMware installed, and something similar from HP VirtualSystem for VMware. These systems are all focused on helping the hypervisor - in this case VMware vSphere - perform its job well. However, the hypervisor only lets you manage virtual machines! It does not do anything else!

That is also the reason that I see the move away from expensive hypervisor software solutions towards open source solutions or systems having the hypervisor embedded as a functional technology to support an enterprise software stack - with no $$ for the hypervisor per se.  

The Race to Business Value

One of the issues that has been stagnating business value derivation through Cloud technologies has been the lack of business as a driving stakeholder. Business should be driving the IT roadmap for an organisation. Business defines what it wants from developers in the form of functionality. Why not the same for IT infrastructure?

You see the value of Business is that it thinks differently. Business tends to think at levels of enterprise architecture holistically as a driver and motor for business value generation! They think frameworks and they think (with developers and architects) in terms of enabling software platforms upon which to further their unique selling points.

The real Cloud value to be derived in that case is based on the software Cloud platforms leveraged to facilitate global service/application delivery with quality of service baked in. These platforms in turn are used to create further value!  

The real business case for the Cloud comes in the form of Platform-as-a-Service (PaaS). I think that Exadata hits this nail on the head. I don't just want to be able to setup a virtual machine running the database inside, I want the functionality of the database itself! Exadata delivers just that through a clever blend of components!

Why is this important for the CIO?

CIOs have set the agenda for Cloud in 2010-2011. They have seen that it has an effect on the delivery of IT services - but not necessarily a direct impact in the culture of the business or indeed the value the business derives. The early gains have been achieved, and it is time to move on to business focused IT.

CIOs look beyond the mainstream hype. They verify through intensive research and peer-level networking the effect of IT strategies on business value. The CIO pioneers and sets the agenda for deep intelligent consolidation. Not just doing more with less - BUT gaining greater business insight and leverage with fewer more effective resources!

Exadata, and engineered systems of that ilk, with embedded technology are paving the way for scale-up/out with extremely high performance and gathering in the benefits/innovations of the IT industry over the last years e.g. unified networking with Infiniband, high performance SSD storage, deduplication, compression, tiered value-oriented storage, big data capable file systems and indeed open source.  

That is a very potent mix, and Oracle customers are actively leveraging this. They have been using Linux and Oracle Solaris 11 to support those enterprise workloads needing that level of reliability and speed. They have been consolidating hundreds of database and middleware servers - yes - hardware, mixed OSs, non-x86 systems, licenses, management tools, script frameworks and so forth. This is real consolidation!

Further, they have used the well-respected, enterprise-capable Oracle 11g platform to power their Java applications, drive the backend of their middleware platforms, and create new value by delivering applications through the Exadata platform to the mobile space (iPads, Androids, browsers, OS-independent applications).

Indeed, if the Java virtual machine (JVM) is one of the ultimate forms of virtualization, it makes perfect sense that as a business which has elected to use that technology you create the underlying infrastructure AND platform ecosystem to support those efforts at scale.

The Corporate Cloud Strategy can be dramatically refreshed and aligned with the ability to deal with all data needs in a single well managed platform. Exadata provides in this case the ability to deal with all database needs that an organisation has from the smallest to the largest. It provides significant front-end direct value. 

Other Exasystems have started to arrive to deal with specific challenges such as big data and middleware. These use the same magic sauce as the Exadata Database Machine, but are tuned/enhanced for their specific functions. Deep, lasting transformation can be achieved, and the very nature of these Exasystems means the Business must be included as a principal stakeholder - they can truly see what the value of extracting a business insight means in hard $$ terms!

Look out for these paradigms that directly affect business value and indeed allow new business insight to be gained by easily manipulating petabytes of information in near-realtime! They provide the ability for the business to rapidly come to market with new products, directly support application developers, and are built on industry-proven technologies - and best of all - they retain the key know-how of your developers and DBAs - they will be up and running with little change to their operational routine!

 

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by Oracle and does not necessarily reflect the views and opinions of Oracle.

Time to Consider Scale Up in Virtualized Environments?

The recent announcements from AMD of their 16-core Opteron 6200 CPU and indeed from Intel of their 10-core Xeon E7 indicate a resurgence of the scale-up mentality. Indeed, the virtualization bandwagon is partially responsible for fueling this rise.

While on the one hand we have every virtualization vendor touting server consolidation and datacenter efficiency using simple scale-out models based on x86 technology, it is also apparent that to get full efficiency, the density of virtual machines to physical hosts (VM:hypervisor host) needs to increase.

Factor in licensing and transformation costs, and it becomes increasingly difficult to create business cases that really make sense for an organization to invest in new hardware to get the best out of virtualization and ultimately the cloud - unless that very high density can be achieved.

Add to this the ever-increasing core counts still in the pipeline from Intel/AMD, and challengers to the throne such as ARM with their 64-bit ARMv8 architecture. Dell, HP and indeed Google are tinkering with or have produced server designs that utilize ARM technology. The instant draw here is their much better power and thermal profile compared to Intel and AMD. With figures such as 2W per core being bandied around and up to 128 cores in a package, ARMv8 System-on-a-Chip (SoC) designs are nothing to be sneezed at.

That potentially opens up server designs with a huge number of cores within a single server chassis - an eight-socket ARM design might have 1000+ cores in a 5U chassis at around 2kW! Scale-up with power efficiency. Sure, we can't compare ARM directly to high-end x86 processors, but for strategic thinkers this should be on the radar.
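The arithmetic behind that claim is below; both the per-core power figure and the socket count are rough assumptions rather than shipping-product specifications:

```python
# Rough sizing of the hypothetical 8-socket ARMv8 box described above.
# 128 cores per SoC and 2W per core are the figures being bandied around;
# the 8-socket, 5U packaging is speculative.

SOCKETS = 8
CORES_PER_SOC = 128
WATTS_PER_CORE = 2

total_cores = SOCKETS * CORES_PER_SOC                 # 1,024 cores in one chassis
core_power_kw = total_cores * WATTS_PER_CORE / 1000   # ~2.0 kW for the cores alone

print(total_cores, core_power_kw)                     # 1024 2.048
```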

 

Why is this important for the CIO?

Many of the clients that I meet are constantly talking about scale-out, and indeed this is also a valid model for datacenters and applications with key advantages. However, much of this is also coming from the mass market marketing.

I believe that CIOs need to look beyond the mainstream hype. Scale-up has some very serious advantages, particularly in handling certain intensive workloads coherently, such as databases, analytics and indeed big data.

The old rules regarding scale-up are gradually being rewritten - by lowered licensing costs, commodity components, low powered CPU design, faster networks such as Infiniband and 40/100GbE. Combine this with the strong Linux adoption in the enterprise space and we have an old concept going through a rejuvenation cycle.

The Corporate Cloud Strategy will certainly benefit from a multi-pronged approach ensuring that compute power is where it is needed for the appropriate workloads. Indeed, from a sourcing side, many non-Intel processors have multiple foundries that can deliver the product - reducing reliance on a single vendor/foundry approach.

Those same scale-up systems will be able to run potentially far greater numbers of applications that are themselves already encapsulated (such as Java based applications running in their own JVM). Indeed, as core counts increase, these same trusty workhorses could benefit from a constant workload consolidation approach - ever more VMs/applications running on the same server footprint - with a simple processor change.

My last series of posts have been focused on making sure good old common sense, targeted at application-value delivery, is not forgotten. Older rationale should not just be discarded, but combined effectively with newer paradigms, such as Cloud, for dramatic effect in datacenter and application transformation programmes.

Intel would have you believe that x86 has successfully removed the need for any processor architecture other than theirs, even citing evidence in the form of older, defunct legacy processors such as Alpha - hold on - did Intel not buy all the Alpha intellectual property in 2001? Well, that is one way of getting rid of competition, although that same Alpha IP cropped up in later Intel designs.

New disruptive platforms such as ARM (128 cores, 2W per core, SoC design), or indeed rejuvenated platforms such as the SPARC T4 (8 threads per core, 8 cores), all offering virtualization of operating systems and/or applications, should definitely be on the enterprise infrastructure architecture agenda ;-)

 

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC.

Competition Heating up for VMware - Can one still char$$ge for a Hypervisor?

In all areas of the market, we are beginning to see hypervisors cropping up. Each has various pros and cons. However, the cumulative effect on the virtualization market is important for strategists to note. Even VMware's new mobile virtualization platform is being directly attacked by Xen/KVM on ARM.

VMware is constantly touted as being somewhere between 2 and 4 years ahead of the market. I don't agree with that point of view - time and competitor movements in the wider market are not linear in their behaviour.

Indeed, even the stronghold of the Intel-VMware platform is being gradually encroached upon by ARM and AMD. They are building silicon extensions similar to Intel's, with the added advantage of open source hypervisors gaining traction in a development community that is potentially many times the size of VMware's.

It is a sobering thought that even VMware originally started out from academia and that the hypervisor itself is based on Linux. Academia is now firmly behind open source hypervisors. Once the likes of Red Hat, Citrix and Oracle start weighing in with their development expertise, and in Oracle's case with the added advantage of tuned hardware systems, it will be interesting to see if VMware is still the only game in town.

 

Why is this important for the CIO?

CIOs balancing the long-term view against short-term tactical needs must understand that, when looking at becoming Cloud capable, VMware is not the only solution. The idea of "good enough" should be a strong motivator for product and solution selection.

Indeed, the CIO and team would be well advised to verify whether the savings they are expecting really will be delivered by a near-commodity hypervisor with hefty license costs, given the organisational need to be cost efficient and to tap into the marketing value of the cloud.

Interestingly, in a more holistic sense, the fact that open source hypervisors are continuing their trend of being available on every imaginable hardware platform, including mobile, is in itself a strategic factor. New challengers to Intel and AMD are cropping up, and indeed platforms that had faded into the background over 2009/2010 are surging ahead in 2011-2012 for high-end enterprise workloads - as mentioned in the blog "A Resurgent SPARC Platform for Enterprise Cloud Workloads".

The Corporate Cloud Strategy will certainly benefit from this type of thinking. It will highlight potential alternatives. Depending on the time horizon that the strategy is valid for, "good enough" may well be enough to realize the competitive advantage that is being targeted.

Certainly learning to adapt your organization for the realities of the cloud world requires time. Innovation built upon such enabling platforms requires not just a focus on the infrastructure but the application development environment and ultimately the software services that are consumed.

Remember it is the applications that deliver advantage. The quicker they are brought to market, and on platforms that allow cost efficiencies and agility, the better for the organization concerned. This in turn is leading to a stronger focus on appliances and engineered systems for enterprise virtualization... but that's for another blog, I think ;-)

 

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC.

A Resurgent SPARC platform for Enterprise Cloud Workloads

Fujitsu has just announced that they have taken the crown in Supercomputer performance breaking past the 10 petaflop barrier. That is over 10 quadrillion operations a second. Seriously fast.

Just when we thought that Intel/AMD and x86 would take over the world ;-) this beauty came along. For those interested in the speeds and feeds of the Kei Supercomputer - 22,032 four-socket blade servers in 864 server racks with a total of 705,024 cores!

This is a supercomputer with specific workload profiles running on it. However, looking at the scale of the infrastructure involved, we are basically looking at the equivalent of multiple large-scale Internet Cloud providers in this single construct.

Traditional Cloud providers may well find themselves with a new competitor: the HPC Supercomputer crowd. Supercomputers are expensive to run, but they have all the connectivity and datacenter facilities that one needs.

Clearly this is a departure from the Linux-based variants currently ruling the virtualization roost - VMware, Citrix with Xen, Red Hat with KVM, Oracle VM with Xen (and their Virtual Iron acquisition - one of the largest Xen-based Cloud providers). Now we also have Solaris back in the game with its own take on virtualization - Solaris Containers. All of this is probably more focused on enterprise workloads - think BIG DATA, think ERP/Supply Chain/CRM!

 

What does this all Mean for Virtualization and the Cloud?

Currently most thinking for Clouds centers around the marketing of the largest players in the market. Think Amazon, Google for public clouds, and then the extensive range of private cloud providers using underlying technologies based on x86 hypervisors.

Many of the reasons for this scale-out strategy with virtualization were centered around getting higher utilization from hardware as well as gaining additional agility and resiliency features.

High end mainframes and high end Unix systems have had resiliency baked in for ages. However this came at a price!

The Solaris/SPARC union particularly within large supercomputer shops provides an additional player in the market for enterprise workloads that still need scale-up and scale-out in parallel. This is clearly not for running a Windows SQL server, or a Linux based web server.

However, massive web environments can easily be hosted on such a construct. Large, intensive ERP systems could benefit, providing near-realtime information and event-response capabilities. One could easily imagine a supercomputer shop providing the raw horsepower.

As an example, the recent floods in Thailand are causing a huge headache for disk drive shipments worldwide. Linking an ERP system with big data analytics regarding the risk to supply chains based on weather forecast information as well as actual current events might have allowed a realignment of deliveries from other factories. That simulation of weather and effect on risk patterns affecting supply can certainly be performed in such a supercomputer environment.

 

Why is this important for the CIO?

When thinking about the overall Corporate Cloud Strategy, bear in mind that one size does not fit all. x86 virtualization is not the only game in town. A holistic approach based on the workloads the organization currently has, their business criticality and their ability to shape/move/transform revenue is the key to the strategy.

An eclectic mix of technologies will still provide a level of efficiency to the organization that a simple infrastructure-as-a-service strategy can not hope to reach.

Simply sitting in a Cloud may not be enough for the business. Usable Cloud capacity when needed is the key. This provides real agility. Being able to outsource tasks of this magnitude and then bring the precious results in-house is the real competitive advantage.

Personally, I am not quite sure that enterprises are quite ready to source all their ICT needs from a Public Cloud Provider just yet. Data security issues, challenges of jurisdiction and data privacy concerns will see to that.

That being the case, it will still be necessary for CIO/CTOs to create the IT fabric needed for business IT agility and to maintain the "stickiness" of IT-driven competitive advantage.

Keep a clear mind on the ultimate goals of a Cloud Strategy. Cost efficiency is important, but driving revenue and product innovation are even more critical. A multi-pronged Cloud strategy with a "fit-for-purpose" approach to infrastructure elements will pay in the long run.

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC.

Cloud Security Maneuvers - Governments taking Proactive Role

In a previous blog entitled VMworld 2011 - Practice Makes Perfect (Security), I discussed the notion of preparing actively for attack in cyberspace through readiness measures and mock maneuvers.

This is happening at the level of nations. ENISA, in Cyber Atlantic 2011, shows how large groups/blocs of nations are working not only on increasing their capabilities, but on practicing in concert to see how global threats can be prevented or isolated in cyberspace.

This is at least as intensive as a NATO exercise: languages, cultures, varying capabilities and the synchronization of command-and-control capabilities all have to be managed, along with reporting and management at national levels.

APTs (Advanced Persistent Threats) will be the target in this exercise. This is a current and relevant threat, and credible measures are needed urgently. APTs can be used by organized crime or in state-sponsored attacks to circumvent even the most secure installations - typically nuclear or military. It is critical that measures and controls are in place at a national level.

Hopefully they will also cover the very sensitive areas of reporting to the press and to organizations that are being targeted or potentially targeted, as well as practical measures that everyday folk like you and me can implement quickly and easily. Remember, security starts with people!

 

What does this all Mean for Virtualization and the Cloud?

Clouds span organizations, nations, borders and cultures. We need to think in equal, if not greater, terms about security. Security in one area does not guarantee the security of the entire cloud or of the communities it serves.

There is, of course, a fine line between personal privacy rules, which are in place for very good reasons of personal liberty and democratic thinking, and the protection of assets in the Cloud from malicious attacks or plain theft of intellectual property.

Governments should not be excluded either. It is equally important that an individual's privacy rights are maintained without the threat of big brother, whether from other states or indeed from your own government. This is an area in which every individual needs to remain vigilant. Controls within government also need to be available to the individual should there be evident infringement without a court order authorizing surveillance. Even that needs to be double-checked!

This does, of course, also strengthen the case for private clouds, or at least closed community clouds. They provide another buffer perimeter against attack and ensure the ability to fence networks off from unwanted outside intruders.

This involves security by design. Measures to isolate Cloud elements as needed, together with proactive, event-triggered security responses, will entail ever smarter tools! The ability to process massive data and web logs in near real-time will power the heart of Automated Cloud Security Response & Tracking.
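As a rough illustration of what "event-triggered response powered by near-real-time log processing" could look like in practice, here is a minimal sketch, not a reference to any product. The log format, the failed-login threshold and the isolate_segment() action are assumptions made purely for the example.

```python
# Minimal sketch: near-real-time log analysis feeding an automated response.
# Thresholds, log format and isolate_segment() are illustrative assumptions.
import collections
import time

FAILED_LOGIN_THRESHOLD = 50   # failed logins per source within the window
WINDOW_SECONDS = 60

def isolate_segment(source_ip):
    # Placeholder for a real control, e.g. pushing a firewall rule or
    # quarantining a virtual network segment.
    print(f"ALERT: isolating traffic from {source_ip}")

def monitor(log_lines):
    """Consume (timestamp, source_ip, event) tuples and trigger isolation
    when a source exceeds the failed-login threshold within the window."""
    recent = collections.defaultdict(list)
    for ts, source_ip, event in log_lines:
        if event != "login_failed":
            continue
        # Keep only events inside the sliding window for this source.
        recent[source_ip] = [t for t in recent[source_ip] if ts - t < WINDOW_SECONDS]
        recent[source_ip].append(ts)
        if len(recent[source_ip]) >= FAILED_LOGIN_THRESHOLD:
            isolate_segment(source_ip)
            recent[source_ip].clear()

# Example: simulate a brute-force burst from a single address.
now = time.time()
sample = [(now + i, "10.0.0.99", "login_failed") for i in range(60)]
monitor(sample)
```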

 

Why is this important for the CIO?

Competitive advantage may not be the only reason for charting a hybrid course for your clouds. Fit-for-function micro-cloud capabilities (e.g. focused only on providing Database-aaS or Middleware-aaS) will ensure best-in-class features and provide an island of Cloud capability, with the required security measures, within the overall Corporate Cloud Strategy.

General purpose cloud constructs running standard workloads on x86 platforms will also have their own level of security. This may well involve a different defense strategy from the one protecting key structured and unstructured data repositories.

The fact that nation states are working collaboratively on cybersecurity provides an ideal opportunity for CIOs to link into that capability. National cyberdefense will have access to the latest, greatest and wildest threats by linking into vendor response systems (RSA, Symantec, Trend, Qualys etc.), which are able to gather data from the users of their respective solutions.

Further, the ability to liaise directly with the heads of global organizations to provide briefing information, along with joint public response measures with the media, will also enable a "soft landing" effect on global equity markets otherwise gripped by fear of the effects of a widespread cyber attack. I do feel that government should also provide a level of funding for corporate cyber security to ease the burden. Time will tell on this one!

One-size-fits-all clouds can be dangerous in a world where one needs to design for systems failing or being exposed to insidious attack. Although silos in IT are not the preferred approach, the idea of clearly fenced-off Cloud areas, organized around the type of data they operate on and their business impact analysis ratings, should be seriously on the CIO agenda.

Cost savings may well need to be re-channelled to address security concerns. Work with the CSO/CISO to get the funding for securing the business assets. Work with government to gain access to greater resources and possibly funding.

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC.

VMworld 2011 - Practice Makes Perfect (Security)

During the VMworld 2011 conference, the theme of security came up very strongly. Indeed, there were many parallels to the RSA Conference in February 2011, which echoed concerns about "putting all your eggs in one basket".

Many solutions were presented, including new innovations from VMware in the form of the vShield family and vertical integration into the RSA enVision tools. However, while tools are good, there are few substitutes for common sense and training.

Within all the sessions, I did not really see anything indicating how in-depth Cloud security was to be achieved. Security certifications are mainly focused on awareness of issues pertaining to this theme and some level of descriptive and prescriptive actioning that can be performed within a framework.

Taking a metaphor linked to security, namely defending a country, there are parallels that can be drawn. Typically there is an army of some sort embodying the capabilities of the security force (SecOps - Security Operations), and a command and control center for operations (SOC - Security Operations Center).

The army receives training, both general and specific, for particular engagement types (security awareness training, security tool training, system administration tasks such as patching, and general awareness of threat levels around the world in cybersecurity terms). The army stays fit and in shape to respond should it be called into action. The army is distributed to ensure a response in the correct measure and at the correct location (layered security distributed throughout a Cloud environment).

Lastly, to keep things short, there are mock trials and joint manoeuvres taking place to keep the training sharp and realistic, and to ensure a coordinated, knowledgeable response to said threats. This can be done with partners that share a similar set of goals, such as NATO. This is the bit that seems to be missing.

 

What does this all Mean for Virtualization and the Cloud?

In most client engagements I see, there is a lot of talk about security, security tools and so forth, but very little actual practice or manoeuvring takes place. Teams responsible for safeguarding an environment must have the means, and the regular practice, to put documented countermeasure plans into action at speed.
If those plans are automated, then they can be triggered automatically through the corresponding events, but the knowledge to trigger them by hand should also be present and tested regularly; a rough sketch of that dual trigger path follows.
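To illustrate the point about documented plans that can be fired automatically or exercised by hand, here is a small, purely hypothetical sketch. The playbook names, steps and the block_ip/snapshot_host/notify_soc helpers are invented; the point is only that the same code path serves both the automated trigger and the manual drill.

```python
# Illustrative sketch only: a countermeasure "playbook" runnable from an
# automated event trigger or from a manual, regularly tested invocation.
# All steps and helpers are hypothetical placeholders.

def block_ip(ip):
    print(f"[step] blocking {ip} at the perimeter")

def snapshot_host(host):
    print(f"[step] snapshotting {host} for forensics")

def notify_soc(detail):
    print(f"[step] notifying SOC: {detail}")

PLAYBOOKS = {
    "brute_force": lambda evt: (block_ip(evt["source_ip"]),
                                notify_soc(f"brute force from {evt['source_ip']}")),
    "malware_beacon": lambda evt: (snapshot_host(evt["host"]),
                                   block_ip(evt["dest_ip"]),
                                   notify_soc(f"beaconing from {evt['host']}")),
}

def run_playbook(name, event, triggered_by="event"):
    """Execute a documented response plan; 'triggered_by' records whether it
    fired automatically or was run by hand during a drill."""
    print(f"running '{name}' playbook (trigger: {triggered_by})")
    PLAYBOOKS[name](event)

# Automated trigger from a monitoring event:
run_playbook("brute_force", {"source_ip": "10.0.0.99"})
# The same plan exercised by hand during a regular drill:
run_playbook("malware_beacon",
             {"host": "web-07", "dest_ip": "203.0.113.5"},
             triggered_by="manual drill")
```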

In speaking with some clients on the floor at VMworld, I raised the idea and it seemed to generate a favourable response. Clients and would-be users of cloud technologies are clamouring for safety, and seek to assuage their fears through buying the next great security software that claims nothing needs to be done, apart from issuing a purchase order!

Let's face it, something does need to be done. Tools do need to be acquired - but as part of an Enterprise Security Architecture (ESA) focused on ensuring all IT supporting the business is safe by design, and kept safe through regular threat update measures.

Regular drills should be carried out to ensure that the security controls are in place and that mitigation controls can be invoked in extreme situations. In the most extreme cases it may be necessary to cut off outside connectivity completely while the threat is forensically investigated and stopped!

To be fair, the number of organisations that actually perform penetration (pen) testing has really increased. However, that is a means to validate the efficacy of the control measures already in operation, or to determine what is missing.

I would advocate processes and organisational structures within a Cloud-enabled organisation that enable the testing and simulation of attacks (mock war games), allowing each and every SecAdmin to block or thwart attacks. Further, attacks should be tracked to their source, with procedures for rapidly alerting cyber-authorities and ISPs so that damage is minimized and threat reduction measures are engaged on a broader scale.

 

Why is this important for the CIO/CSO?

The CIO/CSO have the responsibility to ensure that controls are in place, and that those controls can be verified and are ready for inspection by regulatory authorities (including the internal audit & security groups).

In terms of budgeting, and of ensuring the security of your Private Cloud is what users expect, a cyber-war footing needs to be maintained. This internal cyber-army should be equipped and trained to ensure the security of all assets, including the brand value of the company, which may be at risk from exposure or data leakage.

Globalization lends an extra lever to ensure this type of rigorous security is in place. The measures should be built into the Cloud infrastructure, as well as working in layers around the Private Cloud. SecAdmins should be working with SysAdmins, but there does need to be a clear separation of duties and associated responsibilities.

The Cxx agenda needs to include Cyber Security at Cloud scale to engender an IT ecosystem where business can thrive. The brand value such initiatives provide enables a sustained competitive advantage to accrue. An Enterprise Security Architecture should be in place, with security groups actively taking a role in supporting agility and speed to market - but with safety and with confidence!

Disclaimer

The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC.