
'Hyperconvergence' - What Cisco learned from the industry rush to 1 dot 'Oh' (Part 3)

empowering the technology luminary to re-define 'simple' and re-invent 'complete.' - Part 3

 

Minneota is a small Midwestern town in Lyon County, Minnesota. It takes its name from the Dakota language of the Sioux people who once inhabited the area; it means “much water.” That stands to reason, given that the South Branch of the Yellow Medicine River flows right through the city of about 1,300, which barely registers on a map about an hour east of the South Dakota state line. A better reference would be to say 'about a million miles from anything like it.'

There are no stop signs or traffic signals on Highway 68, the main thoroughfare, to slow you down, whether you’re coming or going, and with a posted speed limit of 30 miles per hour it’s possible to drive from one end of town to the other in about 40 seconds; about the same amount of time it takes to change your profile picture on Facebook, dislike it and then change it back.

A few years ago I found myself closing a customer deal in the neighboring town of Marshall, Minnesota, with a follow-on set of customer visits scheduled for the next day in Fargo, North Dakota. The local guys said the “drive up 68 to 75 would be much more scenic than Interstate 29,” and since it was a beautiful sunny day I was sold on the idea of the more time-consuming route; it was about a three-and-a-half-hour drive to Fargo and we’d closed the deal before noon.

The approach into Minneota takes you past the Farmers Cooperative Company grain elevator at the corner of N. Madison St. My meeting the next day would be to discuss my client’s concerns regarding the changes taking place in their data center. They, like most, recognized that their key technology investments of the past ten years worked against their absolute need to ease management and reduce the operational expenses of their silos of compute, storage and network. ‘Converged Infrastructure’ (CI) was already the most ubiquitous term in IT, and the trend to acquire new hardware solutions always began with a conversation that introduced our company’s approach to helping clients solve this antiquated problem. I had pulled to the side of the road to take a photo of the grain elevator; as a marketing manager I knew the opportunity would come to incorporate the image of the silos into a piece of collateral that would help tell the story of the modern approach to properly outfitting a data center. But I was already aware of the next wave of technology approaching the beach – it was just too soon to talk about it in Fargo the next day.

In its simplest form, a CI approach in the data center centers on the acquisition of tightly integrated systems with a common management tool, allowing IT to easily install and administer these optimized assets so that they appear as a pool of additional resources and not a new and separate silo. This is ideal for IT, because the traditional approach to keeping pace with the proliferation of business applications and hardware evolution was to deploy a new widget that supported a unique function but required its own management interface and incremental everything: rack space, added power and cooling cost, maintenance and licensing, and worst of all, a new potential point of failure. Each widget only performed key business functions at unique times of the day, rendering it underutilized, since its function, whether compute, storage or something else, sat idle for large chunks of time.

This was a loathsome problem that seemed to appear out of nowhere: each next-generation system offered better and better performance, but its return on investment was judged to be poor because it sat like a silo, separate from other similarly functioning widgets in the data center, idle at times, adding to maintenance and operating expenses (OPEX) that on average consumed over two-thirds of an organization’s technology budget, and it could not easily be incorporated into a virtualization strategy that always led with a common management tool.

A converged infrastructure addresses the problem of siloed architectures and IT sprawl by enabling the pooling and sharing of IT resources. Instead of acquiring and dedicating a set of separate resources to a particular computing technology, application or line of business, a converged infrastructure offering delivers a pool of virtualized server, storage and networking capacity that is shared by multiple applications and lines of business. Incorporating CI systems delivered modern technical and business efficiencies by pre-integrating these once-siloed technology components at the factory into an offering that can be stood up and made available to users quickly. This became the model for on-premise delivery of new hardware technology, given the success cloud computing enjoyed with its instant-gratification model and the resulting desire for IT to mimic that model by offering new pools of resources at the speed and pace of new business needs and users.

To get this out of the way: cloud computing is a general term for the delivery of hosted services over the internet. When operating 'in the cloud,' a company can consume a compute resource, like a virtual machine along with required storage and applications, as a utility - like electricity, for example - without having to invest in computing infrastructure in house or on-premise. The pay-as-you-use, or pay-as-you-grow, model for IT as a service - the purest function of cloud computing - is what drives the interest in and influence of Hyperconvergence today.
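To make the utility model concrete, here is a toy Python sketch contrasting pay-as-you-use pricing with a fixed, amortized hardware purchase. Every rate and figure below is invented purely for illustration; it is not real vendor pricing, just the shape of the two models:

    # Toy illustration of the "pay as you use" utility model versus the
    # traditional CAPEX model. All numbers are made up for illustration.

    HOURLY_VM_RATE = 0.12        # hypothetical $/hour for one virtual machine
    HOURS_USED_PER_MONTH = 200   # you only pay for hours actually consumed

    def monthly_cloud_cost(vms: int) -> float:
        """Utility model: cost scales up and down with actual consumption."""
        return vms * HOURLY_VM_RATE * HOURS_USED_PER_MONTH

    def monthly_on_prem_cost(servers: int, server_price: float = 8000.0,
                             amortize_months: int = 36) -> float:
        """CAPEX model: the hardware is paid for whether it is busy or idle."""
        return servers * server_price / amortize_months

    print(monthly_cloud_cost(10))    # 240.0   -> tracks usage
    print(monthly_on_prem_cost(10))  # ~2222.2 -> fixed, regardless of utilization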

(Image credit: vmwareinsight.com)

VCE, formed by Cisco and EMC with investments from VMware and Intel, offers a family of converged infrastructure platforms called Vblock. By pre-integrating Cisco blade servers and networking with EMC storage and VMware server and desktop virtualization, Vblock Systems are purpose-built to support enterprise cloud computing and the delivery of IT-as-a-Service.

The promise of a Vblock system is the true essence of CI: faster completion of IT modernization projects, with reduced uncertainty and risk.

Some of the benefits include:

  • Bottom-to-top support for cloud computing and managed IT services delivery, including native support for server and storage virtualization, secure multi-tenancy, and service automation.

  • Simplified stack management and operations, freeing IT staff to focus on innovation and strategic initiatives.

  • Cost savings following from footprint consolidation, higher utilization rates for physical resources, and reduced management time.

  • Unified, accountable technical support and professional services for the entire infrastructure stack from VCE.

To summarize: a dramatic reduction in IT complexity, through the use of pre-integrated hardware with a common set of virtualization and automation management tools, is an important value proposition for converged infrastructure, along with the “cloud ready” nature of a system that combines server, storage and network into a single framework capable of handling the enormous data sets that cloud computing can require.

Enter Hyperconvergence; but first, the network.

Since the dot-com bust, and the subsequent financial collapse in 2008, corporate IT budgets shrank, and it became harder over time for technology vendors to convince customers to buy anything marketed as ‘differentiated’ or ‘well engineered,’ as the preference slowly shifted to software-defined hardware in a commodity wrapper. This approach began at the network, and Cisco was a leader in offering network administrators the ability to control network traffic through policy-enabled workflow automation.

Say What?

The basis of a software-defined network (SDN) is virtualization, which made cloud computing possible and now allows data centers to dynamically provision IT resources exactly where and when they are needed - on the fly. To keep up with the speed and complexity of all this split-second processing, the network must also adapt, becoming more flexible and automatically responsive. We can apply the idea of virtualization to the network as well, separating the function of traffic control from the network hardware; the result is SDN.

The explosion of mobile devices and content, server virtualization, and the advent of cloud services were among the trends driving the networking industry to re-examine traditional network architectures, since legacy networks had serious limitations and their older methods no longer worked in the race to the cloud. As virtualization, cloud, and mobility created more complex environments, networks had to adapt in terms of security, scalability, and manageability. Most enterprise networks, however, rely on siloed boxes and appliances requiring a great deal of manual administration through a native user interface, a key contributor to the condition we already talked about – IT sprawl and complexity. Changing or expanding these networks for new capabilities, applications, or users requires reconfiguration that is time consuming and expensive. In addition, each of these network devices brought its own management-tool learning curve, again at odds with OPEX objectives.

Software-defined networks take a lesson from server virtualization and introduce an abstraction layer that separates network intelligence and configuration from physical connections and hardware. In this way, SDN offers programmatic control over both physical and virtual network devices, which can dynamically respond to changing network conditions. SDN, like CI, is an IT trend focused on reducing complexity and administrative overhead while enabling innovation and substantially increasing return on investment.
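As a rough illustration of what 'programmatic control' looks like in practice, the Python sketch below pushes a traffic-steering rule to an SDN controller's northbound REST API. The controller URL and the JSON schema are hypothetical placeholders, not any particular vendor's API:

    # Minimal sketch: installing a flow rule through an SDN controller's
    # REST API. The endpoint and payload format below are assumptions made
    # for illustration, not a real product's interface.
    import requests

    CONTROLLER = "https://sdn-controller.example.com/api/v1"  # hypothetical

    flow_rule = {
        "name": "prioritize-voip",
        "match": {"protocol": "udp", "dst_port": 5060},  # SIP signaling traffic
        "action": {"set_queue": "high-priority"},
        "priority": 100,
    }

    # One API call against the abstraction layer replaces logging in to each
    # switch's native interface and reconfiguring it by hand.
    resp = requests.post(f"{CONTROLLER}/flows", json=flow_rule, timeout=10)
    resp.raise_for_status()
    print("Flow rule installed:", resp.json().get("id"))

The point of the sketch is the shape of the workflow: policy is expressed once, in software, and the controller applies it across the devices underneath.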

(Image credit: Shahid Javed, LinkedIn)

Second, storage.

Software-defined storage (SDS) places the emphasis on storage-related services rather than storage hardware. Like SDN, the goal is to provide administrators with flexible management capabilities through programming. Storage for virtualization became a priority because desktop and server virtualization adoption grew rapidly. The aim for IT was to merge multiple vendor systems, likely siloed, into a single manageable pool of storage, with requests for more capacity delivered through automated, policy-based settings, just as with SDN.

In a traditional, pre-SDS world, a new high-priority user shows up on the scene. The data center is virtualized, so it’s easy enough for IT to provision a new server for their use, but not always as easy to provision new storage, depending on what the user’s application requires. In a worst-case scenario, it may take a SAN administrator a sizable chunk of time to go through multiple steps in a specific order to create a new LUN to support the new user’s workloads. In an SDS environment, this is automated rather than left as the undesirable task of doing the work manually.
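Here is a minimal sketch of that policy-driven workflow. The StoragePolicy class, the provision() function, and the step names are all invented for illustration; the idea is simply that the administrator states requirements and the storage layer carries out the ordered steps a SAN administrator once performed by hand:

    # Sketch of SDS-style, policy-based provisioning. Everything here is a
    # hypothetical stand-in for a real software-defined storage API.
    from dataclasses import dataclass

    @dataclass
    class StoragePolicy:
        capacity_gb: int
        tier: str      # e.g. "gold" = flash-backed, "bronze" = capacity disk
        replicas: int  # a resiliency requirement, not a device name

    def provision(policy: StoragePolicy) -> str:
        """Carry out, automatically and in order, the steps that were once
        a manual SAN-administration task."""
        steps = [
            f"select pool matching tier '{policy.tier}'",
            f"create {policy.capacity_gb} GB volume",
            f"configure {policy.replicas} replicas",
            "mask volume to requesting host",
        ]
        for step in steps:
            print("automated:", step)
        return "lun-0042"  # identifier of the newly provisioned volume

    # New high-priority user shows up: one declarative request, no ticket queue.
    lun_id = provision(StoragePolicy(capacity_gb=500, tier="gold", replicas=2))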

A software-defined data center has a path leading to automated, policy-based management and provisioning, and that path is being paved by commodity hardware – the kiss of death for hardware adoption as IT vendors once knew it. This is because the simplicity that a software-defined approach offers completely trumps the value of the underlying hardware. Sure, the underlying hardware must still be capable, high performing and resilient, the latter more so than ever, but this trend - the precursor to Hyperconverged Infrastructure offerings – is intent on accomplishing its goal with a commoditized set of hardware building blocks to maximize its ROI. Commodity hardware is, number one, inexpensive and, number two, interchangeable with other hardware of its type.

Converged infrastructure as a trend delivered on its promise and was then bolstered as IT ramped from there to software-defined networking and storage, further decoupling the hardware from the software, with advanced capabilities taking advantage of the core benefits of virtualization. As the virtualized data center continually proved itself through greatly reduced OPEX and substantially lowered CAPEX, and with the promise of cloud computing coming true in a big way, the stage was set to again do more with less, and more importantly to do so without tossing aside the initial investment in all that integrated hardware and sophisticated, siloed storage and networking.

As far as IT vendors were concerned, the question was: who would offer this first, and who would do it best?

 

