Optimizing Multicloud Through Virtualization


November 08, 2018
By Special Guest
Michael Bushong, Vice President, Juniper Networks

In many ways, the mass migration of enterprises towards cloud and multicloud architectures represents the next wave in the industrialization of IT. 

The industrial revolution began as businesses codified their means of producing goods, allowing for the acceleration of highly repeatable processes. The ability to repeat key manufacturing tasks with high fidelity meant that processes could be tuned and optimized, leading to breakthrough output that ultimately fueled a period of tremendous growth.

As cloud and multicloud take root, this dynamic is poised to revolutionize IT in the same way that it did manufacturing.

IT as an Industrial Engine

IT has traditionally played a supporting role in the business. So long as products and services were aided by IT, but not defined by it, the best IT could do was support the business as efficiently as possible.

But in modern enterprises, the applications do more than support the business—they are the business. This means that IT is no longer a supporting function. IT services are required to deliver modern products and services, shifting IT from an enabler to a critical part of the supply chain.

In any enterprise, the supply chain serves as the industrial engine. Accordingly, IT services will naturally be subject to the same rigors as any other element. Production must remain uninterrupted, costs must be reliably contained, and quality can never falter. These constraints only tighten as products move through a natural maturation cycle toward commodity offerings.

Repeatability

If reliability is the measure, then repeatability is the mechanism. When viewed through a lens of repeatability, most current IT practices must undergo transformative change.

The price of reliability and repeatability is simplicity. But simplicity is at odds with the common practice of providing bespoke IT services to meet varying needs across the enterprise. This leads to the somewhat uncomfortable conclusion that IT has to change not just the services it offers, but also the means by which those services are provided.

Both technology and practice play a role in reimagining how IT provides services. The practice, while difficult to execute, is fairly straightforward to understand. The standardization of tools and the streamlining of processes are key to any mass effort to simplify. Discipline goes a long way for most enterprises, though the draw of expediency can often render such efforts fruitless.

But technology plays a role, too. And the successful application of specific technologies will likely have an outsized impact on the efforts of the modern enterprise. 

Virtualization as a Lever for Optimization

Virtualization is typically characterized as a means of cost reduction. While it’s true that the virtualization of servers has led to a massive shift in the economics of compute, it has also done much more for infrastructure optimization.  

Virtualization created a separation of concerns that allowed the infrastructure and its applications to be decoupled in such a way that each could be optimized semi-independently. To unlock these optimizations, a class of middleware was created to facilitate lifecycle management, and with this tooling in place, two teams that had previously been tightly bound were freed to innovate on their own. 
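To make that decoupling concrete, consider a minimal sketch in Python. The names here (HypervisorDriver, KvmDriver, provision_app) are hypothetical, not any vendor's API; the point is only that application-facing code depends on a stable lifecycle interface, so the infrastructure team can change what sits beneath it without forcing the application team to change anything.

```python
from abc import ABC, abstractmethod


class HypervisorDriver(ABC):
    """Middleware-style lifecycle interface shared by both teams."""

    @abstractmethod
    def create_vm(self, name: str, cpus: int, memory_gb: int) -> str:
        ...

    @abstractmethod
    def destroy_vm(self, vm_id: str) -> None:
        ...


class KvmDriver(HypervisorDriver):
    """One possible implementation; a real driver would call the hypervisor."""

    def create_vm(self, name: str, cpus: int, memory_gb: int) -> str:
        return f"kvm-{name}-{cpus}c{memory_gb}g"  # simulated identifier

    def destroy_vm(self, vm_id: str) -> None:
        print(f"tearing down {vm_id}")


def provision_app(driver: HypervisorDriver) -> str:
    # Application-side code: unchanged no matter which driver is supplied.
    return driver.create_vm("billing-app", cpus=4, memory_gb=16)


if __name__ == "__main__":
    print(provision_app(KvmDriver()))
```

Swapping KvmDriver for another implementation leaves provision_app untouched, and that independence is what allows each side of the interface to be optimized on its own.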

  

Virtualization in the Context of Cloud

If the key to supply chain optimization is the architectural decoupling of components, is the current move to cloud applying the best practices learned through previous evolutions?

In some ways, the answer is a resounding yes. The move to cloud is about tapping into pools of logically centralized but physically distributed resources. But if those resources are accessed via cloud primitives that are tightly bound to specific cloud instances, will there not need to be some future architectural change?

Put differently, the same principles of virtualization ought to be applied to cloud and multicloud. There will need to be overlays that provide abstractions useful in deploying workloads across different clouds. The goal of these abstractions will be to drive reliability, allowing for service uniformity regardless of the underlying infrastructure. Only if services can be repeatedly and reliably deployed can IT be optimized as part of the enterprise supply chain.
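As a rough illustration of what such an overlay might look like, here is a sketch built around a hypothetical cloud-neutral workload description and per-cloud adapters. None of the names below correspond to a real provider SDK; they only show where the uniformity comes from.

```python
from dataclasses import dataclass
from typing import List, Protocol


@dataclass
class WorkloadSpec:
    """Cloud-neutral description of the service; nothing here names a provider."""
    name: str
    image: str
    replicas: int


class CloudAdapter(Protocol):
    def deploy(self, spec: WorkloadSpec) -> str:
        ...


class CloudAAdapter:
    def deploy(self, spec: WorkloadSpec) -> str:
        # A real adapter would translate the spec into provider-specific API calls.
        return f"cloud-a://{spec.name} ({spec.replicas} replicas)"


class CloudBAdapter:
    def deploy(self, spec: WorkloadSpec) -> str:
        return f"cloud-b://{spec.name} ({spec.replicas} replicas)"


def roll_out(spec: WorkloadSpec, clouds: List[CloudAdapter]) -> List[str]:
    # One spec, deployed uniformly; only the adapters know about the clouds.
    return [cloud.deploy(spec) for cloud in clouds]


if __name__ == "__main__":
    spec = WorkloadSpec(name="checkout", image="registry.example/checkout:1.4", replicas=3)
    for endpoint in roll_out(spec, [CloudAAdapter(), CloudBAdapter()]):
        print(endpoint)
```

The repeatability comes from the fact that the WorkloadSpec never changes from cloud to cloud; if it had to, the abstraction would be leaking.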

What to Watch For

Platforms like Kubernetes are already widely accepted as a means of providing workload lifecycle management across clouds. But there must be similar abstractions across both storage and networking.

In the storage space, companies are already building multicloud data warehouses that allow data to be accessible across different cloud providers. These offerings are, themselves, an abstraction layer that sits between the enterprise and the cloud. In the networking space, multicloud solutions are just beginning to emerge. The key is an abstraction that separates policy and control from the underlying virtual devices required to provide connectivity to and within a cloud.
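For networking, the same idea can be sketched as a policy object that carries only intent, rendered into whatever construct each cloud uses for enforcement. This is purely illustrative; the output fields are simplified placeholders, not any provider's actual schema.

```python
from dataclasses import dataclass


@dataclass
class ConnectivityPolicy:
    """Intent only: which tier may reach which, and on what port."""
    source_tier: str
    dest_tier: str
    port: int


def render_for_cloud_a(policy: ConnectivityPolicy) -> dict:
    # Simplified stand-in for one provider's firewall-rule format.
    return {
        "rule_name": f"allow-{policy.source_tier}-to-{policy.dest_tier}",
        "source_tag": policy.source_tier,
        "destination_tag": policy.dest_tier,
        "port": policy.port,
        "action": "allow",
    }


def render_for_cloud_b(policy: ConnectivityPolicy) -> dict:
    # The same intent expressed in a different (also simplified) idiom.
    return {
        "name": f"{policy.dest_tier}-ingress",
        "from": policy.source_tier,
        "portRange": f"{policy.port}-{policy.port}",
        "access": "Allow",
    }


if __name__ == "__main__":
    policy = ConnectivityPolicy(source_tier="web", dest_tier="db", port=5432)
    print(render_for_cloud_a(policy))
    print(render_for_cloud_b(policy))
```

Operators define the ConnectivityPolicy once; the renderers absorb the per-cloud differences, which is the decoupling that the operational test described below is meant to verify.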

Ultimately, operations will be the surest test of whether architectural decoupling has been achieved. If an operational model—for compute, storage, or networking—must change depending on the underlying cloud, then the work to separate is not complete. Without that separation, continued optimization of the IT services supply chain will be unnecessarily impeded.


Edited by Maurice Nagle