Virtualization Featured Article


Public Cloud: The Phases of Your Journey


September 28, 2018

To design an efficient cloud-journey plan, you must consider how your technology model will transform as the journey evolves. It is worth remembering that the cloud emphasizes high availability, reliability, and automation; minimal infrastructure, lean teams, flexibility, and fast access to cutting-edge technology are among the advantages it offers. So how does a company know whether it should start this journey?

To support companies in this decision, this article presents the four phases of a typical journey toward building a truly sustainable information technology infrastructure in the public cloud.

Phase 1: Replicating Information Technology in the Cloud

Initial cloud deployments focus on replicating existing on-premises infrastructure on a cloud platform. Understandably, many think this is also the end of the process: the result is an extremely familiar platform, understood inside and out, that also solves one of IT's most pressing problems: capacity constraints.

Companies can spend years in this first phase, simply expanding outward as demand for processing and storage increases. And because everything is so familiar, the existing DevOps model lets teams carry on as usual while maintaining the level of productivity and efficiency they have always enjoyed.

But a business that stays at this early stage of its cloud strategy may find that early successes are not carried forward into the future.

Phase 2: Rebuilding and Automating

Phase 2 usually begins only when the company's financial management starts to ask difficult questions. Although the organization has reduced capital spending on hardware, cloud costs continue to spiral. The problem is that unlimited cloud scalability can create unforeseen problems for developers and systems engineers with no cloud experience. Developers can use as many cloud provider services as they wish, but every one of them is billed, which means that many resources end up allocated inefficiently or unnecessarily.

The IT department begins reengineering systems to align them with the various rules by which public cloud costs are calculated. For example, by creating users and groups, companies can control who is allowed to request public cloud resources. Almost immediately, systems become more efficient in terms of both operations and cost.
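As a rough illustration of this kind of control, the sketch below uses the AWS IAM API via boto3 to create a group with a restricted policy and add a user to it. The group name, user name, and instance-type limits are hypothetical, and other public clouds offer equivalent identity and access controls.

# Minimal sketch: restrict who may launch compute resources using AWS IAM (boto3).
# The group, user, and policy shown here are illustrative, not prescriptive.
import json
import boto3

iam = boto3.client("iam")

# Create a group for engineers who are allowed to request EC2 capacity.
iam.create_group(GroupName="cloud-requesters")

# Attach an inline policy that only permits launching small instance types.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["ec2:RunInstances"],
        "Resource": "*",
        "Condition": {"StringEquals": {"ec2:InstanceType": ["t3.micro", "t3.small"]}},
    }],
}
iam.put_group_policy(
    GroupName="cloud-requesters",
    PolicyName="limit-instance-types",
    PolicyDocument=json.dumps(policy),
)

# Only users added to the group can request new instances under this policy.
iam.create_user(UserName="dev-alice")
iam.add_user_to_group(GroupName="cloud-requesters", UserName="dev-alice")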

As their experience with cloud technologies deepens, developers also increase the level of automation applied to their hosted systems. In a return to batch-processing principles, for example, virtual machines run batch workloads during off-hours to reduce operating costs. Scheduled auto scaling, likewise, lets developers grow and shrink capacity on a predefined schedule, ensuring resources are available for predictable demand and released outside those hours so that spend stays aligned with use.
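One possible form this takes is shown in the sketch below, which uses AWS EC2 Auto Scaling scheduled actions via boto3 to scale a group up for business hours and back down at night. The group name, sizes, and cron expressions are hypothetical; other providers expose similar scheduling features.

# Minimal sketch: scale an Auto Scaling group on a schedule (AWS, via boto3).
# Group name, capacity figures, and recurrence expressions are illustrative only.
import boto3

autoscaling = boto3.client("autoscaling")

# Scale up ahead of predictable weekday demand (08:00 UTC, Monday-Friday).
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-tier",
    ScheduledActionName="scale-up-business-hours",
    Recurrence="0 8 * * 1-5",
    MinSize=4,
    MaxSize=12,
    DesiredCapacity=6,
)

# Release capacity outside business hours (20:00 UTC) so spend tracks use.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-tier",
    ScheduledActionName="scale-down-off-hours",
    Recurrence="0 20 * * 1-5",
    MinSize=1,
    MaxSize=2,
    DesiredCapacity=1,
)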

With systems redesigned for the cloud and automation in place to streamline operations, the company's financial manager will be much happier as the accounts come back under control. Spending may still increase, but this rationalization ensures that costs are better contained and fully justified.

Phase 3: Containerization

Although costs are now under control, the hosted infrastructure is still relatively resource-intensive to manage. At the end of Phase 2, applications still reside in virtual machines running on a hypervisor, which itself sits on top of the cloud provider's infrastructure.

Many of these virtualized systems will be defined once and then forgotten, which minimizes the configuration work at deployment time. The reality, however, is that large-scale configuration changes will eventually be needed, and the information technology team will have to do that work to ensure systems continue to function as expected.

With applications, binaries and libraries, a guest operating system, and a hypervisor all stacked on top of the host operating system, there are several layers to manage. This is where Phase 3 begins, with the goal of simplifying the application architecture and reducing both management overhead and cloud resource consumption.

By using a container engine installed directly on top of an operating system, developers can reengineer applications without the need for a hypervisor or virtual machine. The code is compiled and run in a container, containing nothing more than the dependencies required by the application (a brief sketch follows the list below). This offers several advantages:

• A containerized application is lighter, reducing demands on server resources and running costs;

• The application becomes more portable, allowing it to be redistributed to another cloud service with minimal effort;

• Developers can focus entirely on building an application or service, without worrying about the operating system or other secondary factors that complicate the process;

• With fewer layers to manage, administrators and engineers are freed up to focus on other strategic projects.
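As a rough sketch of this approach, the example below uses the Docker SDK for Python to build and run an application image that bundles only the application and its dependencies. The image tag, build path, and port mapping are hypothetical, and the same flow applies to other container engines.

# Minimal sketch: build and run an application as a container instead of a VM,
# using the Docker SDK for Python. Image tag and build path are illustrative.
import docker

client = docker.from_env()

# Build an image from a Dockerfile that includes only the app and its dependencies
# (no hypervisor and no full guest operating system to patch and manage).
image, _build_logs = client.images.build(path=".", tag="orders-service:1.0")

# Run the container; the host OS kernel is shared, so startup is fast and the
# footprint is far smaller than that of a dedicated virtual machine.
container = client.containers.run(
    "orders-service:1.0",
    detach=True,
    ports={"8080/tcp": 8080},
)
print(container.status)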

As a bonus, the reduced demand on server resources drives costs down further, which means even less friction with the company's financial management.


Phase 4: Serverless Computing

By the end of Phase 3, the computing estate has been substantially simplified, application footprints and operating costs have shrunk, and code portability has improved. Again, the temptation is to assume that the cloud migration is complete, but there are still improvements to be made.

At this point, applications are packaged in stand-alone containers, but the platform on which they run still consumes server resources within the cloud vendor's data center. In many cases, developers are packing what amounts to a full operating system image inside their containers. This is not necessarily a bad thing, but the applications could be leaner still, which in turn reduces development effort and cost.

More importantly, by keeping an operating system inside the container, the environment retains much of the same administrative overhead. Early efforts may achieve higher density than virtual machines, but bundling the operating system into each container does little to reduce maintenance complexity.

The final stage of the cloud migration is to develop and deploy "serverless" applications. As the name implies, these applications are designed to reduce reliance on dedicated servers as much as possible. Rather than shipping full virtual machines or self-contained application stacks, serverless applications are built from managed services and hosted runtimes provided by the public cloud platform.

Simply put, serverless applications assemble the building blocks provided by various managed online services. Essentially, Phase 4 applications take advantage of "Function-as-a-Service" offerings to reduce costs and overhead.
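As a hedged illustration, the sketch below shows a minimal Function-as-a-Service handler written for the AWS Lambda Python runtime; the event fields and business logic are hypothetical, and equivalent offerings exist on other public clouds. There is no server, container image, or operating system to manage: the provider runs the function on demand and bills per invocation.

# Minimal sketch of a Function-as-a-Service handler (AWS Lambda, Python runtime).
# The event shape and the business logic are illustrative only.
import json


def handler(event, context):
    """Triggered per request (e.g. via an API gateway); nothing to provision or patch."""
    order_id = event.get("order_id", "unknown")

    # In a real Phase 4 application this would call other managed services
    # (queues, databases, object storage) rather than local infrastructure.
    return {
        "statusCode": 200,
        "body": json.dumps({"order_id": order_id, "status": "accepted"}),
    }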

These applications remain effectively infinitely scalable, drawing on provider resources when needed while consuming minimal dedicated resources. Composing services in this way further increases development speed and reduces the maintenance burden, since updates to the underlying services are handled by the provider.

Time to Evaluate Your Progress

With the cloud journey mapped out, it is much easier to assess how far your company has come. Many late adopters will still be around the first phase, which means they have some way to go. Most companies have reached Phase 2, however, and are investigating how to make better use of cloud infrastructure to improve their operations.

Cloud computing is a fundamental shift in corporate information technology, and many organizations are still learning to make the most of the benefits the public cloud provides. The shortage of skills is aggravated by the accelerated pace of cloud platform development, which is why companies struggle to reach their goals on the journey to the cloud, especially when a true cloud strategy continues to be misunderstood.




Edited by Maurice Nagle













