Integra Networks, a Plurilock Company


Modern data centers are becoming more and more complex, and there are several reasons for this trend. First, behind every technical enhancement there is a business driver, and lately the paramount business requirement has been to shrink the so-called “time to market” as much as possible.

People have become used to the quick deployment times provided by virtualization, which reduced provisioning times from weeks or months to days or hours. This has led to the more general expectation that any new workload should be available immediately upon request. Cloud-like technologies have pushed this concept further by adding elements such as “infinite scalability” and “self-service” to the data center.

Infinite scalability is more a perception than a reality. Beneath the ever-growing layers of abstraction there is always a physical infrastructure, and regardless of the provisioning enhancements that the latest technologies allow (hyperconvergence, for example), no system can scale infinitely, let alone on short notice.

What the concept really means is that, from the user’s perspective, a “cloud-like” platform appears to scale without limit, regardless of the amount of resources requested.

The second concept is “self-service.” Once there is an “infinitely scalable” platform to consume, users should not be forced through a strict provisioning process. In the past, such processes let infrastructure managers guarantee that defined standards were respected and keep tight control over finite resources, ensuring that no additional request could exhaust them. But if the underlying platform can scale without (apparent) limits, users can safely be entitled to consume additional resources without asking for them in advance.
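As a rough illustration of how self-service can coexist with finite capacity, the Python sketch below grants requests automatically while a soft headroom threshold holds and falls back to the classic manual-approval path once it is crossed. Every name and number here (CLUSTER_VCPUS, SOFT_LIMIT, the request sizes) is hypothetical, invented purely for this example.

```python
from dataclasses import dataclass

# Hypothetical figures for illustration only: total cluster capacity and a
# soft threshold above which self-service requests are deferred to a human.
CLUSTER_VCPUS = 2048
SOFT_LIMIT = 0.80  # keep 20% headroom so the "apparent" infinity never runs dry

@dataclass
class Request:
    requester: str
    vcpus: int

def handle_self_service(req: Request, vcpus_in_use: int) -> str:
    """Grant a request immediately while headroom remains; otherwise
    fall back to the traditional manual-approval process."""
    projected = (vcpus_in_use + req.vcpus) / CLUSTER_VCPUS
    if projected <= SOFT_LIMIT:
        return f"granted: {req.vcpus} vCPUs to {req.requester}"
    return f"deferred: request from {req.requester} needs capacity review"

print(handle_self_service(Request("team-a", 64), vcpus_in_use=1500))   # granted
print(handle_self_service(Request("team-b", 256), vcpus_in_use=1500))  # deferred
```

The point of the soft limit is that users keep the perception of instant, unlimited provisioning while administrators retain a buffer in which to add physical capacity.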

Still, some form of governance needs to be implemented. Workloads should be configured as defined by IT administrators, regardless of who deploys them. Antivirus, monitoring, backup, patching: every new workload needs the same characteristics as the existing ones.

But the complexity and growth of these data centers have made manual control of these parameters practically impossible, or at least highly inefficient and error-prone. With thousands of virtual machines, it is almost certain that some will be skipped during a patching cycle, never added to the monitoring platform, or never backed up.
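A minimal sketch of what such a control could look like: define the baseline of required services once, then check it automatically across the whole fleet, so no machine slips through a manual checklist. The service names and VM records below are invented for illustration and are not tied to any particular product.

```python
from dataclasses import dataclass, field

# The baseline every workload must carry, per the IT-defined standard.
# These names are illustrative placeholders.
REQUIRED_SERVICES = {"antivirus", "monitoring", "backup", "patching"}

@dataclass
class VirtualMachine:
    name: str
    services: set = field(default_factory=set)

def missing_services(vm: VirtualMachine) -> set:
    """Return the baseline services this VM is lacking."""
    return REQUIRED_SERVICES - vm.services

def scan_fleet(fleet: list) -> dict:
    """Flag every non-compliant VM. At thousands of machines, this loop
    replaces a manual checklist that is guaranteed to miss some of them."""
    return {vm.name: gaps for vm in fleet if (gaps := missing_services(vm))}

fleet = [
    VirtualMachine("web-01", {"antivirus", "monitoring", "backup", "patching"}),
    VirtualMachine("db-07", {"antivirus", "monitoring"}),  # never backed up or patched
]
print(scan_fleet(fleet))  # e.g. {'db-07': {'backup', 'patching'}}
```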

This scenario has led infrastructure administrators to introduce control and management mechanisms into their data centers to cope with this new way of consuming resources.

First, better and more effective monitoring and capacity planning. Offering infinite scalability and self-service on the frontend means the backend must be carefully designed, planned from the start to scale and so avoid disruptive forklift upgrades, and, above all, monitored so that administrators can spot trends in resource consumption early and decide on the acquisition and deployment of additional resources in time.
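To make the capacity-planning side concrete, here is one possible approach: fit a simple linear trend to historical utilization samples and estimate how many weeks remain before a threshold is crossed. The weekly figures are fabricated, and a real deployment would pull data from a monitoring backend and use more robust forecasting than a straight line.

```python
# A minimal capacity-planning sketch: fit a least-squares line to weekly
# utilization samples and estimate when the cluster crosses a threshold.

def fit_trend(samples: list) -> tuple:
    """Least-squares line through (week_index, utilization) points."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

def weeks_until(threshold: float, samples: list) -> float:
    """Weeks from the latest sample until the trend hits the threshold."""
    slope, intercept = fit_trend(samples)
    if slope <= 0:
        return float("inf")  # flat or shrinking usage: no exhaustion in sight
    return (threshold - intercept) / slope - (len(samples) - 1)

utilization = [0.52, 0.55, 0.57, 0.61, 0.63, 0.66]  # fraction of capacity, per week
print(f"~{weeks_until(0.85, utilization):.1f} weeks to 85% utilization")
```

Even a crude projection like this gives administrators the early warning they need to order and deploy hardware before the “infinite” frontend runs into the finite backend.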

We’ll examine automation and policy-based management further in Part II of this two-part series.