Guide to virtualization adoption – Part 5

In the last part of this series we completed the final phase of our move towards a virtual datacenter.

At this point we have already performed exhaustive capacity planning based on existing and expected workloads, decided which hardware equipment to adopt, and moved part or all of the physical machine population into virtual machines.

Now our efforts turn to the complex challenges of virtual infrastructure management, including the mandatory control of physical and virtual resource availability, usage and access; the deployment of disaster recovery solutions; the provisioning of new virtual machines and the automation of other tasks; and the necessary monitoring and reporting of datacenter usage.

Since modern virtualization is still a very young market, we will discover how hard these tasks are to achieve, with immature tools, missing solutions, and a particular void in the discipline of performance analysis and troubleshooting.

Challenges of liquid computing

The most fundamental task of every IT manager, virtualization adopter or not, is the management of existing resources.

Tracking physical machine usage, operating system and product license availability, and service reachability helps managers understand whether purchased assets satisfy demand, and react quickly when a fault occurs.

This work, which can be very time-consuming even in small environments, becomes more complex when working with virtual infrastructures.

IT managers now also have to worry about a new class of problems, such as efficient and controlled virtual machine deployment, rational assignment of physical resources, and in some cases even accountability.

The ease of creating new virtual machines, combined with their independence from the underlying hardware, leads to the idea of liquid computing, where it is hard to know exactly what is running where.

This property increases the risk of so-called VM sprawl, a problem we have faced over the last five years with traditional computing, but now with a much faster expansion rate.

To avoid it, virtualization management tools should provide a reliable security system, where permissions limit operators' ability to create new machines, and a strong monitoring system, reporting on allocated but unused resources.

Today only the first of these is implemented in most virtualization platforms, usually by integrating virtual infrastructure access with centralized LDAP accounting systems, while administrators are still in deep trouble when they need to compute the efficiency of their virtual datacenters.
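The kind of report administrators are missing can be sketched in a few lines. The following is a minimal illustration, not a vendor tool: the inventory data, field names and thresholds are all assumptions, standing in for whatever a real monitoring system would collect.

```python
# Hypothetical inventory snapshot: allocated vs. actually used resources
# per virtual machine. All names and numbers are illustrative only.
vm_inventory = [
    {"name": "web-01",  "vcpus": 4, "avg_cpu_pct": 65, "ram_gb": 8,  "avg_ram_pct": 70},
    {"name": "test-07", "vcpus": 8, "avg_cpu_pct": 3,  "ram_gb": 16, "avg_ram_pct": 5},
    {"name": "db-02",   "vcpus": 4, "avg_cpu_pct": 40, "ram_gb": 32, "avg_ram_pct": 55},
]

def sprawl_report(vms, cpu_threshold=10, ram_threshold=10):
    """Flag VMs whose average CPU and RAM usage both sit below the
    given thresholds: likely allocated-but-unused capacity."""
    return [vm["name"] for vm in vms
            if vm["avg_cpu_pct"] < cpu_threshold
            and vm["avg_ram_pct"] < ram_threshold]

print(sprawl_report(vm_inventory))  # ['test-07']
```

Even a crude report like this, run weekly against real inventory data, makes allocated-but-idle capacity visible before sprawl gets out of hand.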

Going further, once a new virtual machine has been created, the virtual infrastructure manager faces the new problem of deciding where it should be hosted.

As we already saw during the capacity planning phase, virtual workloads should be deployed carefully, considering which already-deployed workloads could be complementary, to avoid overloading resources.

Here management tools should help, assisting placement after new virtual machines are created.

The upcoming Virtual Machine Manager from Microsoft, for example, will offer a rating system for available physical machines, assigning one or more stars to each of them so that administrators immediately know where a new virtual machine fits best.

This scoring system will adapt to the evolving infrastructure, even if sysadmins decide not to follow its previous suggestions, so that at any moment it provides the best advice.
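To make the idea concrete, here is a toy version of such a rating, under assumptions of our own: it scores each host by the scarcer of its free CPU and free RAM after the candidate VM is placed. This is not Microsoft's actual algorithm, just a sketch of the general technique; host names and capacities are invented.

```python
def star_rating(host, vm_cpu_ghz, vm_ram_gb, max_stars=5):
    """Rate a host 0..max_stars for a candidate VM, based on the scarcer
    of its remaining free CPU and free RAM after placement."""
    free_cpu = host["cpu_ghz"] - host["cpu_used_ghz"] - vm_cpu_ghz
    free_ram = host["ram_gb"] - host["ram_used_gb"] - vm_ram_gb
    if free_cpu < 0 or free_ram < 0:
        return 0  # the VM does not fit on this host at all
    headroom = min(free_cpu / host["cpu_ghz"], free_ram / host["ram_gb"])
    return round(headroom * max_stars)

hosts = [
    {"name": "esx-01", "cpu_ghz": 16, "cpu_used_ghz": 14, "ram_gb": 32, "ram_used_gb": 28},
    {"name": "esx-02", "cpu_ghz": 16, "cpu_used_ghz": 4,  "ram_gb": 32, "ram_used_gb": 8},
]

# Pick the highest-rated host for a 2 GHz / 4 GB virtual machine.
best = max(hosts, key=lambda h: star_rating(h, vm_cpu_ghz=2, vm_ram_gb=4))
print(best["name"])  # esx-02
```

The value of the star metaphor is exactly this reduction: many capacity dimensions collapsed into one number an administrator can compare at a glance.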

But even with such a system, in some environments virtual machine creation may still not be easy enough. For example, big ISPs remodelling their hosting offerings around virtualization need smart tools to deploy hundreds or thousands of virtual machines on demand, in seconds.

At the moment few third-party products can fill all the virtualization management holes, and many companies prefer to develop in-house solutions instead of spending big money for little flexibility.

In such complex scenarios virtualization management solutions have to offer software development kits (SDKs) allowing wide customization and different degrees of automation.

A wide-open programmable interface and strong support are key selling points here, and so far VMware has done a pretty good job compared to its competitors.
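The bulk-provisioning scripts such SDKs enable tend to share one shape: a loop cloning a template. The sketch below shows that shape only; `InfrastructureClient` and its `clone_from_template` method are hypothetical stand-ins for a vendor SDK, stubbed out here so the example stays self-contained.

```python
class InfrastructureClient:
    """Hypothetical stand-in for a vendor SDK client. A real one would
    talk to the management server; this stub just records requests."""
    def __init__(self):
        self.deployed = []

    def clone_from_template(self, template, name):
        self.deployed.append((template, name))
        return name

def bulk_deploy(client, template, count, prefix="vm"):
    """Clone `count` VMs from one template: the kind of loop a large
    hosting provider would run to provision capacity on demand."""
    return [client.clone_from_template(template, f"{prefix}-{i:04d}")
            for i in range(count)]

client = InfrastructureClient()
names = bulk_deploy(client, template="rhel4-web", count=3, prefix="hosting")
print(names)  # ['hosting-0000', 'hosting-0001', 'hosting-0002']
```

The point is that once the platform exposes a programmable interface, scaling from three machines to three thousand is a change of one parameter, not a change of process.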

Last but not least, today's IT managers have to face a very new problem: accountability.

In a corporation of medium complexity, several departments may work with virtual machines and share the same physical servers, using them in different proportions during a fiscal year.

When each of these departments has its own cost centre, it is pretty hard to track who is responsible for paying for the underlying hardware.

And even when costs are handled by a single entity inside the company, enforcing controls on who may use physical resources, and on how much of them can be requested, is very hard.
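The accounting side of the problem, at least, reduces to simple arithmetic once usage is actually measured. Here is a minimal chargeback sketch: department names, usage hours and the server cost are invented numbers, and real products like IBM's track far more dimensions than VM-hours.

```python
# Illustrative chargeback: split one shared server's cost for a period
# across departments in proportion to their measured VM usage hours.
usage_hours = {"finance": 1200, "engineering": 2400, "marketing": 400}
server_cost = 9000.0  # assumed cost of the shared hardware for the period

def chargeback(usage, total_cost):
    """Allocate total_cost across departments proportionally to usage."""
    total = sum(usage.values())
    return {dept: round(total_cost * hours / total, 2)
            for dept, hours in usage.items()}

print(chargeback(usage_hours, server_cost))
# {'finance': 2700.0, 'engineering': 5400.0, 'marketing': 900.0}
```

The hard part is not this division but the measurement feeding it: collecting trustworthy per-department usage data is exactly what dedicated accounting tools exist to do.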

At the moment only a small number of customers are addressing these kinds of issues, which are doomed to become common problems within a few years; those already in trouble may want to look at IBM's offering, which pioneered the segment with its Tivoli add-on, Usage and Accounting Manager.

Multiple platforms, multiple issues

The needs mentioned above increase further when a big company has to handle more than one virtualization platform.

In big corporations each department often has autonomy in choosing its preferred solutions, even if only one product will be used in the production environment.

So an IT manager may need to manage VMware ESX Server and Xen at the same time, hoping to exercise control through a single, centralized tool.

The market offering for such tools is multiplying as demand for them rises.

Solutions from IBM, Cassatt, BMC Software, Enomaly and Scalent are currently the most popular, but new contenders like Opsware are coming.

In many cases, support for multiple virtual infrastructures means IT managers do not have to worry about which technology was used to create a virtual machine: these tools are able to perform control and, where possible, application migration from one virtual hardware set to another, something which is otherwise achievable only with dedicated P2V tools.

When choosing one of these super-consoles, it is critical to verify that they leverage the existing management tools provided by the virtualization vendors; otherwise the return on investment may never come.

This article originally appeared on SearchServerVirtualization.