As I review the technical documentation landscape, it’s apparent that information is generally skewed toward the enterprise and upper midmarket space. Although there is information out there for everyone else, the very basics of virtualization are often glossed over.
In this series, I will provide you with ground-up information about virtualization.
Even today, although more workloads run on virtual machines than on physical hardware, there are organizations that have yet to take the virtualization plunge or that have done so only for small workloads. Often, organizations take their first small steps into virtualization when it comes time to replace older servers. In these scenarios, organizations are focused on the cost-saving potential of virtualization: reducing the need for physical hardware.
This is all well and good, but why does it work? In short, virtualization is all about workload abstraction. Thanks to the hypervisor, which might be VMware vSphere, Microsoft Hyper-V or Citrix XenServer, among others, workloads run on top of a hypervisor-based software layer rather than directly on the underlying hardware. This abstraction makes it simple to shift the thinking from the server to the workload. In other words, IT starts to think less about the hardware necessary to run particular services and more about the services themselves.
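To make that shift in thinking concrete, here is a minimal sketch that inventories workloads rather than servers, assuming a vSphere environment managed through the pyVmomi Python SDK. The vCenter address, user name and password are placeholders, and error handling is omitted.

```python
# Minimal sketch: list workloads (VMs) independently of the hosts they
# happen to be running on. Connection details are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab shortcut; validate certificates in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret",
                  sslContext=context)
content = si.RetrieveContent()

# Walk the inventory and describe each workload; the physical host it
# currently sits on is almost an afterthought.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
for vm in view.view:
    host = vm.runtime.host.name if vm.runtime.host else "unplaced"
    print(f"{vm.name}: {vm.config.hardware.numCPU} vCPU, "
          f"{vm.config.hardware.memoryMB} MB RAM, currently on {host}")

Disconnect(si)
```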
Over the years, I’ve heard a number of arguments for not virtualizing, but most of them can be reasonably refuted when the environment is using the right tools. Here are some of the reasons organizations continue to avoid moving heavily into virtualization.
When a server running a single workload fails, just that workload is affected. However, as more workloads are added to a single hardware device, failure of that device affects an increasing number of workloads. By this thinking, keeping workloads on separate hardware serves the organization better from an application availability perspective.
This is definitely old-school thinking, especially when you start to think about some of the benefits that can be had with virtualization. First, mainstream enterprise hypervisor products and the services around them provide significant benefits from an availability perspective. Even for organizations that have just a handful of servers, the introduction of virtualization can have a positive availability impact.
Here’s how this magic works: when a physical host fails, the hypervisor’s high-availability features automatically restart the affected virtual machines on the surviving hosts, and live migration lets running workloads move between hosts so that planned maintenance no longer means downtime.
Obviously, some applications have their own availability mechanisms, such as clustering, that achieve similar goals. However, a hypervisor-based solution means that administrators can deploy high availability for multiple services in a consistent way, which can make the entire environment easier to manage.
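As an illustration of that consistency, here is a hedged sketch, again using pyVmomi, that enables vSphere HA for an entire cluster with a single setting. The cluster name "Production" is an assumption, and the connection object si comes from the earlier example; Hyper-V shops would accomplish the same thing with failover clustering.

```python
# Sketch: enable hypervisor-level high availability (vSphere HA) for every
# workload in a cluster at once. The cluster name is an assumed placeholder.
from pyVmomi import vim

content = si.RetrieveContent()
cluster_view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in cluster_view.view if c.name == "Production")

# One spec protects every VM in the cluster: if a host fails, its VMs are
# restarted automatically on the remaining hosts.
spec = vim.cluster.ConfigSpecEx(
    dasConfig=vim.cluster.DasConfigInfo(enabled=True))
cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
```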
When physical servers are initially provisioned, most administrators configure them to meet the maximum needs expected over the lifetime of the server. As time goes on, administrators can adjust resources by adding and removing memory, disk space and processing power. However, how many organizations actually do that with physical hardware?
In a virtual environment, with the focus shifting from the server to the workload, administrators can instead decide which resources are necessary for an application and, as needs change, can adjust those resource allocations through simple software tools. No more is there a need to crack open a server to add memory. Now, with a few clicks of the mouse, an administrator can add memory, disk and processing power from a shared resource pool.
In short, resource allocation modification to meet changing needs can be accomplished in seconds in a virtual environment.
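Here is a brief sketch of what such a change looks like in code, again with pyVmomi and the connection object si from the first example. The virtual machine name and the new allocation are placeholders, and hot-adding CPU or memory only takes effect immediately if hot add is enabled for that VM; otherwise the change applies at the next power cycle.

```python
# Sketch: grow a VM's CPU and memory allocation from the shared pool.
# VM name and sizes are placeholders.
from pyVmomi import vim

content = si.RetrieveContent()
vm_view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in vm_view.view if v.name == "app-server-01")

spec = vim.vm.ConfigSpec(numCPUs=4, memoryMB=16384)  # new allocation
vm.ReconfigVM_Task(spec=spec)  # completes in seconds, no screwdriver required
```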
It’s true that adding a hypervisor layer to an environment requires a skill set to manage that environment, but the configuration doesn’t need to be complex or onerous to implement. In fact, both VMware and Microsoft make it easy for organizations to dip their toes into the virtualization waters in a way that eases the company into the technology. Ultimately, administrators will come to see that managing a virtual environment doesn’t have to be greatly different from managing a physical one. A server is still a server, after all, even if it’s just a software construct.
To be fair, as organizations seek to add more virtualization-provided capabilities to the environment, the need for an expanded skill set will grow, but getting off the ground doesn’t require massive effort.
If you’re running applications that you feel are too big to run inside virtual machines, then the chances are good that these are critical applications that must stay operational. That said, even large applications are easily accommodated by today’s hypervisors, which scale quite well.
For example, in vSphere 5, a single virtual machine can support up to 32 virtual CPUs, 1 TB of RAM and terabytes of storage. Hosts can support 160 logical CPUs, 2 TB of RAM and up to 2,048 virtual disks. Accordingly, “scale” is not really an issue with virtualization these days.
In fact, given the importance of these large workloads, you may actually benefit from virtualizing them, since you can then use virtualization’s abstraction technologies to improve availability and provide more flexibility in the environment.
While these lines of reasoning may have had merit a few years ago, today’s hypervisors are more than up to the task of running even the most demanding workloads and, when used properly, can bring major benefits to the business using them. For organizations that have not yet made a move into virtualization or that are still using it only for simple workloads, now is the time to take a broader approach.
Small businesses don’t generally have a lot of servers. For argument’s sake, let’s assume that there are four servers in a small environment. There may be a file server, an application server, a database server and a mail server, for example. How would virtualization benefit such a small environment?
If these four workloads are running on individual servers, then each server is probably configured for peak performance. Further, when those servers are replaced, their replacements will also be sized for peak performance. Finally, if any one server fails, it will take days to recover while the organization awaits new hardware, rebuilds the services and recovers the data from backup.
In a virtual environment, all four workloads could instead be consolidated onto one or two physical hosts sized for their combined load rather than four individual peaks, and if a host fails, the affected virtual machines can be restarted on the remaining hardware or restored from image-level backups in hours rather than days.
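To put rough numbers on the consolidation argument, here is a back-of-the-envelope sketch. The core counts, memory sizes and utilization figures are invented for illustration only; real sizing should start from your own monitoring data.

```python
# Illustrative consolidation math for the four-server example.
# All figures below are assumptions, not measurements.
servers = {
    # name: (physical cores, GB RAM, average CPU utilization)
    "file":        (8, 16, 0.10),
    "application": (8, 32, 0.25),
    "database":    (8, 64, 0.35),
    "mail":        (8, 32, 0.20),
}

cores_purchased = sum(cores for cores, _, _ in servers.values())
cores_busy = sum(cores * util for cores, _, util in servers.values())
ram_required = sum(ram for _, ram, _ in servers.values())  # RAM is less compressible than CPU

print(f"CPU purchased across four servers: {cores_purchased} cores")
print(f"CPU actually busy on average:      {cores_busy:.1f} cores")
print(f"RAM required for all workloads:    {ram_required} GB")

# Even with generous headroom, the combined load fits on one well-sized
# virtualization host, with a second host added for failover capacity.
headroom = 2.0
print(f"Suggested host capacity: ~{cores_busy * headroom:.0f} cores "
      f"and {ram_required} GB RAM")
```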
Of course, there are some downsides. In this scenario, the organization will need to either build or buy the skill set necessary to operate the environment. This can be done by training internal staff or by hiring a consultant.
Further, there will need to be careful thought given to licensing, both at the hypervisor level and with regard to each individual virtual machine. I’ll discuss licensing in a future part of this series.