Virtualization is the modern way to maximize IT resources. Virtualization can apply to applications, servers, storage, and networks, and it is the single most effective way to reduce IT expenses while providing users better access to systems from wherever they are working. With virtualization, applications are contained in virtual machines (VMs), which are isolated from each other but share a pool of resources managed by a hypervisor.
But without proper planning, your virtualization project could be destined to fail.
There are many virtualization hypervisors in the marketplace today, from the big players such as VMware and Microsoft to open source Linux-based systems. All these platforms were created to allow the end user to create and run virtual machines within the user's environment. An important factor in selecting a hypervisor is weighing price against dependability and reliability, because the goal of the virtualization design should be to provide high availability for mission-critical applications.
The first step in any virtualization architectural project is to understand what the cost of downtime would mean to the business. The higher the cost of downtime, the more robust the design should be. All projects must begin with a thorough understanding of the company’s current environment and the expectations or vision of the company's leaders.
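Understanding the cost of downtime can be made concrete with simple arithmetic. As an illustration (the function names and figures here are hypothetical, not from any specific tool), an availability target translates directly into hours of downtime per year, which can then be multiplied by an estimated hourly cost to the business:

```python
def annual_downtime_hours(availability_pct: float) -> float:
    """Hours of downtime per year implied by an availability percentage.

    Example: 99.9% availability allows roughly 8.76 hours/year of downtime.
    """
    hours_per_year = 24 * 365
    return hours_per_year * (1 - availability_pct / 100)


def annual_downtime_cost(availability_pct: float, cost_per_hour: float) -> float:
    """Expected annual downtime cost at a given availability target.

    cost_per_hour is the business's estimated loss per hour of outage.
    """
    return annual_downtime_hours(availability_pct) * cost_per_hour
```

For example, at a 99.9% availability target and an assumed outage cost of $10,000 per hour, the implied annual exposure is about $87,600, which helps frame how much a more robust design is worth.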
There are many details to consider when designing a virtualization infrastructure.
- Physical equipment
- Networking
- Backups
- Virtual environment
- Licensing
1) Information Gathering
The best way to gather the required information about the existing environment is with a scanning tool such as SysTrack Virtual Machine Planner, Microsoft Assessment and Planning Toolkit, or VMware’s Capacity Planner. These tools will capture information such as server host names, operating systems, the number and type of CPUs, and storage devices deployed, as well as provide CPU, memory, and network load metrics.
The goal here is to gather accurate metrics about the current environment that will help determine the proper design/configuration of a virtual implementation.
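The kind of per-server data these scanning tools record can be sketched with a minimal standard-library script. This is only an illustration of the inventory concept, not the output format of SysTrack, the Microsoft Assessment and Planning Toolkit, or VMware Capacity Planner:

```python
import os
import platform
import socket


def host_inventory() -> dict:
    """Collect basic facts about the local host, similar in spirit to
    what a capacity-planning scanner records for each server: host name,
    operating system, CPU count, and architecture."""
    return {
        "hostname": socket.gethostname(),
        "os": f"{platform.system()} {platform.release()}",
        "cpu_count": os.cpu_count(),
        "arch": platform.machine(),
    }
```

A real assessment would also sample CPU, memory, and network load over time (typically weeks) to capture peak as well as average demand, since VM sizing based only on a point-in-time snapshot tends to under-provision.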
Equally important as this technical information is determining the vision, or end goal, from the perspective of the company's leadership team. You need to discuss costs, interview users, and pinpoint the project requirements.
2) Acceptable Risk Versus Budget
Often the acceptable risk and the budget are at odds with one another. Most businesses are not willing to incur the cost associated with their vision of a robust and scalable computing environment.
There is often no easy answer to this issue and compromises must be made. Companies must factor in the myriad costs associated with licensing, hardware/infrastructure upgrades, monitoring software, and staff retraining.
3) RTO & RPO
Let's define the terms recovery time objective (RTO) and recovery point objective (RPO), as they are often misused. RTO is the maximum amount of time the business can tolerate a system being unavailable after a failure. RPO is the maximum amount of data loss the business can accept, measured in time; it is determined largely by how frequently backups are taken.
These objectives should be considered when implementing any new virtual environment. The virtualization design should plan for hardware failures and build in enough resources so the failure of one piece of hardware does not bring the business to a halt (RTO). Planning for hardware failures is the first step in building a resilient system, but what would happen if your system lost data? A backup solution should be put in place to accommodate the company's RPO. (Learn more in "5 Questions to Ask About Managed Backup.")
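The relationship between a backup schedule, restore time, and these two objectives can be expressed as a simple check. This sketch assumes periodic full backups, where the worst-case data loss is one full backup interval; the function names are illustrative, not from any product:

```python
def worst_case_data_loss_hours(backup_interval_hours: float) -> float:
    """With periodic backups, the worst case is failing just before the
    next backup runs, losing one full interval of data."""
    return backup_interval_hours


def meets_objectives(backup_interval_hours: float, restore_hours: float,
                     rpo_hours: float, rto_hours: float) -> bool:
    """True if the backup schedule satisfies the RPO (data loss window)
    and the expected restore time satisfies the RTO (downtime window)."""
    return (worst_case_data_loss_hours(backup_interval_hours) <= rpo_hours
            and restore_hours <= rto_hours)
```

For example, backups every 4 hours with a 2-hour restore meet a 6-hour RPO and a 4-hour RTO, while backups every 8 hours would violate that same RPO.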
You can improve the reliability and scalability of your computing infrastructure by using Corserva's virtualization design services. Our team is extremely knowledgeable about the latest virtualization technologies, and we have extensive experience leading virtualization design projects for companies looking to optimize their infrastructure. Contact us today.