Hyper-V on SAN - real world advice

Question
-
I've been running Hyper-V for a while now. We just got a new SAN. I'm looking for any real-world experience on how one might best configure SAN storage. Do you simply treat a LUN as another drive on the host? Do you set up one LUN per VM? Do you use a LUN as a pass-through disk for the VM?
I'm looking for best-practice advice, taking into account Live Migration potential for Hyper-V R2.
Thanks much,
Blake - Tuesday, July 28, 2009 2:27 PM
Answers
-
In the simplest way of thinking - a LUN is a disk that is presented to a system.
The system that is using the LUN uses it as if it is any other disk.
When it comes to SAN storage with Hyper-V, it depends on the purpose and type of storage and your desired configuration.
Do you want to use the SAN for VHD / VM storage?
Do you have a VM with high disk IO or a large database that could benefit from having its own storage volume on the SAN?
What type of SAN do you have? (Fibre Channel attached, iSCSI attached, etc.) - this will define your limitations.
The flexibility is practically endless - there are any number of combinations that can be derived, depending on your needs and the needs of the workloads that you are running in virtual machines.
Tell us a bit more about the type of workloads that you plan to virtualize and a bit more about the SAN you envision, and I am sure there are community members that would be more than happy to provide some guidance.
Brian Ehlert (hopefully you have found this useful)
- Proposed as answer by Nathan Lasnoski, MVP, Wednesday, July 29, 2009 5:13 PM
- Marked as answer by Vincent Hu, Moderator, Friday, July 31, 2009 6:21 AM
Tuesday, July 28, 2009 4:58 PM (Moderator) -
Hello,
This is a great question. Some key notes about this are:
* In Hyper-V RTM you needed to allocate one LUN per VM for Quick Migration and failover in a high-availability cluster. In Hyper-V R2 you have Cluster Shared Volumes, allowing you to put more than one VM on a LUN and to do Live Migration. This is immensely simpler, especially if your SAN is iSCSI.
* There are a lot of opinions on how best to configure RAID storage for VMs. I take a "better safe than sorry" approach and usually run RAID 10 on SAN storage supporting VMs. The performance is consistent, you have great storage availability, and it's supported on almost every SAN.
* 99% of the time I'm configuring SAN volumes that are exposed to the host system and hold VHDs (a sketch of that layout follows below). I like this because I can (1) achieve near-identical performance, and (2) have significantly greater flexibility than pass-through. Pass-through and iSCSI targeted to VMs have their uses, but they are not the norm.
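To make that "host-visible SAN volume full of VHDs" layout concrete, here is a minimal Python sketch that inventories the VHD files on such a volume and reports the headroom left. The path C:\ClusterStorage\Volume1 is only an example (the default Cluster Shared Volume mount point in R2); substitute wherever your SAN volume is actually mounted.

```python
import os
import shutil

# Hypothetical example path: a SAN-backed volume exposed to the Hyper-V host
# (the default Cluster Shared Volume mount point in R2 is C:\ClusterStorage\VolumeN).
VHD_ROOT = r"C:\ClusterStorage\Volume1"

def report_vhd_usage(root):
    """List every VHD under the volume and show how much space the volume has left."""
    total_vhd_bytes = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.lower().endswith(".vhd"):
                path = os.path.join(dirpath, name)
                size = os.path.getsize(path)
                total_vhd_bytes += size
                print(f"{size / 2**30:8.1f} GiB  {path}")

    usage = shutil.disk_usage(root)  # (total, used, free) for the underlying volume
    print(f"\nVHDs found : {total_vhd_bytes / 2**30:.1f} GiB")
    print(f"Volume free: {usage.free / 2**30:.1f} GiB of {usage.total / 2**30:.1f} GiB")

if __name__ == "__main__":
    report_vhd_usage(VHD_ROOT)
```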
Thanks,
Nathan Lasnoski
- Proposed as answer by Nathan Lasnoski, MVP, Wednesday, July 29, 2009 5:12 PM
- Marked as answer by Vincent Hu, Moderator, Friday, July 31, 2009 6:21 AM
Tuesday, July 28, 2009 5:20 PM -
Whether FC or iSCSI, using SAN storage will increase the flexibility, scalability, and performance of your solution - and probably also its complexity. That's not a warning, just a real-world note. Working with Volume GUIDs can be painful, but it becomes necessary once you scale a Hyper-V Failover Cluster beyond the ~24 available drive letters. In the 2008 RTM release, you need to map a single SAN LUN to a single VM. This fits the Quick Migration approach, where the running memory state is written to disk and ownership of the LUN is then cut over to another node.
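As a small aside on the Volume GUID point, here is a minimal sketch that collects those GUID paths using mountvol, which ships with Windows. The parsing of mountvol's plain-text output is approximate and may need adjusting for your OS language.

```python
import re
import subprocess

# "mountvol" with no arguments prints every volume GUID path on the host,
# followed by its mount points (if any).
output = subprocess.run(
    ["mountvol"], capture_output=True, text=True, check=False
).stdout

guid_paths = re.findall(r"\\\\\?\\Volume\{[0-9a-fA-F-]+\}\\", output)

print(f"{len(guid_paths)} volume GUID paths found:")
for path in guid_paths:
    # A \\?\Volume{...}\ path identifies the volume without consuming a
    # drive letter, which matters once a cluster runs out of letters.
    print(" ", path)
```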
Server 2008 R2 offers Live Migration and the ability to cut the dependence on the single-disk/single-VM model. Live Migration uses a selected network interface to migrate the running state of the VM while pre-staging the VM on another node. Cutover is near instantaneous (sometimes referred to as the "brownout") because of this architecture. This leverages the in-box addition of Cluster Shared Volumes. While it doesn't require CSV, you do need a clustered file system of some sort (there are other vendor solutions available).
However, you may still choose to have dedicated LUNs for specific VMs - particularly systems that require high performance or generate lots of disk I/O. The general guideline from Microsoft is roughly one Cluster Shared Volume per cluster node; evaluate your results from there.
Hope this helps,
--Ryan
Ryan Sokolowski | MCT, MCITP x3, MCTS x10, MCSE x2, CCNA, CCDA, BCFP
- Proposed as answer by Nathan Lasnoski, MVP, Wednesday, July 29, 2009 5:12 PM
- Marked as answer by Vincent Hu, Moderator, Friday, July 31, 2009 6:21 AM
Tuesday, July 28, 2009 7:03 PM
All replies
-
This is a normal FC SAN. I am familiar with SANs and familiar with Hyper-V. Just not familiar with the two in the same environment.
I am thinking of setting up the data drive of my SQL Server as a pass-through disk directly to a FC LUN, and then simply treating a SATA LUN as another disk on the host and putting the VHD files there for almost everything else. We will be going to R2 when it comes out, so, as Nathan said, I'm hoping I can do a Live Migration with multiple VMs on the same LUN.
For the SQL Server - I won't be using snapshots anyway, so that isn't an issue. We back up the systems directly, so I don't care about 'backing up' a VHD of the SQL Server data.
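For that pass-through plan, the host-side prep is small but easy to forget: the LUN has to be offline on the parent before Hyper-V will offer it as a physical (pass-through) disk. A minimal sketch, assuming the FC LUN appeared as disk 3 on the host (the number comes from diskpart's "list disk") and that it is run elevated:

```python
import subprocess
import tempfile

# Hypothetical: the FC LUN meant for the SQL data drive appeared as "Disk 3"
# on the host. The disk must be offline on the parent before it can be
# attached to a VM as a physical (pass-through) disk. Run this elevated.
DISK_NUMBER = 3

script = "\n".join([
    f"select disk {DISK_NUMBER}",
    "attributes disk clear readonly",
    "offline disk",
])

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write(script)
    script_path = f.name

# diskpart /s runs the commands from a script file non-interactively.
subprocess.run(["diskpart", "/s", script_path], check=True)
print(f"Disk {DISK_NUMBER} is offline; attach it under the VM's IDE/SCSI "
      "controller settings as a physical hard disk.")
```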
I'm sure everything will be RAID 10.
Tuesday, July 28, 2009 5:42 PM -
The advice to date has been great. Thanks to all.
How can I deal with the 2TB limitation of .vhd files? What are the options?
Thanks
Blake - Wednesday, July 29, 2009 2:00 PM -
Hello,
In these cases I've broken the storage up into several smaller disks rather than using one gigantic VHD. You could also use a pass-through disk in a situation where you need a larger drive.
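A minimal sketch of the "several smaller disks" approach, with hypothetical paths and sizes: it writes a diskpart script that creates a few fixed VHDs, each comfortably under the VHD format's ~2 TB ceiling (diskpart's maximum= value is in megabytes), and runs it. The disks still have to be attached to the VM and then spanned, striped, or used as separate volumes inside the guest.

```python
import subprocess
import tempfile

# Hypothetical layout: carve one large data set into several fixed VHDs,
# each safely under the ~2 TB VHD ceiling, all on the same host-visible
# SAN volume. Run elevated; fixed VHDs of this size take a long time to create.
VHD_DIR = r"C:\ClusterStorage\Volume1\FileData"   # example path
VHD_COUNT = 3
VHD_SIZE_MB = 1_900_000                           # roughly 1.9 TB each

lines = [
    f'create vdisk file="{VHD_DIR}\\data{i}.vhd" maximum={VHD_SIZE_MB} type=fixed'
    for i in range(1, VHD_COUNT + 1)
]

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("\n".join(lines))
    script_path = f.name

subprocess.run(["diskpart", "/s", script_path], check=True)
print(f"Created {VHD_COUNT} VHDs under {VHD_DIR}; attach them to the VM and "
      "span, stripe, or mount them as separate volumes inside the guest.")
```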
Thanks,
Nathan Lasnoski - Wednesday, July 29, 2009 5:08 PM -
I figured those were the options - I'm not sure what the users will need but they have lots of data.
Thanks again
Blake - Wednesday, July 29, 2009 5:09 PM -
Personally, I would never create a single VHD of this size.
It is one of those situations where, just because you can, should you?
This is a perfect fit for SAN storage presented to your VM as a pass-through disk, or for the VM directly attaching via iSCSI.
Then you use different backup techniques for the data than you do for the VM itself: either SAN-level backup solutions, or more traditional agent-based solutions that run within the VM (as they do on physical hardware).
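For the "VM directly attaching via iSCSI" option, here is a minimal sketch run inside the guest, assuming a hypothetical portal address and target IQN and the built-in iscsicli.exe tool (verify the exact syntax on your OS build):

```python
import subprocess

# Hypothetical values: the SAN's iSCSI portal address and the target IQN
# for the data LUN. Run inside the guest VM.
PORTAL_IP = "192.168.10.50"
TARGET_IQN = "iqn.2009-07.com.example:vm-data-lun"

def run(cmd, check=True):
    print(">", " ".join(cmd))
    subprocess.run(cmd, check=check)

# Start the Microsoft iSCSI Initiator service (ignore the error if it is
# already running), register the portal, then log in to the target. The
# LUN then shows up in the guest as a local disk, bypassing the VHD layer.
run(["net", "start", "msiscsi"], check=False)
run(["iscsicli", "QAddTargetPortal", PORTAL_IP])
run(["iscsicli", "QLoginTarget", TARGET_IQN])
run(["iscsicli", "ListTargets"])
```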
Just because you virtualize the workload does not mean that you have to virtualize everything.
You also have to consider restoration of that data, and the time that each method will take.
Brian Ehlert (hopefully you have found this useful)
Thursday, July 30, 2009 3:00 PM (Moderator)