Hyper-V Virtual Machine Virtual Network Adapters Explained


In Hyper-V, a virtual machine has one or more virtual network adapters, sometimes also called virtual NICs or vNICs. A vNIC connects to a virtual switch on the host, which allows the virtual machine to talk to other virtual machines on the same virtual switch. An external virtual switch also has a port that connects to a physical NIC on the host, which allows virtual machines connected to that switch to talk on the LAN, and potentially on the Internet. Note that all of this assumes the machines are on the same VLAN or are routed, that firewalls are not blocking communications, and that other technologies such as Hyper-V Network Virtualization or port ACLs are not in the way. In this post I will discuss the types of vNICs available and how to add them to a virtual machine.

Types of Virtual NICs

Hyper-V offers two kinds of virtual NICs that can be used in virtual machines – one for the past and one for now.

Synthetic Network Adapter

The first kind is simply known as a “network adapter,” but you can think of it as the synthetic network adapter. The synthetic network adapter requires that the guest OS is Hyper-V-aware; in other words, the child partition is enlightened, meaning it is running either the integration components for Windows or the Linux Integration Services.

Hyper-V will add a single synthetic vNIC to a virtual machine’s specification by default, and you can add up to eight synthetic vNICs to a single virtual machine. The screen shot below shows a generation 1 virtual machine with a single synthetic vNIC. Note that this vNIC has a name (VM01). By default, a synthetic vNIC is called “Network Adapter” when created in Hyper-V Manager or in Failover Cluster Manager; this virtual machine was created using System Center Virtual Machine Manager (SCVMM), so the vNIC was given a label. You can also name vNICs using PowerShell, which can be handy if you want to create lots of vNICs in a single virtual machine.
A default synthetic network adapter in a generation 1 virtual machine.
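
You can do the same with PowerShell. Here is a rough sketch; the virtual machine name VM01, the adapter names, and the switch name External1 are assumptions for illustration, not values from the screen shot:

# Add a second synthetic vNIC with a descriptive name and connect it to a virtual switch
Add-VMNetworkAdapter -VMName VM01 -Name "Backup" -SwitchName External1

# Rename the default vNIC ("Network Adapter") so it is easier to identify later
Rename-VMNetworkAdapter -VMName VM01 -Name "Network Adapter" -NewName "Production"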

Legacy Network Adapter

The second kind is a legacy network adapter. You can have up to four legacy network adapters in a virtual machine. The name suggests the primary focus of this type of vNIC; the legacy network adapter is intended to be used in unenlightened virtual machines that do not have the integration components or Linux Integration Services installed.

A common question on forums is "Why can’t my Windows XP VM connect to a network?" The answer is that one of the following must be true:

* The guest OS must be Windows XP SP3, which supports the Hyper-V integration components, if you want to use the default synthetic network adapter.
* Otherwise, you must replace the synthetic network adapter with a legacy network adapter, and remember to configure the TCP/IP stack in the guest OS.

Another reason to use the legacy network adapter is that it supports PXE network boots; synthetic vNICs do not support PXE in generation 1 virtual machines. Virtual machines are commonly used with System Center Configuration Manager (SCCM) and Windows Deployment Services (WDS) for developing and testing OS deployment, so you will find yourself using legacy vNICs quite a bit if you use generation 1 virtual machines.
Legacy Network Adapter settings in a generation 1 virtual machine.
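
PowerShell can also add this type of adapter. A minimal sketch (VM01 and External1 are assumed names) for a generation 1 virtual machine that needs to PXE boot:

# Add a legacy network adapter so this generation 1 VM can PXE boot
Add-VMNetworkAdapter -VMName VM01 -Name "PXE" -IsLegacy $true -SwitchName External1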

Hyper-V uses synthetic network adapters by default for a reason: they offer more functionality and better performance. Legacy network adapters are less efficient, causing more context switches between kernel mode and user mode on the host processor.

Generation 2 Virtual Machines

The generation 2 virtual machine was added in Windows Server 2012 R2 (WS2012 R2) Hyper-V to give us a new virtual machine virtual hardware specification that was legacy-free. It should therefore come as no surprise that generation 2 virtual machines do not offer legacy network adapters.

Generation 2 virtual machines only support synthetic network adapters. You can have eight of these efficient vNICs in a single generation 2 virtual machine.
Generation 2 virtual machine with many network adapters.

There is no need for the legacy network adapter in generation 2 virtual hardware. Remember that generation 2 virtual machines only support 64-bit editions of Windows 8 and later, and Windows Server 2012 (WS2012) and later, so enlightenment is not an issue. Thanks to the new virtual hardware specification, Microsoft was able to add PXE support to generation 2 synthetic vNICs.
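
As a hedged example (the VM name, memory size, path, and switch name below are assumptions), creating a generation 2 virtual machine whose synthetic vNIC can PXE boot takes a single line of PowerShell:

# Create a generation 2 VM; its synthetic vNIC supports PXE boot
New-VM -Name VM02 -Generation 2 -MemoryStartupBytes 1GB -SwitchName External1 -NewVHDPath "C:\VMs\VM02.vhdx" -NewVHDSizeBytes 60GB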

Posted 06/08/2014 by Petri in VMware

Creating a NIC Team and Virtual Switch for Converged Networks


A common solution for creating a converged network for Hyper-V is to aggregate links using a Windows Server 2012 (WS2012) or Windows Server 2012 R2 (WS2012 R2) NIC team, such as in the below diagram. We will look at how you can do this in Windows Server and then connect a virtual switch that is ready for Quality of Service (QoS) to guarantee minimum levels of service to the various networks of a Hyper-V cluster.

What Is a NIC Team?

A NIC team gives us load balancing and failover (LBFO) through link aggregation:

  • Load Balancing: Traffic is balanced across the physical NICs (team members) that make up the NIC team. The method of balancing depends on the version of Windows Server and on whether or not the NIC team will be used by a Hyper-V virtual switch.
  • Failover: The team members of a NIC team are fault tolerant. If a team member fails or loses connectivity, then the traffic of that team member is switched automatically to the other team member(s).

NIC teams have two types of settings. The first is the teaming mode:

  • Switch Independent: The NIC team is configured independently of the top-of-rack (TOR) switches. This allows you to use a single TOR switch (physical or stacked) or multiple independent TOR switches.
  • Static Teaming (switch dependent): With this inflexible design, each switch port connected to a team member must be manually configured for that team member.
  • LACP (switch dependent): The switch ports must be configured for LACP, which allows the switch and the NIC team to automatically identify and negotiate the team member connections.

Load balancing configures how outbound traffic is spread across the NIC team. WS2012 has two options (with sub-options) and WS2012 R2 has three options (with sub-options):

  • Address Hash: This method hashes address information (such as IP addresses and ports) of each outbound packet and uses the result to decide which team member the traffic should traverse. This option is normally used for NIC teams that will not be used by a Hyper-V virtual switch to connect to a physical network.
  • Hyper-V Port: Each virtual NIC (management OS or virtual machine) is associated at startup with a team member using round robin. That means a virtual NIC will continue to use the same single team member unless there is a restart. You still get link aggregation because the set of virtual NICs is spread across the set of team members in the NIC team.
  • Dynamic: This is a new option in WS2012 R2. Traffic, even from a single virtual NIC, is spread fairly evenly across all team members.

Note that inbound traffic flow is determined by the TOR switch and is dependent upon the teaming mode and load balancing setting.

Recommended Settings

We’ll keep things simple here, because you can write quite a bit about all the possible niche scenarios of NIC teaming. The teaming mode will often be determined by the network administrator. Switch independent is usually the simplest and best option.

If a NIC team will be used by a virtual switch, then:

  • The NIC team should be used by that single virtual switch only. Don’t try to get clever: one NIC team = one virtual switch.
  • Do not assign VLAN tags to the single (only) team interface that is created for the NIC team.
  • On WS2012, load balancing will normally be set to Hyper-V Port.
  • On WS2012 R2, Dynamic (the default option) looks like the best way forward because it can spread outbound traffic from a single virtual NIC across team members, unlike Hyper-V Port.

Creating the NIC Team

Note: If you want to implement the VMM Logical Switch or use Windows/Hyper-V Network Virtualization then you should use VMM to deploy your host networking instead of deploying it directly on the host yourself.

There are countless examples on the Internet of creating a WS2012 NIC team using LBFOADMIN.EXE (the tool that is opened from a shortcut in Server Manager). Scripting is a better option, because a script allows you to configure the entire networking stack of your hosts consistently across each computer, especially if you have servers that support Consistent Device Naming (CDN). Scripting is also more time efficient: write it once and run it in seconds on each new host. The following PowerShell examples require that the Hyper-V role is already enabled on your host.

The following PowerShell snippet will:

  • Rename the CDN NICs to give them role-based names. Change the names to suit your servers. You should manually name the NICs if you do not have CDN-enabled hosts.
  • Team the NICs in a switch independent team with Dynamic load balancing.

# Rename the CDN-named NICs with role-based names (adjust to suit your hardware)
Rename-NetAdapter "Slot 2" -NewName VM1

Rename-NetAdapter "Slot 2 2" -NewName VM2

# Create a switch independent NIC team with Dynamic load balancing
New-NetLbfoTeam -Name ConvergedNetTeam -TeamMembers VM1,VM2 -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
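
Before building anything on top of the team, it is worth confirming that the team and its members are up. This quick check is a suggested verification, not part of the original script:

# Verify the team state and the status of each team member
Get-NetLbfoTeam -Name ConvergedNetTeam

Get-NetLbfoTeamMember -Team ConvergedNetTeam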

Converged Networks and Creating the Virtual Switch

In the converged networks scenario you should create the virtual switch using PowerShell. This allows you to enable and configure QoS. The following PowerShell snippet will:

  • Create a virtual switch with weight-based QoS enabled and connect it to the new NIC team
  • Reserve a share of bandwidth for any virtual NIC that does not have an assigned QoS policy. This share is assigned a weight of 50. That would be 50% if the total weight assigned to all items was 100.

New-VMSwitch "ConvergedNetSwitch" -NetAdapterName "ConvergedNetTeam" -AllowManagementOS 0 -MinimumBandwidthMode Weight

Set-VMSwitch "ConvergedNetSwitch" -DefaultFlowMinimumBandwidthWeight 50
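
From here, a typical converged design adds management OS vNICs to the switch and gives each one a bandwidth weight. The following is a hedged sketch of that next step; the network names and weights are assumptions and should be adjusted to your design:

# Create management OS vNICs on the converged switch
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "ConvergedNetSwitch"

Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedNetSwitch"

# Guarantee each vNIC a minimum share of bandwidth (weights, not percentages)
Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 10

Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 20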

What Has Been Accomplished in the Design

Running these PowerShell cmdlets will create a NIC team ready for a Hyper-V virtual switch. It will also create a virtual switch that is ready for Hyper-V converged networking.

Posted 06/08/2014 by Petri in VMware

Create a NIC Team Inside of a Hyper-V Virtual Machine


You might wonder why you would want to do it, but it is possible to create a NIC team inside of a virtual machine. In this blog post, I will explain the reasons you might create such a NIC team, along with the necessary configurations.

Why a Guest OS NIC Team Might Be Required

Normally you will never create NIC teams inside of virtual machines. A best practice is to implement what is illustrated in the figure below. A pair of physical NICs in the host are designated for virtual networking use. These two NICs are ideally plugged into different top-of-rack switches and are teamed in the management OS of the Hyper-V host.

The default team interface is used to connect a virtual switch. Virtual machines are connected to this virtual switch. With this design, your virtual machines get network path redundancy, where you have more than one of each physical device (NIC and top-of-rack switch) to connect the virtual machines to the network core. We don’t need multiple vNICs in the virtual machines or virtual switches in the hosts because these are virtual devices that do not suffer hardware failures.

Reasons to Create a NIC Team Inside of a Guest OS VM

What are legitimate reasons to create a NIC team inside of the guest OS of a virtual machine? There are two that I can think of.

  1. To experiment with NIC teaming
    The first reason is simple enough; you want to experiment with or demonstrate NIC teaming, and you do not have any physical machines to work with. I typically use a virtual machine to show new Windows or Hyper-V administrators how to use LBFOADMIN.EXE to create and configure NIC teams.
  2. To get network path fault tolerance for in-guest services
    The second reason is related to a hardware feature that Hyper-V supports which is incompatible with host-level NIC teaming. Single-Root I/O Virtualization (SR-IOV) is a feature made possible by SR-IOV-capable NICs. SR-IOV directly connects the virtual function (VF) used by a virtual machine’s vNIC to a physical function (PF) on a host’s network card.

This direct connection bypasses the networking features of the host, such as the virtual switch and the host’s NIC team. The result of enabling SR-IOV is that a host’s NIC team offers no network path fault tolerance for the virtual machine. And that means if you want network path fault tolerance for your in-guest services, then you will have to implement NIC teaming within the guest OS.

The host has two SR-IOV capable NICs. A virtual switch is created for each NIC — SR-IOV enhanced traffic will bypass the virtual switch, but the virtual switch is used to map the virtual machine’s vNICs to the correct physical NIC.

The virtual machine is given two virtual NICs, and each virtual NIC is connected to one of the virtual switches. This means that each virtual NIC has a different virtual and physical path to the network core in the data center. The two virtual NICs are teamed in the guest OS, which provides load balancing and failover across the two virtual NICs using a single IP address.

Virtual Machine NIC Team Considerations

There are several different considerations to keep in mind when creating a NIC team inside of a virtual machine:

  • Enable NIC Teaming: A Hyper-V administrator must enable NIC teaming in the settings of the virtual machine.
  • Two virtual NICs: A team in a physical server may contain up to 32 NICs. A team that is created inside of a virtual machine is only supported with two virtual NICs, even though nothing will stop you from creating larger teams.
  • External Virtual Switches: You must ensure that each of the two virtual NICs is connected to a different external virtual switch. Private/Internal virtual switches will leave the team in a disconnected state. Using just one virtual switch is not supported.

Enabling NIC Teaming

The setting for enabling NIC teaming in a guest OS is controlled by the Hyper-V administrator. Edit the settings of the virtual machine, expand the network card that will be a part of the NIC team, and browse into Advanced Features. There you will find a setting called Enable The Network Adapter To Be A Part Of A Team In The Guest Operating System. This setting is disabled by default; check the box to enable this virtual NIC to become a part of a NIC team in this virtual machine’s guest OS.

Enabling this setting can lead to a lot of clicking. PowerShell can be a lot quicker. The following example will enable NIC teaming for all NICs in a virtual machine called VM01.

Set-VMNetworkAdapter -VMName VM01 -AllowTeaming On

Once you enable NIC teaming in the virtual machine settings, you can proceed with creating the NIC team as usual in the guest OS. Note that the configuration will be locked down to switch independent teaming, and the load balancing algorithm will also be locked down (to an address hash method); this is the only supported NIC team configuration within a guest OS.
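
As a rough sketch of that final step inside the guest OS (the adapter names "Ethernet" and "Ethernet 2" are assumptions based on Windows defaults):

# Create the guest team; switch independent with an address hash algorithm (TransportPorts)
New-NetLbfoTeam -Name GuestTeam -TeamMembers "Ethernet","Ethernet 2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts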

Posted 06/08/2014 by Petri in VMware

Microsoft Hyper-V 3.0


Hyper-V 3.0 is the virtualization feature built into the client version of Windows 8 and into Windows Server 2012 (known during development as Windows Server 8). It is also offered as a free stand-alone product, Hyper-V Server.

Hyper-V 3.0 builds on older versions of Hyper-V, which create a virtualized environment in which multiple partitions run. The parent partition runs on top of the hypervisor layer and enables the management of child partitions. Hyper-V uses the term "partition" to refer to an independent, isolated environment in which a guest operating system and its applications can run.

Distinguishing itself from previous iterations of the hypervisor, Hyper-V 3.0 has an extensible virtual switch, which enables advanced networking features such as extensions that inspect, monitor, and sample traffic. Hyper-V 3.0 also offers live storage migration which, in a step up from Windows Server 2008 R2’s Quick Storage Migration, does not require downtime. It is also capable of migrating virtual machines (VMs) without shared storage.

Hyper-V 3.0 is built to be scalable: it supports clusters of up to 64 nodes and up to 8,000 virtual machines per cluster.

Posted 06/05/2014 by Petri in VMware

How to Enable Hyper-V Virtual Machine Processor Compatibility Mode


Windows Server 2008 R2 (W2008 R2) introduced live migration to Hyper-V. Live migration allows running virtual machines to move from one host to another with no perceivable downtime. There are some boundaries that are enforced by hardware on this type of virtual machine movement. In this article, I will show you how to enable processor compatibility mode in Hyper-V to allow a virtual machine to move between different generations of the same processor family.

How Processors Restrict Virtual Machine Movement

A hypervisor will reveal the capabilities of the physical processor to a virtual machine when that virtual machine boots up. The virtual machine will then use those features to run services. Over the years, AMD and Intel have added features, especially for virtualization, that enhance the performance and security of those virtual machines. Obviously neither Intel nor AMD can back-port their hardware enhancements to already deployed processors, so there is a potential issue. Let’s get something clear first: You cannot live migrate or restore saved virtual machines across different families of processor. This means, for example, that you cannot do the following:

* Live migrate from a host with Intel processors to a host with AMD processors
* Restore a virtual machine on an Intel host from a saved state that was created on a host with an AMD processor

The reason is quite simple: The processors are in different families and have completely different instructions and features. There is no way around this. So the advice is simple: Go all-Intel or all-AMD within your expected migration boundaries.

Imagine you have a virtual machine running on Host1. Host1 has a “Generation 1” processor from manufacturer X. You want to live migrate that virtual machine to Host2, which has “Generation 3” processors. The newer processors have a superset of features; in other words, all of the features on the older processor exist, or are compatible, on the newer processor. The live migration will work because the virtual machine is using features that will still work on the newer host. The newer processor also has many more features that cannot be found on the older processor. Say you start another virtual machine on Host2. It will find all those lovely new features of the newer processor that do not exist on Host1. You will not be able to live migrate this second virtual machine to Host1, because the newer processor enhancements that the running virtual machine is already using are not available on the older host. This incompatibility of features between different generations of a processor family also impacts restoring virtual machines.

Processor Compatibility Mode

We have had the ability since W2008 R2 to enable processor compatibility mode in the processor settings of a virtual machine. This setting allows you to live migrate or restore a virtual machine across different generations of processor within the same family (Intel to Intel, or AMD to AMD). Enabling processor compatibility is easy: open the settings of the required virtual machine, expand Processor, select Compatibility, and check the box.
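
If you prefer PowerShell, the same setting can be toggled with Set-VMProcessor (VM01 is an assumed virtual machine name, and the virtual machine must be shut down first):

# Enable processor compatibility mode so the VM can move between processor generations
Set-VMProcessor -VMName VM01 -CompatibilityForMigrationEnabled $true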

There is a significant price to enabling this feature. The hypervisor will hide all of the advanced processor virtualization enhancements from the virtual machine when it boots up. This will reduce the performance of your services. My suggestion is that this feature should be used as a last resort. Instead, you should always try to keep your processors in the same generation. This is hard to do when you are adding servers over time and manufacturers keep adding new stock. So here are my ideas:

  • When you start a new cloud or host farm, buy the newest processor and server spec that you can afford. This will give you a longer window for buying more of the same spec later on.
  • When you need to add more host capacity, go back to that spec and get more of the same.

This won’t be a complete solution if you’re adding host capacity over multi-year periods because hardware does change. But it will increase the size of your live migration boundaries.

Posted 04/04/2014 by Petri in VMware

What Is Non-Uniform Memory Access (NUMA)?


Non-Uniform Memory Access (NUMA): Overview

Non-uniform memory access is a physical architecture on the motherboard of a multiprocessor computer. The architecture lays out how processors or cores are connected directly and indirectly to blocks of memory in the machine. Software such as Windows Server or Hyper-V must deal with this physical construction to offer the best possible performance for their services.

The below diagram illustrates a physical computer with two NUMA nodes. The cores and memory of the computer are split between these NUMA nodes. The cores (0-3) in Node 0 have direct access to half of the memory. The cores (4-7) in Node 1 have direct access to the other half of the memory. Note that the cores of each node have indirect access to the RAM in the other node.

An illustration of NUMA.

If any process running on the cores of node 0 requests memory, a NUMA-aware operating system (such as Windows Server or Linux) or application (such as SQL Server) will do its best to assign RAM from the same node. This is because direct access to RAM offers the best performance. For example, let’s say the above machine is running Hyper-V and a virtual machine is running on the logical processors (LP) in node 0. If that virtual machine requests RAM, Hyper-V will always try to assign RAM from node 0.

On the other hand, if there is no RAM available in the same node, the application, operating system, or Hyper-V will have no choice but to assign RAM from another node. If our virtual machine, running in node 0, needs more RAM, and the only place to get that RAM is node 1, then Hyper-V will assign that RAM by default. There is a performance penalty for accessing that remotely assigned RAM, but at least the RAM was assigned.

Have you ever noticed the instructions for your server telling you how to match up RAM with your processors? That’s NUMA in action.

Detecting NUMA Architecture

The NUMA architecture of the most common Hyper-V hosts is usually pretty simple: dual sockets (processors) = two NUMA nodes. But there are other ways to tell, including:

* Running Coreinfo from Microsoft Sysinternals
* Using Performance Monitor to query NUMA Node Memory \ Total MBytes
* Running the Get-VMHostNumaNode PowerShell cmdlet on a Hyper-V host (see the sketch below)
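
The PowerShell option is the quickest check on a Hyper-V host; a minimal sketch with no assumptions beyond having the Hyper-V module available:

# List the NUMA nodes that Hyper-V sees on the local host
Get-VMHostNumaNode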

Hyper-V and NUMA

As you can see above, Hyper-V is designed to optimize how virtual machine processors and memory are assigned to get the best possible performance. There are a few features that have been added over the versions to optimize that performance:

* Disable NUMA node spanning
* Guest-aware NUMA
* Customizable virtual processor NUMA topology (to be covered in a later article)

Disable NUMA Node Spanning

When you enable Dynamic Memory (W2008 R2 SP1 Hyper-V and later) there is a chance that you will have many virtual machines constantly adding and removing tiny amounts of RAM from the host’s capacity. Hyper-V will try to keep virtual machines within their NUMA node, but if that node’s block of RAM is fully used, Hyper-V will have to span nodes and assign RAM via indirect access from another NUMA node. That can impact not only the performance of the virtual machine, but of the entire host.

If you find a drop in performance in a host and can correlate that to NUMA node spanning via Performance Monitor (compare Hyper-V VM Vid Partition : Remote Physical Pages as a percentage of Hyper-V VM Vid Partition : Physical Pages Allocated per virtual machine) then you can disable NUMA node spanning in the host’s Hyper-V settings.
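
Spanning can also be disabled with PowerShell; a minimal sketch (note that this is a host-wide setting that affects all virtual machines on that host):

# Stop Hyper-V from assigning a virtual machine RAM from a remote NUMA node
Set-VMHost -NumaSpanningEnabled $false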

Guest Aware NUMA

Windows Server 2012 (WS2012) allowed us to create virtual machines with more than four (up to 64) virtual CPUs for the first time. However, this created a possible problem: The virtual CPUs (vCPUs) of larger virtual machines would be almost guaranteed to span NUMA nodes. How would the guest OS or guest services that are running in the virtual machine avoid spanning NUMA nodes?

A feature called Guest Aware NUMA, or Virtual NUMA, was added in WS2012. When you start a virtual machine (it must not have Dynamic Memory enabled), Hyper-V reveals to the virtual machine the physical NUMA architecture that the virtual machine is residing on. The guest OS of the virtual machine treats this virtual NUMA layout as if it were the NUMA architecture of a physical computer. The guest OS and any NUMA-aware services will assign RAM from within the virtual machine to align with the NUMA node of the associated processes.

Virtual NUMA is disabled in a virtual machine once you enable Dynamic Memory for that virtual machine.

Posted 04/04/2014 by Petri in VMware