Export Hyper-V Configuration Using PowerShell


When Windows 8 and Windows Server 2012 were released, we also received a new PowerShell module. Within this module are many cmdlets designed to make it easy to manage Hyper-V hosts and virtual machines directly from PowerShell. Many of these cmdlets are command-line versions of functionality that exists within the graphical Hyper-V Manager. But sometimes even these cmdlets may not meet your needs. As a case in point, consider the Export-VM cmdlet. This cmdlet will export a virtual machine to disk, including its disk files and snapshots. In other words, a backup.

Using PowerShell to Export a Hyper-V Configuration

I’m assuming that if you are running Hyper-V in a production environment, then you probably have invested in a backup solution. What I want to demonstrate in this article isn’t intended to replace those products, but rather supplement them. If you run a smaller shop, a lab environment, or client Hyper-V on a Windows 8 or later desktop, then this article may be especially handy.

The problem is that when you use the Export-VM cmdlet, you get everything, and given the size of the virtual machine's hard drives and snapshots, this process may take some time to complete. But perhaps you only want to export the configuration itself? I was working on this problem when I came across someone with this exact issue. He wanted to export the virtual machine configuration so that he could import it later.

The configuration that you see when you run Get-VM or look at a virtual machine's settings in Hyper-V Manager is stored in an XML file. The file location is included in the virtual machine object.

The name of the file is the same as the virtual machine’s id.

It is pretty easy to get that file with PowerShell.
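For example, something like this (CHI-TEST01 is a placeholder name; ConfigurationLocation and Id are properties on the object that Get-VM returns):

```powershell
# Get the virtual machine and look at where its configuration lives
$vm = Get-VM -Name CHI-TEST01

$vm.ConfigurationLocation   # the folder that holds the VM's files
$vm.Id                      # the GUID that names the XML file
```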

The file should exist in a sub-folder called Virtual Machines, but sometimes it is not there. There is also a possibility that there might be multiple XML files, so I do this to get the full file name.
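A sketch of that step, using the same approach as the full function later in this article (CHI-TEST01 is a placeholder name):

```powershell
$vm = Get-VM -Name CHI-TEST01

# Search the configuration location recursively for the VM's XML file,
# taking only the first match in case more than one turns up
$config = Get-ChildItem -Path $vm.ConfigurationLocation -Filter "$($vm.Id).xml" -Recurse |
    Select-Object -First 1 -ExpandProperty FullName
```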

Next, I need to create the destination folder and copy the xml file to it.
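A minimal version of those two steps might look like this (CHI-TEST01 and D:\Export are placeholders):

```powershell
$vm = Get-VM -Name CHI-TEST01
$config = Get-ChildItem -Path $vm.ConfigurationLocation -Filter "$($vm.Id).xml" -Recurse |
    Select-Object -First 1 -ExpandProperty FullName

# Build the destination path, create the folder if needed, then copy the file
$destination = "D:\Export\$($vm.Name)\Virtual Machines"

if (-Not (Test-Path -Path $destination)) {
    New-Item -Path $destination -ItemType Directory | Out-Null
}

Copy-Item -Path $config -Destination $destination
```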


In terms of a quick and easy export or backup, that's all there is to it. But I'm always thinking about what other requirements someone might have. Assuming you might import the configuration file, it might be helpful to provide a new name for the virtual machine. Or remove hard drive references so that if you import on a different Hyper-V host you don't get ugly errors. Or remove snapshot references. Plus, you most likely want to do this from the comfort of your desktop. So I created a PowerShell function called Export-VMConfiguration.

#requires -version 3.0

Function Export-VMConfiguration {

<#
.Synopsis
Export Hyper-V configuration file.
.Description
This command will export a Hyper-V virtual machine’s configuration file to a new location. You can use this as an alternative to Export-VM which will also export hard drives and snapshots. Sometimes you just want the configuration file.

The command has options to provide a new name, as well as remove hard drive or snapshot references. This can be helpful if you plan on importing the configuration later as the basis for a new virtual machine.

The path must exist and is relative to the computer. The command will create a subfolder for each virtual machine under this path. The path must be a local, fixed drive. You cannot specify a UNC or shared drive.
.Example
PS C:\> get-vm chi* -computername chi-hvr2 | export-vmconfiguration -path d:\exports -verbose

This command will export, or backup, the configuration files for all virtual machines that start with CHI* to a new location on CHI-HVR2.
.Example
PS C:\> Export-VMConfiguration chi-test01 -Path E:\exports -Verbose -NewName CHI-TEST2 -RemoveDrives -RemoveSnapshots

This command will get the CHI-TEST01 virtual machine on the local host and export its configuration. The new configuration will reflect a new name and remove existing drives and snapshots.
.Notes
Last Updated: October 13, 2014
Version : 1.0

Learn more:
PowerShell in Depth: An Administrator’s Guide (http://www.manning.com/jones6/)
PowerShell Deep Dives (http://manning.com/hicks/)
Learn PowerShell in a Month of Lunches (http://manning.com/jones3/)
Learn PowerShell Toolmaking in a Month of Lunches (http://manning.com/jones4/)

****************************************************************
* DO NOT USE IN A PRODUCTION ENVIRONMENT UNTIL YOU HAVE TESTED *
* THOROUGHLY IN A LAB ENVIRONMENT. USE AT YOUR OWN RISK. IF    *
* YOU DO NOT UNDERSTAND WHAT THIS SCRIPT DOES OR HOW IT WORKS, *
* DO NOT USE IT OUTSIDE OF A SECURE, TEST SETTING.             *
****************************************************************

.Link
Get-VM
Export-VM
#>

[cmdletbinding(DefaultParameterSetName="Name")]

Param(
[Parameter(Position=0,Mandatory=$True,HelpMessage="Enter the name of a virtual machine",
ValueFromPipeline=$True,ParameterSetName="Name")]
[Alias("vmname")]
[ValidateNotNullorEmpty()]
[string[]]$Name,

[Parameter(Position=0,Mandatory=$True,ValueFromPipeline=$True,
ParameterSetName="VM")]
[Microsoft.HyperV.PowerShell.VirtualMachine[]]$VM,

[Parameter(Mandatory=$True,HelpMessage="Enter the destination path")]
[string]$Path,

[Alias("cn")]
[Parameter(ValueFromPipelineByPropertyName=$True)]
[ValidateNotNullorEmpty()]
[string]$Computername = $env:computername,

[Alias("rename")]
[string]$NewName,

[Alias("NoDrives")]
[switch]$RemoveDrives,

[Alias("NoSnapshots")]
[switch]$RemoveSnapshots
)

Begin {
Write-Verbose -Message "Starting $($MyInvocation.Mycommand)"
Write-Verbose "Using parameter set $($PSCmdlet.ParameterSetName)"
} #begin

Process {

#create a scriptblock so this can run remotely
$sb = {
[cmdletbinding()]
Param(
[
string]$VerbosePreference
)
#$VerbosePreference = "continue"
if ($using:vm) {
Write-Verbose "Processing VM: $($using:VM.Name)"
$myVM = $using:vm
}
else {
Write-Verbose "Processing $using:name"
#get the virtual machine
Try {
$myvm = Get-VM -Name $using:name
}
Catch {
Write-Warning "Failed to find virtual machine $using:name on $($env:computername)"
}
} #else

if ($myVM) {

foreach ($v in $myVM) {
#proceed if we have a virtual machine object
#create the target configuration path
$config = dir $v.configurationlocation -filter "$($v.id).xml" -recurse | Select -first 1 -ExpandProperty fullname
Write-Verbose "Processing configuration file: $config"
If ($config) {

#define the target path
#use the New name if specified
if ($using:NewName) {
$vmname = $using:NewName
}
else {
$vmname = $v.name
}
$destination = Join-Path $using:path "$vmname\Virtual Machines"
Write-Verbose "Testing $destination"
if (-Not (Test-Path -Path $destination)) {
#create the folder
Try {
Write-Verbose "Creating $destination"
New-Item -Path $destination -ItemType directory -ErrorAction stop | Out-Null
}
Catch {
Throw
}
} #if destination doesn’t exist

if (Test-Path -Path $destination ) {
Write-verbose "Exporting $config to $destination"
Try {
Copy-item $config -destination $destination -errorAction Stop -passthru
}
Catch {
Throw
}

#post processing
Write-Verbose "Post processing"
$newConfig = Join-Path -Path $destination -ChildPath "$($v.id).xml"
[xml]$new = Get-Content -path $newConfig

$prop = $new | select-xml -XPath "//properties/name"

#insert a note
Write-Verbose "Inserting a note"
$notes = $new.selectNodes("//notes")
$noteText = @"
Exported $(Get-date) from $($prop.node.innertext)
$($notes[0].InnerText)
"@
$notes[0].InnerText = $noteText

if ($using:NewName) {
#rename
Write-Verbose "Renaming to $($using:NewName)"
$prop.node.InnerText = $using:NewName
} #if new name

#remove drives
if ($using:RemoveDrives) {
Write-Verbose "Removing drive references"
$drivenodes = $new | Select-Xml "//*[starts-with(name(),'drive')]"

foreach ($item in $drivenodes) {
$pn = $item.node.parentnode
$pn.RemoveChild($item.node) | Out-Null
}

} #remove drives

#remove snapshots
if ($using:RemoveSnapshots) {
Write-Verbose "Removing snapshot references"
$snapnodes = $new | Select-Xml "//snapshots"
$snapnodes.node.RemoveChild($snapnodes.node.list) | Out-Null

$pii= $new | Select-Xml "//parent_instance_id"
$new.configuration.properties.RemoveChild($pii.node) | Out-Null

} #remove snapshots
#save revised configuration
Write-Verbose "Saving XML to $newconfig"
$new.save($newconfig)

} #if destination verified
} #If configuration file found
else {
Write-Warning "Failed to find a configuration file for $($v.name)"
}
} #foreach
} #if VM
} #close scriptblock

Invoke-Command -ScriptBlock $sb -ComputerName $Computername -ArgumentList $VerbosePreference
} #process

End {
Write-Verbose -Message "Ending $($MyInvocation.Mycommand)"
} #end

} #end function

The command has complete help and examples.

The heart of the command is a series of steps to modify the XML document and save it to the new location. For example, one of the tasks I wanted to accomplish was to add a note to the configuration that reflected this was exported. PowerShell makes it an easy chore to modify XML documents.

[xml]$new = Get-Content -path $newConfig

$prop = $new | select-xml -XPath "//properties/name"

#insert a note
Write-Verbose "Inserting a note"
$notes = $new.selectNodes("//notes")
$noteText = @"
Exported $(Get-date) from $($prop.node.innertext)
$($notes[0].InnerText)
"@
$notes[0].InnerText = $noteText

Or I can rename and remove drives.

if ($using:NewName) {
#rename
Write-Verbose "Renaming to $($using:NewName)"
$prop.node.InnerText = $using:NewName
} #if new name

#remove drives
if ($using:RemoveDrives) {
Write-Verbose "Removing drive references"
$drivenodes = $new | Select-Xml "//*[starts-with(name(),'drive')]"

foreach ($item in $drivenodes) {
$pn = $item.node.parentnode
$pn.RemoveChild($item.node) | Out-Null
}
} #remove drives

The function uses PowerShell remoting so that you can run the command against a Hyper-V host from your desktop. In fact, your desktop doesn’t need any of the Hyper-V commands because all of those are running on the remote server. This also means that the path you specify, which must already exist, is relative to the remote computer. The export process will create a subfolder under this path for each virtual machine. If you take advantage of the rename process, then the subfolder will reflect the new name.

Now I can export, or back up, just the configuration for selected virtual machines on my Hyper-V server.

PS C:\> Export-VMConfiguration -Name CHI* -Path D:\exports -Computername chi-hvr2 -Verbose

Or I can get a virtual machine and pipe it to my export command.

PS C:\> get-vm chi-win81 -ComputerName chi-hvr2 | Export-VMConfiguration -path d:\exports -NewName CHI-ClientBase -RemoveDrives -RemoveSnapshots

This will get a single virtual machine, rename it to CHI-ClientBase, and remove hard drive and snapshot references.


Posted 02/02/2015 by Petri in VMware

Microsoft’s View on Hyper-Convergence


What is Hyper-Convergence?

The traditional deployment of a vSphere or Hyper-V farm has several tiers connected by fabrics. The below diagram shows a more traditional deployment using a storage area network (SAN). In this architecture you have:

Storage trays
Switches (iSCSI or fiber channel)
Storage controllers
Virtualization hosts (application servers)

In the Hyper-V world, we can build a software-defined alternative to a SAN called a Scale-Out File Server (SOFS), where:

RAID is replaced by Storage Spaces
Storage controllers are replaced by a Windows Server transparent failover cluster
iSCSI and fiber channel are replaced by SMB 3.0
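
From the host's point of view, the change is mostly the path: the virtual machine's files simply live on an SMB 3.0 share instead of a LUN. A sketch (\\sofs\vms1 is a placeholder share on the SOFS):

```powershell
# Create a new VM whose configuration and VHDX live on the SOFS share
New-VM -Name "Test01" -Path "\\sofs\vms1" -MemoryStartupBytes 1GB `
    -NewVHDPath "\\sofs\vms1\Test01\Test01.vhdx" -NewVHDSizeBytes 60GB
```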

But for the most part, the high-level architecture doesn’t really change:

In the world of hyper-convergence, we simplify the entire architecture to a single tier of servers/appliances that run:

Virtualization
Storage/cluster network
Storage

I say “simplify” but under the covers, each server will be running like a hamster on a wheel … on an illicit hyper-stimulant.

What Microsoft Says About Hyper-Convergence

Hyper-convergence is a topic that has only been getting headlines for the last year or so. This is mainly thanks to one vendor that is marketing like crazy, and VMware promoting their vSAN solution. I had not heard Microsoft share a public opinion about hyper-convergence until TechEd Europe 2014 in October.

The message was clear: Microsoft does not think hyper-convergence is good. There are a few reasons:

The method of growing a hyper-converged infrastructure is to deploy another appliance, adding storage, memory, processor, and licensing requirements. This is perfect if your storage and compute needs grow at the same rate, but that is about as rare as geese that lay golden eggs. Storage requirements normally grow much faster than the need for compute. We are generating more information every day and keeping it longer. Adding the costs of compute and associated licensing when we could instead be growing an affordable storage tier makes no sense.
Performance: Hyper-converged storage is software defined. Software-defined storage, when operated at scale, requires compute power to manage the storage and perform maintenance (such as a restore after a disk failure). Which do you prioritize in a converged infrastructure: virtual machine computational needs, or the functions of storage management, which in turn affect the performance of those same virtual machines?

This is why Microsoft continues to push solutions based on a storage tier and a compute tier, which are connected by converged networks, preferably leveraging hardware offloads (such as RDMA) and SMB 3.0. The recently launched Dell/Microsoft CPS is such a solution.

Posted 02/02/2015 by Petri in VMware

6 Hyper-V Predictions for 2015


It’s time once again for every tech author to make dauntless divinations about what will happen in the IT industry in the following year. Hopefully I will take delivery of my robot chauffeur and moon buggy soon, but in the meantime, much like I did last year, I’ll make some predictions about what I think will happen in the world of Microsoft virtualization in 2015. I sure hope I do better this time around than I did for 2014!

1. Docker Hype

We’re going to be hearing a lot about a technology called Docker. This snowball started to roll in recent months when Microsoft announced Azure support for Docker via a partnership with the company. Docker can work in Azure right now through a command-line interface (CLI) that was launched for Windows clients, and we know that Docker is coming to Windows Server in the future.

Docker is more an addition to Azure or Hyper-V than a distinct competitor. In machine virtualization we have isolation between machines, with each machine having its own operating system. This security boundary contains an OS, libraries, and typically one application. Containerization is done on a per-application basis. A Docker engine runs on a single operating system. Containers for different applications are deployed on shared libraries that are hosted by this shared operating system. The Docker engine could be installed on a physical server, or it could run in the guest OS of a virtual machine (Azure or Hyper-V).

The benefit of Docker is that you can deploy applications nearly instantly from a library of containerized apps. This is a great benefit in a private deployment where change is frequent, such as in a DevOps environment. For example, a farm of a few virtual machines could run lots of application instances.

I think we are going to hear a lot about Docker in 2015, with the trickle becoming a torrent. But I don’t think many IT pros will use Docker in 2015, as it will be early days, and I think Docker’s market is geared toward larger organizations that demand a pace of change that even a flexible private/public cloud cannot offer.

2. Windows Server Licensing Will Switch to Per-Core

At the moment, we purchase Windows Server (and therefore Hyper-V virtual machines running Windows Server) on a per-physical-processor basis. Each Windows Server license covers two physical processors, no matter how many cores are in each processor. If you license a physical host with two Intel E5-2699 v3 processors (18 cores each = 36 total cores or 72 logical processors), then you can license that host for an unlimited number of virtual machines with just one copy of Windows Server Datacenter Edition, which costs $6,155 with Open NL licensing. Remember that this license can be installed on the host and you can enable Hyper-V for free.

Here’s the issue: Just a year ago, you most likely required four processors and two datacenter licenses to get a host with that sort of core density. Microsoft would have made $12,310 on Open NL license fees. Intel continues to increase the core density of hosts, and this is reducing Microsoft’s earning power in larger host deployments.
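The arithmetic is easy to check (a sketch using the Open NL figures quoted above):

```powershell
# Windows Server Datacenter (Open NL) covers two physical processors
$datacenterPrice = 6155

# A year ago: 36 cores meant four processors, so two Datacenter licenses
$oldHost = 2 * $datacenterPrice    # 12310

# Today: two 18-core processors need just one license
$newHost = 1 * $datacenterPrice    # 6155

"Lost revenue per host: {0}" -f ($oldHost - $newHost)   # 6155
```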

I believe that Microsoft will switch from a per-pair-of-processors licensing model to a per-core pricing system with the release of Windows Server vNext. I can understand why, because newer processors will reduce their earnings. Hopefully Microsoft will calculate the per-core pricing based on the typical processor (6-8 cores) that most of us still use. This would mean that we wouldn’t necessarily see a price rise.

3. Windows Server vNext Release Details

I could be lazy and tell you that Windows Server vNext will arrive in the second half of 2015, but everyone in the business knows that. Instead, I will take some chances. I am guesstimating that general availability (not RTM) of Windows Server will be in September.

Microsoft typically uses the same naming system as games by EA Sports. The reasoning for this is simple — Microsoft’s financial year starts in July, so anything released after July uses the following year. The next version of Windows Server will be released in Microsoft’s financial year of 2016 and will be called Windows Server 2016.

4. Microsoft Azure Momentum

If you attend any Microsoft events or you have any dealings with a Microsoft employee, then expect to hear a lot about Azure in the first six months of 2015. Microsoft is going to be doing a lot to get you hooked on their public cloud. Microsoft wants customers to try out their public and hybrid offerings. If you’ve got an on-premises environment, then they’ll be making big pushes with online backup and disaster recovery in the cloud. Services such as site-to-site VPN and ExpressRoute have had recent upgrades, and the number of WAN partners will continue to grow. Microsoft Ignite (the super conference in Chicago in May 2015) will seem like a great big cloud conference!

No matter what you might think of the public cloud (and I was a skeptic of Microsoft’s offerings!) you cannot get over:

The pricing
The incredible rate of improvement of their infrastructure-as-a-service (IaaS) offerings
The hybrid offerings that supplement (not replace) on-premises hardware.

A lot more IT pros will be managing some services in Azure by the end of 2015 than at the end of 2014, but the focus will remain on hybrid cloud.

5. You Still Won’t Be Using PowerShell

This is a depressing prediction. Most of you reading this article will not be using PowerShell to simplify management, deployment, and operations, or to enable automation. I am not a programmer, but I’ve taken to PowerShell; I used it to drive the demos from my presentation at TechEd Europe 2014. I use PowerShell to rebuild my demo lab at work. I use PowerShell scripts to provision large numbers of virtual machines in Hyper-V or Azure. And I use PowerShell to get predictable and fast results. There was a certain time investment to get all that stuff right, but as all scripters know, we start off by blatantly copying and pasting from other people’s work from the many available online resources and tweak from there. The investment, even in small deployments, is recouped later on when that script, function, or cmdlet is reused over and over again.

But no matter what I say, no matter what Jeffrey Snover says, and no matter what anyone at Microsoft says, most of you will ignore our advice on using PowerShell and will continue to use a subset of the product that you have purchased by never venturing beyond the GUI.

6. Hyper-Convergence Was the Rubik’s Cube of 2014

A number of hyper-convergence specialist companies have been shouting very loudly and very expensively about the benefits of their kind of compute and storage deployment. I get the feeling (my opinion and not the official view of Petri.com) that these companies are more successful at getting investment and the attention of the media than at selling their products.

I think hyper-convergence will go the way of Tab, parachute pants, leg warmers, Furby, the Rubik’s Cube, and music on MTV, and we’ll get back to operating compute on one tier and storage on another, connected by a high-speed fabric.

Posted 02/02/2015 by Petri in VMware

5 Hyper-V Skills


1. Hyper-V Deployment

I do not argue that Hyper-V still has a smaller total market share than vSphere. But we have evidence from IDC that Hyper-V continues to grow while vSphere is shrinking. And at least here in Europe, Hyper-V has a much larger presence than in the USA, possibly because we Europeans were slower to adopt virtualization. It’s for this reason that IT pros should start to learn how to deploy Hyper-V… correctly.

I work for a distributor, so I am normally one-step removed from working on customer sites; I sell to the resellers, I teach the resellers, and from time-to-time, I help the resellers or my local Microsoft office out when there is an issue with a project. When I am called into a Hyper-V project it is more often than not because a VMware consulting company has been trying to deploy Hyper-V with a design that they would normally use for vSphere. My Hyper-V MVP colleagues from around Europe have a plethora of stories of when they’ve been called to undo the damage done in Hyper-V deployments by vSphere experts.

Let me ask you a simple question: would you use the same techniques for a gasoline engine as you would for a diesel engine? Of course not, so why would anyone assume that the way to design vSphere is universally correct? I would not assume that how I would deploy Hyper-V is universally correct. If you are going to be working with Hyper-V, then you need to learn how to deploy Microsoft’s hypervisor correctly, including Failover Clustering, Windows Server networking, Windows Server storage, and more.

2. Learn System Center Virtual Machine Manager (SCVMM)

If you are going to deploy a larger Hyper-V farm, then you should strongly consider deploying System Center Virtual Machine Manager (SCVMM). For you vSphere folks, SCVMM is not Microsoft’s answer to vCenter; it is much more than a central console for hosts and virtual machines. I believe SCVMM should be renamed to System Center Fabric Manager because it is the tool that Microsoft envisions owners of larger systems will use to manage storage and networking, deploy bare-metal hosts and Scale-Out File Servers, and provision highly-available services with deep integration into any available load balancers.

In my experience, SCVMM is underused by those customers who do buy System Center licensing. Yes, Hyper-V Manager, Failover Cluster Manager, and PowerShell can do a lot, but SCVMM brings more to the table, especially in environments where change is frequent. There are three kinds of Hyper-V customers who buy System Center:

Sites that deploy other parts of System Center, typically SCOM and SCCM, but not SCVMM, and so miss out on lots of functionality.
Customers that do have SCVMM installed, but use it as nothing more than a centralized Hyper-V Manager.
The minority of sites that do make great use of SCVMM.

Not everyone can afford System Center, and there are those that have concerns with the complexity of SCVMM. But for those of you that do have that licensing, my advice is that you learn the tool and make better use of it. If you are in the role of service delivery (internal or external customers), then SCVMM is the starting point for building a Windows Azure Pack (WAP) powered private or public cloud.

3. Understand SMB 3.0

If you read my posts, then it should be no surprise that I am an advocate of using Scale-Out File Server (SOFS) storage for Hyper-V virtual machines. SOFS was the first technology to make real use of Microsoft’s strategic data transmission protocol, SMB 3.0. In the role of storage connectivity between hosts and a file share, SMB 3.0 competes against the likes of iSCSI, fiber channel, and fiber channel over Ethernet (FCoE). However, Microsoft had bigger plans for SMB 3.0. In Windows Server 2012 R2, SMB 3.0 can be used for Live Migration at speeds of 10 Gbps or faster. In Windows Server vNext, SMB 3.0 expands again to support synchronous and asynchronous replication between Windows servers.

SMB 3.0 is here to stay. There are three components or concepts you need to learn:

  • SMB Multichannel: What NICs are selected and how to control this
  • SMB convergence: How to design converged networks with bandwidth controls
  • SMB Direct: You are going to be pushing huge amounts of data, so you need to learn about hardware offloading via Remote Direct Memory Access (RDMA) using iWARP, RoCE, or InfiniBand
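
You can inspect the first and last of those from PowerShell on a machine with active SMB connections, using the in-box SMB cmdlets:

```powershell
# Which interfaces SMB Multichannel is actually using right now
Get-SmbMultichannelConnection

# Which client NICs are candidates, and whether they are RDMA (SMB Direct) capable
Get-SmbClientNetworkInterface |
    Select-Object FriendlyName, LinkSpeed, RdmaCapable, RssCapable
```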

4. Study Windows Server Converged Networking

For those of you running more than a few hosts, you need to release that death grip on 1 GbE networking. 10 GbE is here, and for some of you, you need to be looking at 40 Gbps, 56 Gbps, or even faster! This isn’t because of Hyper-V; this is because you are creating larger virtual machines, generating more data, and retaining data for longer than ever before. Stuff needs to move across networks, and it needs to happen quickly.

This typically requires investing in expensive switches (the NICs don’t actually have to be that expensive if you shop wisely), so you need to make the very most of every switch port. This is why we deploy converged networking, once the preserve of hardware solutions such as those found in blade server enclosures.

Right now, many people are deploying Hyper-V hosts with lots of 1 GbE NICs, or they’re using very expensive converged-networking hardware. Those who use 1 GbE networking have bandwidth limitations for individual services or virtual machines. And most of those who are using Windows Server 2012 R2 on blade servers have grown frustrated with Emulex, who have near-monopoly status in this market, taking a year to resolve outage issues with the firmware and drivers of their converged networking NICs.

I have been an advocate of using the features of converged networking that are free in Windows Server. Place a pair of 10 GbE or faster NICs into your host, team them, create a Hyper-V virtual switch that connects to that team, and connect the host and virtual machines to ports on the virtual switch. Now the host and virtual machines have access to 10 Gbps networking while consuming just two top-of-rack (TOR) switch ports. Don’t get caught up in CAPEX costs. Remember that the cost of powering six to ten 1 GbE ports per host is probably much more than powering two 10 GbE switch ports per host. OPEX cost savings over three years must be accounted for, too!
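
A sketch of those steps in PowerShell (the team, switch, and NIC names are placeholders; the bandwidth weight assumes the switch is created with -MinimumBandwidthMode Weight):

```powershell
# Team two 10 GbE NICs (adapter names are examples)
New-NetLbfoTeam -Name ConvergedTeam -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# Create a virtual switch on top of the team with weight-based QoS
New-VMSwitch -Name ConvergedSwitch -NetAdapterName ConvergedTeam `
    -MinimumBandwidthMode Weight -AllowManagementOS $false

# Give the management OS its own virtual NICs on the same switch
Add-VMNetworkAdapter -ManagementOS -Name Management -SwitchName ConvergedSwitch
Add-VMNetworkAdapter -ManagementOS -Name LiveMigration -SwitchName ConvergedSwitch

# Reserve a share of the bandwidth for live migration traffic
Set-VMNetworkAdapter -ManagementOS -Name LiveMigration -MinimumBandwidthWeight 40
```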

5. Embrace Microsoft Azure

2015 will be the year that you finally have to admit that you’re going to do some stuff in the cloud. I came to that realization in early 2014 after very successfully ignoring Office 365 and Microsoft Azure for a long time. Your bosses are going to be hearing a lot in the media and directly from Microsoft about the cloud. Pressure will build, and if you won’t do it, they’ll find someone who will.

There are easy ways to embrace the cloud to supplement your on-premises environment. Everyone struggles with the costs of disaster recovery. Maybe Azure Site Recovery will give you a reasonably priced solution for disaster recovery in the cloud? I hated working with tapes and tape drives, so maybe backup to Azure will serve your needs for off-site backup and archiving? Do you need a hybrid web presence? If so, check out Azure website hosting plans, starting with the free offerings. Or do you need a test/dev lab for IT or for the developers and testers? If so, why spend money on hardware when you can pay for what you use in Azure, with supervision and enforcement by IT?

6. Learn PowerShell

You’re using just 40% of the product you’ve invested in if you refuse to consider using PowerShell. Most of the magic that I show and talk about is driven by PowerShell and does not have a UI. I guarantee you that this percentage will decrease in the next version of Hyper-V. If you worked for me, I’d fire you. Harsh words, but sometimes a kick in the rear is required.

2015 will be a year of disruption in the world of Hyper-V. There will be the release of Windows Server vNext in the second half of the year. Azure will be purchased by more and more CIOs, with or without the cooperation of their IT departments; there’ll be no choice about deploying services in the public cloud. And storage and networking will continue to evolve. You can prepare yourself for a very interesting year by spending some time learning the above technologies.

Posted 02/02/2015 by Petri in VMware

Enable Telnet Client in Windows 8 and Server 2012


The Telnet client is one of the most basic connectivity and management tools that any IT professional needs, and this article will show you how to enable the Telnet client in Windows Server 2012 and Windows 8. The Telnet client not only lets you connect to a remote Telnet server and run applications on that server, but is also useful for testing connections to remote servers, such as ones running web services, SMTP services, and others.

Using the Telnet client is simple enough, and the use of Telnet clients has been covered in several different articles on the Petri IT Knowledgebase. The idea is that once the user has logged on, they get a command prompt interface that behaves as if it had been opened locally on the Telnet server’s console: any command the user types is sent to the Telnet server and executed there, and the output from that command is sent back to the Telnet client.

Telnet Client Options

Note: There are many Telnet client tools, many of them freely available on the Internet. There are even smartphone and tablet versions that you can download from Google Play or the Apple App Store, depending on the OS on your mobile device. For example, PuTTY is one of the most popular, as it can perform many types of remote connections, including to Telnet servers.

How to install the Telnet Client for Windows 8 and Server 2012

The Telnet client is a feature that has been included with Microsoft operating systems since Windows NT. However, starting with Windows Server 2008 and Windows Vista, it is no longer enabled by default. So unless you are going to use a third-party tool to assist you when you perform your remote connection and connectivity troubleshooting work, you will want to enable the Telnet client on your machine, just in case you need it.

There are several methods for installing or enabling the Telnet client in Windows Server 2012/R2/8.

Install the Telnet client from the GUI

There is a difference between Windows Server 2012/R2 and Windows 8.

In Windows Server 2012/R2:

1. Open Server Manager from the taskbar icon or from the Start page.
2. Click "Manage" and then "Add Roles and Features".
3. Click "Next" four times until you get to the "Select Features" page.
4. Click to select the "Telnet Client" feature. Click "Next".
5. Click "Install".
6. You can click "Close". No need to wait for the installation to complete.

In Windows 8:

1. Open Control Panel and click on "Uninstall a program" under "Programs".
2. Click on the "Turn Windows features on or off" link.
3. Select the "Telnet Client" checkbox.
4. Click "OK", and the feature will be installed.

Install the Telnet client from the Command Prompt

1. Open the Command Prompt window with elevated permissions (Run as Administrator).

Opening the command prompt

2. In the Command Prompt window type:
dism /online /Enable-Feature /FeatureName:TelnetClient

3. Once the command has finished, the Telnet client will be installed.

Install Telnet client from Windows command prompt

Install the Telnet client with PowerShell

1. Open the PowerShell window with elevated permissions (Run as Administrator).


2. In the PowerShell window type the following line:
Import-Module servermanager

3. Then type:
Add-WindowsFeature telnet-client
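Note that the `servermanager` module and `Add-WindowsFeature` are only available on Windows Server. On the Windows 8 client, a sketch of an alternative (assuming an elevated PowerShell window) uses the DISM cmdlets that ship with Windows 8 and Windows Server 2012:

```powershell
# On a Windows 8 client, the ServerManager module is not available.
# Enable the optional feature with the DISM PowerShell cmdlets instead.
Enable-WindowsOptionalFeature -Online -FeatureName TelnetClient

# Verify the feature state afterwards.
Get-WindowsOptionalFeature -Online -FeatureName TelnetClient
```

The feature name matches the one used with `dism.exe` earlier in this article.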

Posted 02/09/2014 by Petri in VMware

What’s New in VMware vSphere 6.0   Leave a comment


VMworld 2014 is coming to a close, and VMware administrators are just starting to sort through all of the news, new products, and other information coming out of the event. I’ve already written a bit about EVO: RAIL, the vRealize Suite, and vCloud Air, but there was a lot of other information released this week that deserves a deeper dive. One product that was mentioned often by VMware executives was the upcoming VMware vSphere 6.0 release, which VMware CEO Pat Gelsinger said was now available in public beta form.

Yet while many VMware execs mentioned individual features of vSphere 6.0 in piecemeal fashion, there wasn’t an umbrella announcement for vSphere 6.0, which means that we’re likely to get more official news in the weeks and months to come. That said, there was enough information released during VMworld for us to start assembling a picture of what vSphere 6.0 will have to offer feature-wise, so I’ve cobbled together some of the available information below.

Note: Given that VMware vSphere 6.0 is still in beta form, I’d expect the following list of features to be tweaked and revised as the product gets closer to final release. I’ll continue to update this post as new information becomes available, so please bookmark this page for future reference. If you know of a vSphere 6.0 feature that isn’t listed here, please drop me an email and I’ll credit you and add it to the post.

New Features in VMware vSphere 6.0

So what new features will VMware vSphere 6.0 have to offer? Some information was made public this week at VMworld 2014, and I’ve assembled the following list of new features, largely gleaned from the day one and day two keynotes, as well as some of the sessions and other information released at the show.

Virtual Volumes (VVols)

Mentioned during the day 1 keynote by VMware CEO Pat Gelsinger, virtual volumes (VVols) take the software-defined mindset and apply it to external storage. VMware’s Rawlinson Rivera goes into more detail as to what VVols can offer via a blog post on the VMware website, writing that VVols serves up an approach to storage in which an "individual virtual machine and its disks, rather than a LUN, become a unit of storage management for a storage system. Virtual volumes encapsulate virtual disks and other virtual machine files, and natively store the files on the storage system."

VVols were previewed years ago at VMworld 2012, and VMware has steadily been working on the technology since then. In the video embedded below (and in a companion VVols blog post) VMware gives a bit more information on what VVols are.

Fault Tolerance for Multi-Processor Virtual Machines

One long-awaited vSphere feature was support for fault tolerance for multi-processor virtual machines, and that functionality will be added in VMware vSphere 6. With vSphere 6, VMs with up to 64GB of RAM and 4 vCPUs will be covered by fault tolerance, which VMware’s Raghu Raghuram said should "…provide these application with zero downtimes."

vMotion Improvements

VMware vMotion is one of the most popular vSphere features; it allows a running virtual machine to be moved from one physical server to another without any downtime. VMware’s Raghu Raghuram mentioned during the VMworld 2014 Day 2 keynote that several improvements to vMotion are planned for vSphere 6, namely Cross vCenter vMotion and Long Distance vMotion.

  • Cross vCenter vMotion: This feature will allow applications to be migrated from physical racks managed by different instances of VMware vCenter. Prior to vSphere 6, migrating VMs between different instances of VMware vCenter wasn’t possible, so this feature should make the lives of VMware administrators a bit easier.
  • Long Distance vMotion: In addition to allowing admins to move VMs between instances of vCenter, vSphere 6 will also support the migration of applications from one datacenter to another datacenter located across the country. Raghuram said that, used in conjunction with updates to VMware NSX, the network properties of apps being migrated this way won’t have to be updated, adding that the technology is "science fiction in action."

Used in conjunction, these new features will improve load balancing and the performance of applications, and will provide for "proactive disaster avoidance" and seamless data center migration.

Posted 29/08/2014 by Petri in VMware

How Do I Manage Hyper-V?   Leave a comment


Hyper-V Management Basics for Small to Medium Deployments

Hyper-V Manager is the basic administration tool that is included in Windows Server and Windows 8/8.1 Pro/Enterprise. We normally use Hyper-V Manager for the following scenarios:

  • Managing a small number of hosts
  • Configuring host settings
  • Creating and managing virtual machines on non-clustered hosts
  • Troubleshooting a host

It’s typically bad practice to regularly log into hosts to manage them. Instead, you should install Hyper-V Manager on your PC and manage your hosts remotely. Not only is this a better practice, but it also makes administration quicker and easier. If you’re working in a business that’s still using older technology, such as Windows 7, then you can deploy a Windows Server 2012 RDS server, install Hyper-V Manager on it, and publish the application to Hyper-V administrators. That said, simply deploying Windows 8.1 to your IT staff may prove easier and more economical.

You must use a version of Hyper-V Manager that is compatible with the hypervisor version. For example, you cannot use Hyper-V Manager for Windows Server 2008 R2 to manage Windows Server 2012, as there is a mismatch of WMI versions. However, you can use the Hyper-V Manager in Windows 8.1 to manage Windows Server 2012 Hyper-V.

Highly Available Virtual Machines

If you deploy clustered hosts, then you will actually need to use two tools. If you want a virtual machine to be highly available (HA), then you will deploy and configure that virtual machine using Failover Cluster Manager. Similar to Hyper-V Manager, you should not use Failover Cluster Manager on the hosts. Instead, you should install it on your PC.

Unlike Hyper-V Manager, you will have to download the Remote Server Administration Tools (RSAT) for your version of the Windows client OS. Note that you must run Windows 8.1 on your PC to remotely manage Windows Server 2012 R2, Windows 8.1 or 8 to manage Windows Server 2012, and Windows 7 to manage Windows Server 2008 R2.

You will continue to use Hyper-V Manager to configure host settings, with one exception; Live Migration settings are configured in Failover Cluster Manager. Non-clustered hosts require a bit more security work to enable Live Migration, but a cluster has its own security boundary that makes Live Migration almost a click-and-go experience.

Creating virtual machines in Hyper-V Manager on a Hyper-V cluster is a common mistake made by people who are new to Hyper-V. The resulting virtual machine is not HA, but you can fix this with the following steps:

  • Using Live Storage Migration to move the virtual machine’s files to the cluster’s storage, if required.
  • Using Configure Role to add the existing virtual machine to the management scope of the cluster. No restarts are required of the virtual machine.
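The second step above can also be scripted. A minimal sketch using the FailoverClusters module (the VM name "VM01" is an assumption for illustration, and the VM's files must already be on cluster storage):

```powershell
# Import the failover clustering cmdlets (available on a cluster node
# or a PC with the RSAT failover clustering tools installed).
Import-Module FailoverClusters

# Add the existing, non-HA virtual machine to the cluster as a
# clustered role. No restart of the virtual machine is required.
Add-ClusterVirtualMachineRole -VMName "VM01"
```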

Prior to Windows Server 2012 Hyper-V, a common mistake was to configure virtual machine settings in Hyper-V Manager. Those settings would be lost if the virtual machine moved between nodes, because they were not saved in the cluster’s database. Using Failover Cluster Manager instead ensured that the settings of the virtual machine were updated in the cluster. Note that Windows Server 2012 prevents you from editing the settings of HA virtual machines in Hyper-V Manager.

Managing Hyper-V in Medium to Large Deployments

There are several different scenarios where you will purchase System Center with per-host licensing and use System Center Virtual Machine Manager (SCVMM) to manage Hyper-V, including:

  • Scale: You need a solution to centrally manage lots of hosts/clusters.
  • Cloud: You will be deploying public or private clouds.
  • Deployment: You need faster host, storage, virtual machine, or service deployment.
  • Delegation: You want to enable delegation of administration, enabling some administrators to manage some hosts.

Although SCVMM can in theory be used to deploy Hyper-V clusters, I still prefer to configure and manage clusters using Failover Cluster Manager, while using SCVMM’s bare-metal deployment to create the hosts. I have found that SCVMM is not so hot in this department.

Note that SCVMM is just one of eight products in the System Center license suite. You will choose from the System Center menu to enable other elements of management, protection, and automation.

Hyper-V Automation

PowerShell. Yes, I said it. If you want to do things at scale, get repeatable and predictable results, and do it quickly, then invest some time in Windows PowerShell to get the job done. You’ll also get access to deeper features that are not otherwise revealed in a GUI. You can use cmdlets from Hyper-V, Failover Clustering, and SCVMM to write your scripts in the easy-to-use Integrated Scripting Environment (ISE).
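As a small taste of what the Hyper-V module offers, here is a sketch of the kind of bulk operation that is tedious in the GUI but trivial in a script (run on a Hyper-V host; the property selection is illustrative):

```powershell
# Report all virtual machines on the local host with their state and memory.
Get-VM | Select-Object Name, State, MemoryAssigned, Uptime

# Start every VM that is currently off, in one pipeline.
Get-VM | Where-Object { $_.State -eq 'Off' } | Start-VM
```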

Using Hyper-V, System Center, and Windows Azure Pack

You will deploy a cloud whenever you want to enable self-service deployment of virtual machines and services. The front-end of a Microsoft cloud based on Hyper-V and System Center is the Windows Azure Pack (WAP). There are two portals:

  • The administrative portal: Used to manage the cloud
  • The tenant portal: Used by users of the cloud to deploy virtual machines and services

Note that you will continue to use System Center to manage and monitor the fabric of the cloud.

Posted 20/08/2014 by Petri in VMware

Hyper-V Virtual Machine Virtual Network Adapters Explained   Leave a comment


In Hyper-V, a virtual machine has one or more virtual network adapters, sometimes also called virtual NICs or vNICs. A vNIC connects to a host’s virtual switch. This allows the virtual machine to potentially talk to other virtual machines on the same virtual switch. An external virtual switch has a port that connects to a physical NIC on the host, which allows virtual machines that are connected to an external virtual switch to talk on the LAN, and potentially on the Internet. Note that this all assumes that the machines are on the same VLAN or are routed, don’t have firewalls blocking communications, and that other virtual technologies such as Hyper-V Network Virtualization or Port ACLs aren’t in the way. In this post I will discuss the types of NICs available and how to add them to a virtual machine.

Types of Virtual NICs

Hyper-V offers two kinds of virtual NICs that can be used in virtual machines – one for the past and one for now.

Synthetic Network Adapter

The first kind is simply known as a “network adapter,” but you can think of it as the synthetic network adapter. The synthetic network adapter requires that the guest OS is Hyper-V-aware; in other words, the child partition is enlightened or it is running either the integration components for Windows or the Linux Integration Services.

Hyper-V will add a single synthetic vNIC into a virtual machine’s specification by default. You can add up to eight synthetic vNICs into a single virtual machine. The screen shot below shows a generation 1 virtual machine with a single synthetic vNIC. Note that this vNIC has a name (VM01). By default, a synthetic vNIC is called “network adapter” when created in Hyper-V Manager or in Failover Cluster Manager. This virtual machine was created using System Center Virtual Machine Manager (SCVMM), so the vNIC was given a label. You can also name vNICs using PowerShell, which can be handy if you want to create lots of vNICs in a single virtual machine.

A default synthetic network adapter in a generation 1 virtual machine.
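Adding and naming vNICs from PowerShell is straightforward. A hedged sketch (the VM name "VM01", the adapter names, and the switch name are assumptions for illustration):

```powershell
# Add a second synthetic vNIC with a descriptive name and connect it
# to an existing virtual switch.
Add-VMNetworkAdapter -VMName "VM01" -Name "Backup" -SwitchName "External1"

# Rename the default adapter so that scripts can target it unambiguously.
Rename-VMNetworkAdapter -VMName "VM01" -Name "Network Adapter" -NewName "Production"
```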

Legacy Network Adapter

The second kind is a legacy network adapter. You can have up to four legacy network adapters in a virtual machine. The name suggests the primary focus of this type of vNIC; the legacy network adapter is intended to be used in unenlightened virtual machines that do not have the integration components or Linux Integration Services installed.

A common question on forums is "Why can’t my Windows XP VM connect to a network?" For the VM to connect, one of the following must be true:

* The guest OS is Windows XP SP3, which supports the Hyper-V integration components and can therefore use the default synthetic network adapter.
* You have replaced the synthetic network adapter with a legacy network adapter, and remembered to configure the TCP/IP stack.

Another reason to use the legacy network adapter is that it offers support for PXE network boots. Synthetic vNICs do not have support for PXE in generation 1 virtual machines. The use of virtual machines in System Center Configuration Manager (SCCM) and Windows Deployment Services (WDS) for developing and testing OS deployment is common, so you will find yourself using legacy vNICs quite a bit if using generation 1 virtual machines.

Legacy Network Adapter settings in a generation 1 virtual machine.
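Adding a legacy network adapter can also be done from PowerShell. A minimal sketch (the VM and switch names are assumptions; the VM must be a generation 1 VM and powered off):

```powershell
# Add a legacy (emulated) network adapter to a generation 1 VM,
# e.g. for PXE boot or an unenlightened guest OS.
Add-VMNetworkAdapter -VMName "VM01" -IsLegacy $true -SwitchName "External1"
```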

Hyper-V uses synthetic network adapters by default for a reason: they offer more functionality and better performance. Legacy network adapters are less efficient, causing more context switches between kernel mode and user mode on the host processor.

Generation 2 Virtual Machines

The generation 2 virtual machine was added in Windows Server 2012 R2 (WS2012 R2) Hyper-V to give us a new virtual machine virtual hardware specification that was legacy-free. It should therefore come as no surprise that generation 2 virtual machines do not offer legacy network adapters.

Generation 2 virtual machines only support synthetic network adapters. You can have eight of these efficient vNICs in a single generation 2 virtual machine.

Generation 2 virtual machine with many network adapters.

There is no need for the legacy network adapter in generation 2 virtual hardware. Remember that generation 2 virtual machines only support 64-bit editions of Windows 8 or later, and Windows Server 2012 (WS2012) or later. That means we don’t have an issue of enlightenment. Thanks to the new virtual hardware specification, Microsoft was able to add PXE functionality to generation 2 synthetic vNICs.

Posted 06/08/2014 by Petri in VMware


Creating a NIC Team and Virtual Switch for Converged Networks   1 comment


A common solution for creating a converged network for Hyper-V is to aggregate links using a Windows Server 2012 (WS2012) or Windows Server 2012 R2 (WS2012 R2) NIC team, such as in the below diagram. We will look at how you can do this in Windows Server and then connect a virtual switch that is ready for Quality of Service (QoS) to guarantee minimum levels of service to the various networks of a Hyper-V cluster.

What Is a NIC Team?

A NIC team gives us load balancing and failover (LBFO) through link aggregation:

  • Load Balancing: Traffic is balanced across each of the physical NICs that make up the team (team members) in the NIC team. The method of balancing depends on the version of Windows Server and whether the NIC team is being used for dense virtualization or not.
  • Failover: The team members of a NIC team are fault tolerant. If a team member fails or loses connectivity then the traffic of that team member will be switched automatically to the other team member(s).

NIC teams have two types of settings. The first is the teaming mode:

  • Switch Independent: The NIC team is configured independently of the top-of-rack (TOR) switches. It allows you to use a single (physical or stack) or multiple independent TOR switches.
  • Static Teaming (switch dependent): With this inflexible design, each switch port connected to a team member must be configured for that team member.
  • LACP (switch dependent): Switch ports must be configured to use LACP, which allows the switch to automatically identify the team member connections that the NIC team advertises.

Load balancing configures how outbound traffic is spread across the NIC team. WS2012 has two options (with sub-options) and WS2012 R2 has three options (with sub-options):

  • Address Hash: This method hashes the destinations of each outbound packet and uses the result to decide which team member the traffic should traverse. This option is normally used for NIC teams that will not be used by a Hyper-V virtual switch to connect to a physical network.
  • Hyper-V Port: Each virtual NIC (management OS or virtual machine) is associated at startup with a team member using round robin. That means a virtual NIC will continue to use the same single team member unless there is a restart. You still get link aggregation because the set of virtual NICs is spread across the team members in the NIC team.
  • Dynamic: This is a new option in WS2012 R2. Traffic, even from a single virtual NIC, is spread fairly evenly across all team members.

Note that inbound traffic flow is determined by the TOR switch and is dependent upon the teaming mode and load balancing setting.

Recommended Settings

We’ll keep things simple here, because you can write quite a bit about all the possible niche scenarios of NIC teaming. The teaming mode will often be determined by the network administrator. Switch independent is usually the simplest and best option.

If a NIC team will be used by a virtual switch, then:

  • The NIC team should be used by that single virtual switch only. Don’t try to get clever: one NIC team = one virtual switch.
  • Do not assign VLAN tags to the single (only) team interface that is created for the NIC team.
  • On WS2012, load balancing will normally be set to Hyper-V Port.
  • On WS2012 R2, it looks like Dynamic (the default option) will be the best way forward because it can spread outbound traffic from a single virtual NIC across team members, unlike Hyper-V port.

Creating the NIC Team

Note: If you want to implement the VMM Logical Switch or use Windows/Hyper-V Network Virtualization then you should use VMM to deploy your host networking instead of deploying it directly on the host yourself.

There are countless examples of creating a WS2012 NIC team on the Internet using LBFOADMIN.EXE (the tool that is opened from a shortcut in Server Manager). Scripting is a better option, because a script will allow you to configure the entire networking stack of your hosts, consistently across each computer, especially if you have servers that support Consistent Device Naming (CDN). Scripting is also more time efficient: Simply write it once and run it in seconds on each new host. The following PowerShell examples require that the Hyper-V role is already enabled on your host.

The following PowerShell snippet will:

  • Rename the CDN NICs to give them role-based names. Change the names to suit your servers. You should manually name the NICs if you do not have CDN-enabled hosts.
  • Team the NICs in a switch independent team with Dynamic load balancing.

Rename-NetAdapter "Slot 2" -NewName VM1

Rename-NetAdapter "Slot 2 2" -NewName VM2

New-NetLbfoTeam -Name ConvergedNetTeam -TeamMembers VM1,VM2 -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

Converged Networks and Creating the Virtual Switch

In the converged networks scenario you should create the virtual switch using PowerShell. This allows you to enable and configure QoS. The following PowerShell snippet will:

  • Create a virtual switch with weight-based QoS enabled and connect it to the new NIC team
  • Reserve a share of bandwidth for any virtual NIC that does not have an assigned QoS policy. This share is assigned a weight of 50. That would be 50% if the total weight assigned to all items was 100.

New-VMSwitch "ConvergedNetSwitch" -NetAdapterName "ConvergedNetTeam" -AllowManagementOS 0 -MinimumBandwidthMode Weight

Set-VMSwitch "ConvergedNetSwitch" -DefaultFlowMinimumBandwidthWeight 50
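With weight-based QoS enabled on the switch, the next step in a converged design is usually to create management OS vNICs and give each one a weight. A hedged sketch of that step (the vNIC name and weight are illustrative, not from the article):

```powershell
# Create a management OS vNIC on the converged switch for Live Migration
# and assign it a minimum bandwidth weight relative to the other flows.
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedNetSwitch"
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 40
```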

What Has Been Accomplished in the Design

Running these PowerShell cmdlets will create a NIC team that is ready for a Hyper-V virtual switch, as well as a virtual switch that is ready for Hyper-V converged networking.

Posted 06/08/2014 by Petri in VMware