Review: Windows Server 2016 Technical Preview 4

In November, Microsoft released Windows Server 2016 Technical Preview 4. With the final release due out in the second half of this year, TP4 gives us the latest look at where Microsoft's flagship operating system is heading. There are several interesting spots to look at and some licensing news that is controversial. I put the release through its paces for around a month; here are my observations.

Nested virtualization and containers

Obviously the headline feature of Windows Server 2016 is the support for containers. Put simply, containers are entire deployments of software, including the operating system and application, packaged up nice and neat inside a, well, container, ready to deploy in the cloud. This can be either a public cloud or private cloud, or the container can live on your developer's laptop or in your data center, or at any point in between.

If virtualization abstracted away the physical components of a machine, then containers abstract away the virtual machines. You download a container, run it, and the operating system and apps inside behave as if they had a machine of their own, without caring what actually sits underneath.

One of the big advantages to containers is their natural tendency to work with DevOps-oriented environments, where the developers are the testers and the operators and All Ye Shall Work in Harmonye or Be Fyred. When a developer can configure a virtual environment on his machine, get everything working correctly (including his application), ship it over in a nice neat single package container to the tester, and have that tester unzip and run the container as it is, that makes quick work of deploying new applications and shipping updates to those applications.

It also provides a quick step around the common refrain IT wonks hear from developers whenever their applications get deployed on IT's iron: "Well, it worked on my laptop, so it must be your server!"

The previous version of Windows Server 2016 -- Technical Preview 3 -- introduced support for Docker-based containers. Docker is the largest container platform used today, and provides a management space and support for a variety of operating system platforms, including Windows Server, Azure, Linux, Amazon Web Services and more.

But in this beta release, Microsoft introduces Hyper-V based containers. Hyper-V containers are different from Hyper-V virtual machines; they are best thought of as another virtualization option: a lighter-weight construct, highly isolated from everything else, while Hyper-V virtual machines work the same as they always have.

Hyper-V containers isolate the container's kernel from the operating system running beneath, so they are not susceptible to certain classes of attack. And you can move them around from cloud to data center and back to cloud, as long as all destinations are running Windows Server 2016. They are a great way to run untrusted code, or to set up hosting environments where you cannot be expected to vet every piece of software running on your system.

For more trusted applications and more sharing possibilities, traditional Docker-style containers are more capable.

Much of the container support in TP4 is enabled and managed through PowerShell, which should not be a shock. This new command takes care of the heavy lifting:
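As a rough sketch of what that looks like: in the TP4 preview, setting up a container host was driven by a downloadable script. The script name, download URL and parameters below are as documented in the preview-era quick-start and may well change before release:

```powershell
# Download Microsoft's container host setup script (TP4 quick-start)
wget -Uri https://aka.ms/newcontainerhost -OutFile New-ContainerHost.ps1

# Create a VM that acts as a container host; the -HyperV switch makes it
# host Hyper-V containers (which requires nested virtualization underneath)
.\New-ContainerHost.ps1 -VmName ContainerHost -HyperV
```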


This creates a container host, which is the machine that runs the containers. What is interesting in these scenarios is that virtual machines can be container hosts, which themselves host the containers (which I described above as lightweight virtual machines). Here we achieve nested virtualization: a virtual machine within a virtual machine, an extended abstraction.

Indeed, in Windows Server 2016 this is taken all the way to traditional Hyper-V virtual machines, so a VM can host its own VMs. This is most useful in lab, training and testing environments, where complicated infrastructures can now be emulated (you can put DHCP and DNS in separate virtual machines, all running inside a single host VM, for example).
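In the preview, nested virtualization is a per-VM setting you flip from the physical host. A minimal sketch, assuming a VM named "LabVM" (the name is hypothetical; the VM must be powered off, and the preview also wanted dynamic memory off and MAC spoofing on):

```powershell
# Expose the processor's virtualization extensions to the guest
# so the guest can run Hyper-V itself
Set-VMProcessor -VMName "LabVM" -ExposeVirtualizationExtensions $true

# The preview also requires static memory for the nested host...
Set-VMMemory -VMName "LabVM" -DynamicMemoryEnabled $false

# ...and MAC address spoofing, so nested VMs can reach the network
Get-VMNetworkAdapter -VMName "LabVM" | Set-VMNetworkAdapter -MacAddressSpoofing On
```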

The new container support in TP4 extends the vision of isolation and DevOps into the very core of Windows Server. It's a real asset to many environments where admins or others want to double down on abstracting away technicalities and let DevOps rule the roost.

Storage Spaces Direct

Storage Spaces Direct is brand new to Windows Server 2016 and you can consider it the next step in software-defined storage. This is where you have a bunch of commodity disks attached to commodity hardware and allow high-performance, configurable software to define the actual logical volumes on which data is stored.

As it exists in Windows Server 2012 R2, you can create scale-out file server clusters using Storage Spaces in two tiers; the first tier has all of the Hyper-V hosts that use SMB 3.0 networking to connect to storage. The second tier, the storage tier, is a cluster that attaches via SAS (the industry standard storage interconnect) to shared boxes full of disks, known as shared JBODs. These shared JBODs have Storage Spaces features applied to them so that they appear as a single pool accessible via file shares, and the virtual machine files and disks are stored on this tier.

Did you pick up on how this second tier is multilayered? It has the failover cluster nodes that control the access to the disks in the JBODs, and then the other layer of the JBODs themselves.

Storage Spaces Direct takes this a step further and basically eliminates the need for that SAS layer where the shared JBODs are attached to the cluster. Rather, the cluster itself has internal, directly attached disks that can store the virtual machine files. The disks that are directly attached are pooled using Storage Spaces and mirrored across the nodes in the cluster. Thus, the clustered shared volumes that live on those disks are also available through the cluster on each node.

There are a couple of specific benefits to choosing this option over regular scale-out file servers. For one, you can use either cheap SATA solid state drives or more performant (but more expensive) enterprise-class NVMe solid state drives. Before, you had to use SAS-compatible enterprise drives with no option to use the cheaper stuff for less intensive workloads.

With Storage Spaces Direct, you also get a simpler deployment, because you don't have to deal with that layer of shared disks and enclosures, nor the mess of cabling SAS. To scale or expand, you just add more nodes to your cluster, and the Storage Spaces technology automatically expands the storage to take into account the additional capacity the extra node or nodes provide.
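To give a sense of how simple that deployment is, here is a sketch of the PowerShell flow as documented for the preview (node names, cluster name, and volume sizing are hypothetical placeholders):

```powershell
# Validate and build a cluster from nodes with local, non-shared disks;
# -NoStorage keeps the cluster from grabbing disks as classic shared storage
Test-Cluster -Node Node1,Node2,Node3,Node4
New-Cluster -Name S2DCluster -Node Node1,Node2,Node3,Node4 -NoStorage

# Pool the directly attached disks across all nodes into one Storage Spaces pool
Enable-ClusterStorageSpacesDirect

# Carve a mirrored, cluster-shared volume out of that pool for VM files
New-Volume -StoragePoolFriendlyName S2D* -FriendlyName VMStore `
    -FileSystem CSVFS_ReFS -Size 1TB
```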

On the other hand, for smaller deployments, the Storage Spaces configuration as it exists today with the shared JBODs can scale all the way down to a two-node cluster, which may be more appropriate for a smaller deployment. Both options are valid and supported; in 2016, what you get is the choice of one or the other.

Storage Replica

Another new feature in Windows Server 2016 is Storage Replica. At its most fundamental level, think of Storage Replica as a streaming copy of your data from one cluster or server to another cluster or server, regardless of geographic location. As writes and modifications are made to blocks of data on a volume, Storage Replica synchronously replicates that data and those changes from one cluster to another, from one server to another, or across cluster nodes that are stretched between data centers for better redundancy and fault tolerance.

Storage Replica is like RAID 1 mirroring scaled out to groups of servers, clusters or even groups of clusters. It is a crash-consistent (in that the data stays consistent across the members of the cluster during and after a crash), transactionally aware mirroring feature within a data center. You can also use asynchronous replication across WAN links and the Internet without as high a degree of confidence -- but it's still a reasonable one.

This all can happen over SMB 3, so the transfers are quick, and you can use either standard TCP/IP across a routable network or RDMA on storage networks. You can also take advantage of super-fast RDMA transports such as iWARP and InfiniBand to make these data replications lightning quick. Very useful and very cool.
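Setting up a pairing comes down to a single cmdlet once volumes are in place. A sketch, assuming two servers each with a data volume (D:) and a dedicated log volume (E:); the computer and replication-group names here are hypothetical:

```powershell
# Replicate D: on SR-SRV01 synchronously to D: on SR-SRV02,
# with a log volume (E:) on each side to track in-flight writes
New-SRPartnership -SourceComputerName SR-SRV01 -SourceRGName rg01 `
    -SourceVolumeName D: -SourceLogVolumeName E: `
    -DestinationComputerName SR-SRV02 -DestinationRGName rg02 `
    -DestinationVolumeName D: -DestinationLogVolumeName E:
```

For the WAN scenario mentioned above, the same cmdlet takes a -ReplicationMode Asynchronous switch.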

However, one thing might mar this feature, and that one thing is...

Licensing

Of course, as technically interesting and high quality as this product will end up being, you still have to wrestle with the licensing beast, and Microsoft is tinkering with the formula again. Windows Server 2016 has been moved over to per-core licensing and away from per-machine licensing.

This is a typical response of server software manufacturers, because in essence, hardware has gotten a lot more powerful and is packing more cores per machine than was thought possible just a few short years ago. In 2008 and 2009, when quad-core processors were state of the art, a single license per machine was thought to be good enough.

But now, when you can buy processors with 20 (!) cores in them for a very reasonable price, Microsoft wants to be compensated for running Windows on them. After all, a 20-core single machine is the processing equivalent of five quad-core servers. You can see where the software giant might end up losing some revenue there. All told and on balance, the shift to per-core licensing itself is, if not welcome, at least understandable.

What is not understandable is the drastic increase in price. The issue is that, in Windows Server 2016, all servers have to have a license for two processors with a minimum of eight cores per processor. Yes, even your dual- and quad-core servers have to be licensed for 16 cores, even if they physically cannot support that number of cores.

Microsoft is making a lot of noise that, in the case of Windows Server 2016, licenses for servers that have eight cores or fewer per physical processor will cost the same as they would have under Windows Server 2012 R2. So even with the complexity and added features, you supposedly will not see a giant price increase.

But what about the future? What about Windows Server 2016 R2? What about your tiny servers in branch offices with single- or dual-core processors, for which you have to buy a 16-core license? How will you keep track of this? It is enough to drive you batty.

And the prices themselves? Standard Edition -- which comes with only two Hyper-V virtual machine licenses and does not include the new storage features like Storage Spaces Direct, Storage Replica, Shielded Virtual Machines, the Host Guardian Service and the new network stack -- clocks in at $882. The Datacenter Edition, which includes unlimited Windows Server virtual machines running on the licensed Datacenter host and all of those new features, comes in at a steep $6,155.

Dig into the Datacenter Edition numbers a bit more, and you see that -- although it includes features like scale-out file servers, Storage Spaces Direct and more -- it also makes for a big invoice. The complaint is that pricing reaches the stratosphere with some fairly typical deployments.

IT pro and Microsoft MVP Aidan Finn said it best: "A 12-node [Storage Spaces Direct] cluster, probably with eight to 10 disks each, will require $73,860 of Windows Server Datacenter licensing.... I'm sorry, but I can probably get a SAN cheaper than that, and I certainly can deploy classic" scale-out file servers with JBOD for less than the total cost of ownership of Storage Spaces Direct. (In this case, pricing was figured by multiplying $6,155 x 12 nodes, and that's just for the software, not even the hardware.)

It is early days and the product is not even out. But Microsoft announced this pricing early and so, in my view, it's fair game to criticize it. When your marketing department prices a great solution higher than the solution it's meant to replace -- one that already exists and has been paid for by numerous organizations -- that great solution often dies on the vine.

I hope that doesn't happen here, and that Microsoft revisits either the core minimums, the pricing or perhaps even both.

The last word

Assessing the release overall, I would say that what we see in TP4 is a remarkable evolution of Windows Server into a DevOps, abstracted world where workloads take the focus and the various technicalities that power it all fade a bit. It feels like a natural step forward for Windows Server, especially as server hardware makes its way from every closet on every floor of every building to well-run, orchestrated data centers and clouds.

So that's an A in my grade book from a technical perspective. But from an overall value perspective: Incomplete. It would be a shame if Microsoft did not take a step back from the pricing and licensing changes before the final release of Windows Server 2016. If things stay as they are now, many shops won't join Microsoft with this step forward.

Watch this space, and talk to your Microsoft account reps.

Editor's note: This story was changed on February 3, 2016 to remove any ambiguity surrounding Docker management of Hyper-V containers.

Copyright © 2016 IDG Communications, Inc.
