Review: Deep dive into Windows Server 2016

Microsoft delivers a boatload of new virtualization, storage and security features, along with a nod to open source.


Documentation for Docker container use is primitive in the Windows 2016 release, and in looking for what we might have done wrong, we found we weren't alone.

UEFI Linux Support in Hyper-V

Linux running on Generation 2 VMs can now use the Secure Boot option defined in the UEFI specification. Secure Boot was already possible for Windows VMs in previous versions of Hyper-V, but it caused much growling among admins and installers who tried to use it with Linux distros.

We tried UEFI Secure Boot with a Hyper-V-based Ubuntu 16.04 VM and it worked easily. Secure Boot is supported on Red Hat Enterprise Linux 7.0+, SUSE Linux Enterprise Server 12+, Ubuntu 14.04+, and CentOS 7.0+. Linux VMs must be configured to use the Microsoft UEFI Certificate Authority in the VM settings; this can also be set with PowerShell.
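
As a quick sketch, the same setting from PowerShell looks like this (the VM name is ours):

    # Point the VM's Secure Boot at the Microsoft UEFI Certificate Authority template
    Set-VMFirmware -VMName "ubuntu1604" -EnableSecureBoot On -SecureBootTemplate MicrosoftUEFICertificateAuthority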

PowerShell Direct

PowerShell Direct allows PowerShell commands to be run on certain VMs without any network connectivity, using the built-in channel between Hyper-V and its resident VM(s). It is available only from the host the VM is running on and serves as a communications channel to unshielded VMs.

We were prompted to enter credentials; if you are not logged in as a user in the Hyper-V Administrators group, you will not be able to use PowerShell Direct. The lack of subordinated admin use seemed strange to us at first, but we can understand the constraints of mandating a Hyper-V administrator.
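
A minimal sketch of what that looks like in practice, with a VM name of our choosing:

    # Interactive session into the guest over the Hyper-V VMBus; no networking needed
    Enter-PSSession -VMName "testvm01" -Credential (Get-Credential)

    # Or run a one-off command non-interactively
    Invoke-Command -VMName "testvm01" -Credential (Get-Credential) -ScriptBlock { Get-Service }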

For now, the only supported operating systems are Windows 10 and Windows Server 2016; both the host and the guest have the same requirement. PowerShell Direct can be a useful tool for scripting or for accessing a VM when networking is unavailable, but with OS support this limited, the use case is very narrow, unless you upgrade everything.

It also opens up a potential compromise if host Hyper-V credentials are somehow hijacked. That said, we've wondered about this kind of "hole in the sandbox" in any number of use cases; in practice, we've used other means to contact VMs that were having difficulties. The hole can be closed, of course, but that also means checking every cloud instance to ensure it isn't open, along with the 20,000+ other things admins have to do.

Storage updates

Storage Replica can synchronously protect data shares, purportedly with zero data loss after instantiation, according to the docs. We did not test this; doing so would require multiple clusters in separate locations. The purpose of Storage Replica is disaster recovery between sites and more efficient use of multiple data centers. Asynchronous replication is also possible for longer distances or high-latency networks. Replication is continuous, not snapshot/checkpoint based. This feature may be useful for companies with multiple campuses spread over a wide geography.
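
We did not run this ourselves, but for reference, creating a synchronous partnership looks roughly like the sketch below (the server names, replication group names, and volume letters are placeholders of ours):

    # Replicate D: from srv-a to srv-b synchronously, with replication logs on E:
    New-SRPartnership -SourceComputerName "srv-a" -SourceRGName "rg01" `
        -SourceVolumeName "D:" -SourceLogVolumeName "E:" `
        -DestinationComputerName "srv-b" -DestinationRGName "rg02" `
        -DestinationVolumeName "D:" -DestinationLogVolumeName "E:" `
        -ReplicationMode Synchronous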

Storage QoS policies

QoS policies in Server 2016 (Datacenter edition only) can be used in two scenarios, both involving Hyper-V, and all servers must be running Server 2016: one uses a Scale-Out File Server, the other uses Cluster Shared Volumes.

These policies are enabled by default on Cluster Shared Volumes; you don't have to do anything special to turn them on. Modifying them, however, lets you fine-tune your server's storage performance: mitigating noisy-neighbor issues, monitoring end-to-end storage performance, and managing storage I/O per workload. For example, you can set minimum and maximum IOPS on a per-VHD basis (a dedicated policy), or create an aggregated policy that is shared among all the VHDs assigned to it. We didn't test this exhaustively, but we ran some of the commands to confirm that QoS was indeed running on our cluster with Storage Spaces Direct.
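
For illustration, a dedicated policy and an aggregated policy look like this (the policy names and IOPS figures are our own):

    # Dedicated: every VHD assigned to the policy gets its own 100-500 IOPS band
    New-StorageQosPolicy -Name "GoldVM" -PolicyType Dedicated -MinimumIops 100 -MaximumIops 500

    # Aggregated: all assigned VHDs share a single 500 IOPS ceiling
    New-StorageQosPolicy -Name "BulkTier" -PolicyType Aggregated -MaximumIops 500

    # Confirm the policies are active on the cluster
    Get-StorageQosPolicy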

Storage Spaces Direct (S2D)

Storage Spaces Direct is part of the clustering technology in Windows Server 2016. S2D uses Windows 2016 Datacenter edition servers (even in the Nano Server incarnation, but watch the licensing costs!) with local storage (i.e., JBOD availability) to build highly available, scalable software-defined storage using SMB3, clustered file systems, and failover clustering. The storage must be clustered using the failover role and its clustering file system.


The system requirements for Storage Spaces Direct are pretty high: you'll need 128GB of RAM, two SSDs, and at least four HDDs configured in a non-RAID setup, plus an additional HDD for the boot drive. You'll also need at least two of these servers set up in a cluster, and 10GbE ports on each machine are recommended. (See the complete hardware requirements.)

We ordered up and used two Lenovo x3650 M5 ThinkServers with 128GB of RAM, two 240GB SSDs, and six 300GB HDDs each, meeting the requirements, to test the theory. We could set different storage tiers; by default, if SSDs and HDDs/conventional drives are both present, S2D automagically creates a performance tier and a capacity tier for hybrid storage.

The Lenovo servers we used were set up in a failover cluster, a mandatory step; that means a minimum of two servers, although they needn't be identical. We made sure the extra SSDs and HDDs were online and initialized in Disk Management but otherwise unallocated and empty. Our installation and use of this failover cluster (remember the madness of Microsoft's Wolfpack?) was pretty painless. We then ran the Enable-ClusterS2D PowerShell command on one of the clustered nodes, and it added all the available unused storage from all the server nodes to a pool of disks.
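
The sequence ran roughly along these lines (the cluster and node names here are ours):

    # Build the two-node failover cluster without claiming any storage yet
    New-Cluster -Name "s2dcluster" -Node "lenovo1","lenovo2" -NoStorage

    # Sweep all unused local disks on every node into a single S2D pool
    Enable-ClusterStorageSpacesDirect   # Enable-ClusterS2D is the shorter alias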

We could see all the disks it used for this pool in the Failover Cluster Manager. After that, one must create one or more volumes. They can be created in the GUI, but the GUI didn’t allow us to set the filesystem or storage tiers.

We created a volume using the Resilient File System (ReFS). This is how to create a volume in ReFS format as a Cluster Shared Volume (CSV) with one long PowerShell command:

    New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName test2 -FileSystem CSVFS_ReFS -StorageTierFriendlyNames Capacity -StorageTierSizes 100GB

Once the volume was created, we could use it to create a VM cluster role with the VHD stored in the S2D storage cluster location (in our case, C:\ClusterStorage\Volume1). This storage location is visible to all nodes of the cluster. We successfully created and ran a Server 2016 VM in Hyper-V and live-migrated it between servers easily, and very quickly too, finishing the migration in mere seconds.
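
Clustering and migrating the VM can also be scripted; a sketch using our own names:

    # Turn the existing VM into a clustered role
    Add-ClusterVirtualMachineRole -VMName "testvm2016"

    # Live-migrate it to the other node
    Move-ClusterVirtualMachineRole -Name "testvm2016" -Node "lenovo2" -MigrationType Live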

Failover clustering – new and improved

We found many improvements to failover clustering in Server 2016. One of the most interesting is Cluster Operating System Rolling Upgrade: if you already have Windows Server 2012 R2 cluster nodes, you can upgrade the cluster to Windows Server 2016 without stopping Hyper-V or Scale-Out File Server workloads. Another interesting feature is the Cloud Witness, which uses Azure storage for the quorum witness (failover logic) instead of a witness disk. Also notable is the VM load-balancing feature, which helps even the load by checking which nodes are busy and automatically live-migrating VMs to other nodes.
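
Pointing the quorum at a Cloud Witness is a one-liner (the storage account name and key here are placeholders):

    Set-ClusterQuorum -CloudWitness -AccountName "mystorageacct" -AccessKey "<azure-storage-key>"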

New Windows 2016 security measures

Credential Guard

In previous versions of Windows, credentials and other secrets were kept by the Local Security Authority (LSA). With the new Credential Guard feature, the items that used to live in the LSA are protected by a layer of virtualization-based security. This prevents "pass the hash" and "pass the ticket" attacks by insulating the secrets, such as NTLM password hashes and Kerberos ticket-granting tickets, so that only privileged system software can acquire them.

Derived domain credentials managed by Windows services run in the virtualized, protected environment, which the rest of the OS cannot access. The feature can be managed using Group Policy, WMI, PowerShell, or even a command prompt, and it also works in Windows 10 (Enterprise or Education) and Windows IoT Enterprise. However, there are certain baseline hardware requirements: a 64-bit CPU with virtualization extensions (Intel VT-x or AMD-V) and SLAT enabled, TPM 1.2 or 2.0, and UEFI 2.3.1c or later with Secure Boot.
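
A quick way to verify the feature is to query the Device Guard WMI class; in this sketch, a 1 in SecurityServicesRunning indicates Credential Guard is active:

    # Values: 1 = Credential Guard, 2 = HVCI (hypervisor-enforced code integrity)
    Get-CimInstance -ClassName Win32_DeviceGuard -Namespace root\Microsoft\Windows\DeviceGuard |
        Select-Object SecurityServicesConfigured, SecurityServicesRunning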

Just Enough Admin (JEA)

JEA is a PowerShell-based security kit (included with PowerShell 5 and up) that limits admins' privileges to just enough for them to do their jobs. It allows users to be specifically authorized to run certain commands on remote machines, with logging. It runs on Windows 10, Server 2016, and older OSs that have the Windows Management Framework updates. JEA combined with Just in Time administration, introduced in Server 2012 R2 and part of Microsoft Identity Manager, lets you limit an admin in both time and capability.
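
A minimal JEA sketch (the paths, group name, and cmdlet whitelist are all ours): define a role capability exposing only a few cmdlets, reference it from a restricted session configuration, and register the endpoint:

    # Role capability: the only cmdlets a connecting user may run
    New-PSRoleCapabilityFile -Path "C:\JEA\Maintenance.psrc" -VisibleCmdlets 'Get-Service','Restart-Service'

    # Session configuration mapping an AD group to that role, with transcript logging
    New-PSSessionConfigurationFile -Path "C:\JEA\Maintenance.pssc" -SessionType RestrictedRemoteServer `
        -TranscriptDirectory "C:\JEA\Transcripts" `
        -RoleDefinitions @{ "CONTOSO\Maintenance" = @{ RoleCapabilityFiles = "C:\JEA\Maintenance.psrc" } }

    # Publish the endpoint; users connect with Enter-PSSession -ConfigurationName Maintenance
    Register-PSSessionConfiguration -Name "Maintenance" -Path "C:\JEA\Maintenance.pssc"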

Network Controller (SDN, software-defined networking)

The Network Controller allows for a more centralized approach to network management. It provides two APIs: one lets the Network Controller communicate with the network, and the other lets you contact the Network Controller directly. The Network Controller role can run in both domain and non-domain environments.

Network Controller can be managed with either SCVMM or SCOM; the role lets you configure, monitor, program, and troubleshoot the underlying infrastructure those tools manage. They aren't strictly necessary, though, as we could also use PowerShell commands or the REST API.
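
Installing the role itself is simple, and the NetworkController PowerShell module then carries the configuration cmdlets (a sketch; we did not build out a full controller deployment):

    # Add the Network Controller role to a Server 2016 machine
    Install-WindowsFeature -Name NetworkController -IncludeManagementTools

    # List the cmdlets the NetworkController module provides
    Get-Command -Module NetworkController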

Network Controller works with other parts of the network infrastructure, such as Hyper-V VMs and virtual switches, the data center firewall, RAS gateways, and software load balancers. Because this is a Windows Server review and not an SCVMM or SCOM review, we didn't test this.

Data center firewall

The data center firewall is a new distributed firewall based on network flows and app connectivity rather than on where the workload actually resides. For example, if you migrate a VM from one server to another in the data center, the firewall rules on the destination server are automatically updated to open whatever ports are needed, and routers and switches are reconfigured for that VM. The firewall also protects VMs independent of the guest OS; there is no need to separately configure a firewall in each VM.

This means of VM metadata control is an idea also advanced by VMware to permit high VM portability with a minimum of muss and fuss.

These new security features compete squarely with announcements VMware made at VMworld 2016, which were especially designed to advance a control plane that completely objectifies workloads, turning all their elements (compute, networking, storage, and other characteristics) into an atomic object for purposes of manipulation, movement, storage, management, and control.

Summary

In our working test of Windows 2016, we found it attempting to cover a lot of turf. We saw the new raw edges, but also different thinking about how workload and developer strategy meets the long-installed capital costs that enterprise fixtures represent.

In other words, Windows 2016 serves numerous masters, some of them very well, while others are in a race with a blistering pace of change in developer and rapid-infrastructure-deployment strategies.

Windows Server 2016 has a lot of new and improved features, including attempts to absorb competitive concepts, largely from Linux. We were able to test some of these, but not all; some require particular hardware or setups (including ones that need System Center pieces to work efficiently). Even if you cut the System Center combos from the long list of features in the product announcements, it's still interesting.

Special thanks to Lenovo for loaning two fully equipped, S2D-capable servers for testing.

How we tested

We tested Windows 2016 Server editions in our lab and in our NOC at Expedient in Carmel, Ind. We tested Datacenter, Standard, and Nano Server as native installs, Hyper-V VMs, and VMware 6 VMs on HP Gen4, Gen8, and Gen9 servers, a Lenovo RD460, and the two Lenovo x3650 M5 ThinkServers (128GB of RAM, two 240GB SSDs, six 300GB HDDs), accessed through the NOC's backplane (Extreme Summit-series GbE and 10GbE switches), with an HP MicroServer (an AD controller that also served as a VPN touch point) and clients ranging from Windows 7 through Windows 10, MacOS, and Linux (Ubuntu, Debian, and CentOS).

This story, "Review: Deep dive into Windows Server 2016" was originally published by Network World.

Copyright © 2017 IDG Communications, Inc.
