Review: Microsoft Windows Server 2016 steps up security, cloud support

The new OS bakes in advanced features for security and software-defined networking


The verdict on Nano Server

From my point of view, Nano Server is very interesting from a technical perspective, but I think it is too early to call in terms of popular adoption. To me, Nano Server is a solution for the largest enterprises that are already all-in on the cloud and hyperconvergence and want to take the next step to achieve higher density and better scale. Or it's for very small startups that have already bought into the DevOps lifecycle and containers and are searching for the smallest self-contained operating system they can get their hands on to make deployment easier.

Most of the companies I am familiar with will find Nano Server too limited in its current form to make a real difference yet. But the savings in size and attack surface that the major refactoring effort afforded Nano Server portend powerful, technically astute things for Windows Server in general in the years to come. In short, watch this space, because in 10 years I suspect we'll all be running Nano Server-like OSes.

Software-defined networking updates

One of the areas of real change within Windows Server 2016 is the improvement to its software-defined networking capabilities. What does that mean, you ask? Essentially, software-defined networking attempts to do for cables, routers, switches, and other network gear what the hypervisor did for physical computers -- virtualize them and, specifically, abstract away the actual physical layer. This means that connections and networking configurations can be changed on the fly, in an automated way, without physically restringing cable or moving network appliances around the data center.
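To make this concrete, here's a trivial sketch using the Hyper-V cmdlets that ship with Windows Server (the switch and VM names are made up). An entire switch is created, rewired, and destroyed purely in software:

    # Create a software-defined switch -- no physical hardware involved
    New-VMSwitch -Name "Tenant-A-Switch" -SwitchType Internal

    # Re-home a VM's network adapter to the new switch on the fly
    Connect-VMNetworkAdapter -VMName "Web01" -SwitchName "Tenant-A-Switch"

    # Tear it down just as quickly when it's no longer needed
    Remove-VMSwitch -Name "Tenant-A-Switch" -Force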

This is where some experience with the cloud, and in particular Microsoft Azure, will help. When you create resources within your Azure subscription, you can create them attached to various virtual networks. These networks can be private -- for example, connected only to the virtual machines and resources in Azure and not publicly accessible. Or they can be public, and you can even plug in virtual appliances like routers and load balancers within your virtual network.

It is a full network, but one you configure and define through software, and one you can easily change through software as well, either manually or via automated scripting, such as with PowerShell. These virtual networks are essentially sandboxes for Azure virtual machines to run in, perfectly secured from outside control until you specifically allow it.
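To give you a flavor of what "configure and define through software" looks like in practice, here's a minimal sketch using the AzureRM PowerShell module of that era (the resource names and address ranges are placeholders):

    # Sign in, then define a subnet for back-end VMs
    Login-AzureRmAccount
    $subnet = New-AzureRmVirtualNetworkSubnetConfig -Name "backend" `
        -AddressPrefix "10.0.1.0/24"

    # Create the virtual network; nothing in it is publicly reachable
    # until you explicitly attach a public IP or a gateway
    New-AzureRmVirtualNetwork -Name "corp-vnet" -ResourceGroupName "rg-demo" `
        -Location "eastus" -AddressPrefix "10.0.0.0/16" -Subnet $subnet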

All of that SDN capability in Azure is essentially baked into Windows Server 2016 now: The product virtualizes the entire customer network, bringing Azure-style agility on premises. This includes features to virtualize switching and routing, load balancers, firewalls, edge gateways, and other physical appliances. What's specifically new in 2016 is support for VXLAN encapsulation -- think of it as VLAN tagging on steroids, designed especially for converged networks -- and the provisioning of policies using the industry-standard OVSDB protocol.
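One practical wrinkle worth knowing about VXLAN: Because every tenant frame gets wrapped in outer headers, the physical network underneath needs a larger MTU to carry it without fragmenting. The arithmetic is simple:

    # VXLAN wraps each tenant frame in outer headers:
    # outer Ethernet (14) + outer IPv4 (20) + UDP (8) + VXLAN (8) bytes
    $vxlanOverhead = 14 + 20 + 8 + 8   # 50 bytes

    # A standard 1500-byte tenant MTU therefore needs at least this
    # much headroom on the physical network to avoid fragmentation
    $physicalMtu = 1500 + $vxlanOverhead   # 1550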

The network controller

It starts with the network controller, a central point of automation within your SDN. The network controller is the brain of the operation and sits in the middle, pushing network states down into the network for enforcement. Logically, there is a management plane, where a user defines the policies to be applied. Those policies flow down to a control plane, which distributes them to the endpoints -- basically the virtual network devices and their respective physical network devices -- and finally to the data plane, the software running on the endpoints themselves that receives and enforces those policies.

The network controller operates on the management and control planes, but unlike a gateway, a proxy, or some other "pass-through" filter, it doesn't sit in the data path. This lets VMs communicate with each other directly, because the policies are enforced at the end nodes themselves.
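Standing up the controller itself is a PowerShell exercise. Here's a condensed, single-node sketch using the NetworkController cmdlets; a production deployment spans multiple nodes and adds certificate-based security, and every name here is hypothetical:

    # Describe the node that will host the network controller
    $node = New-NetworkControllerNodeObject -Name "Node1" `
        -Server "NC01.contoso.com" -FaultDomain "fd:/rack1/host1" `
        -RestInterface "Ethernet"

    # Build the controller cluster, then expose its REST endpoint,
    # which management tools use to push policy down
    Install-NetworkControllerCluster -Node $node -ClusterAuthentication Kerberos
    Install-NetworkController -Node $node -ClusterAuthentication Kerberos `
        -RestIpAddress "10.0.0.100/24"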

Connectivity models for SDN in Windows Server 2016

Windows Server 2016 supports multiple connectivity models. These are intended for companies with a more advanced deployment model -- that is, shops that already have virtual networks in production but want to deploy a tier of their workloads in the cloud for obvious reasons.

For these shops, WS 2016 supports these models:

  • Site-to-site gateways that create an encrypted tunnel over the Internet between the cloud and the corporate network (sketched after this list)
  • Private MPLS WAN-based connectivity for companies that don't want to tunnel over the public Internet
  • Azure ExpressRoute for those with the resources to create a dedicated link between their network and an Azure data center

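For the first of those models, the site-to-site tunnel is configured through the RAS gateway cmdlets in the RemoteAccess module. A rough sketch, with placeholder addresses and a pre-shared key used purely for brevity:

    # Turn the server into a site-to-site VPN gateway
    Install-RemoteAccess -VpnType VpnS2S

    # Define the encrypted IKEv2 tunnel to the cloud edge;
    # 10.2.0.0/24 is the remote subnet, 100 is its route metric
    Add-VpnS2SInterface -Name "CloudGW" -Protocol IKEv2 `
        -Destination "203.0.113.10" -AuthenticationMethod PSKOnly `
        -SharedSecret "ReplaceWithRealSecret" -IPv4Subnet "10.2.0.0/24:100"
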
Customers who are not running Azure can use GRE tunneling to the public cloud provider of their choice. (Generic Routing Encapsulation is basically a standard way to wrap packets inside a point-to-point tunnel.) Within the cloud itself you might have resources that are not network-virtualized, and for those there are forwarding gateways that allow traffic to leave your virtual network and reach them.
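A hedged sketch of that GRE option, assuming the GRE parameters the 2016 RAS gateway cmdlets expose (the addresses and key are placeholders):

    # Create a GRE tunnel to a non-Azure cloud endpoint; the GRE key
    # distinguishes this tunnel from others terminating at the same endpoint
    Add-VpnS2SInterface -GreTunnel -Name "GreToCloud" `
        -Destination "198.51.100.1" -IPv4Subnet "10.3.0.0/24:100" -GreKey 1234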

Cloud load balancer

Also new in Windows Server 2016 is what Microsoft calls a "cloud-optimized load balancer." At its core, this new feature is an all-software load balancer built from the ground up to satisfy the cloud's unique requirements. These include creating and deleting lots of virtual IPs rapidly, on demand, as tenants are added and removed, and scaling out load-balancer instances across a network to eliminate bottlenecks -- dynamically creating more multiplexers through which traffic passes, then removing them as demand subsides. Another use case is the occasional need to load-balance the load balancers themselves, both by balancing across all of the instances and by using multitenancy to support multiple cloud tenants with one virtual load balancer.
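Under the hood, those virtual IPs are REST resources on the network controller, which is what makes creating and deleting them scriptable. The following is schematic only -- the URI is hypothetical, and the payload is heavily abbreviated, since a real load-balancer resource also needs backend pools, probes, and rules:

    # Sketch: create a VIP by PUTting a loadBalancer resource to the
    # network controller's REST endpoint
    $body = @{
        properties = @{
            frontendIPConfigurations = @(
                @{ resourceId = "vip1"
                   properties = @{ privateIPAddress = "10.127.134.5" } }
            )
        }
    } | ConvertTo-Json -Depth 10

    Invoke-RestMethod -Method Put `
        -Uri "https://nc.contoso.com/networking/v1/loadBalancers/lb1" `
        -Body $body -ContentType "application/json" -UseDefaultCredentials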

The new software load balancer satisfies all of these needs, and it features new direct-server-return technology that lets return traffic bypass the multiplexer once a connection has been routed and balanced, avoiding unnecessary load.
