Why you should not get excited over Windows Server on ARM

There is a long string of failures behind these efforts.


The tech world went a little crazy last week over the news that Microsoft was conducting internal tests of Windows Server on ARM processors. Microsoft isn't saying what it plans to do beyond internal use, and with good reason: low-power CPUs have been a failure in the server market.

While the blogosphere has been rife with speculation about what Microsoft might do, such as containers or PaaS, the history of low-end chips in servers is one of market rejection. There have been multiple attempts at both ARM- and Atom-based servers, and none of them has gone anywhere.

For some time there has been an effort to bring ARM to the server, and that meant a lot of work. ARM is designed for embedded use and was, until recently, a 32-bit processor, meaning it was limited to 4GB of addressable memory. No one wants to go back to the bad old days of 4GB servers, where you had almost no headroom once the OS was loaded.

So ARM and its licensees have been playing a frantic game of catch-up, first porting the cores to 64-bit, then building the ecosystem around them for things like memory management, ECC, and other features needed in a high-availability server but not a smartphone. In that time, Intel has only extended its lead, and AMD is finally getting back in the game.

The highest-profile example of ARM for servers was Calxeda, which aimed to make ARM-based server processors. In 2011 it announced plans for a 480-core server, consisting of 120 quad-core ARM Cortex-A9 CPUs. In 2013 it crashed and burned, effectively shutting down, and its assets were acquired by a company called Silver Lining Systems. How often do you hear about them?

HP had an ARM server project called Redstone, part of a bigger effort known as Project Moonshot, launched in 2014. If you look at Moonshot now, the servers are running AMD Opterons, and Hewlett Packard Enterprise no longer talks about the ARM effort.

Finally, there was SeaMicro, which came out with an ultra-dense 10U server packed with Intel Atom processors. It made the same pitch as the ARM vendors: low power draw and better price-performance than Xeons, for applications where you don't need a big honkin' Xeon drawing so much power.

AMD acquired the company in 2012 for $334 million and unceremoniously shut it down in 2015, killing the whole line without even trying to sell it. When I asked why it didn't look for a buyer, AMD said there was no interest -- there was simply no value left in it.

In talking to analysts about these failures, I heard something interesting. The low-power chips had scalability issues, which was to be expected, and that limited their range of applications. But more important, early adopters found that they could simply partition powerful Xeons through virtualization and get the same utility, if not better. So why go with a new, unproven platform when a solid, mature one is available?

Thanks to advances in virtualization software, it became easier and more feasible to take a two- or four-socket server running Xeons with 10, 14, 18, or even 22 cores and partition it just as you would a server running a bunch of Atoms or ARMs -- except that here, if you need the horsepower, you can shut down the partitions and bring the full CPU to bear. And Xeon is not a work in progress the way ARM server technology is. It's mature, solid, and well-supported.
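To illustrate the point, this is roughly what partitioning a big Xeon looks like in practice with a hypervisor such as KVM managed through libvirt. The guest name and sizing below are hypothetical; the sketch pins a small four-core "partition" to specific physical cores, leaving the rest of the chip for other guests:

```xml
<!-- Hypothetical libvirt/KVM guest definition: a 4-core slice of a
     large Xeon host, pinned to physical cores 0-3 so other guests
     can be packed onto the remaining cores. -->
<domain type='kvm'>
  <name>small-partition</name>
  <vcpu placement='static'>4</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='1'/>
    <vcpupin vcpu='2' cpuset='2'/>
    <vcpupin vcpu='3' cpuset='3'/>
  </cputune>
  <!-- memory, OS, and device definitions omitted for brevity -->
</domain>
```

Reclaiming the full CPU is then just a matter of shutting down guests and resizing one of them -- no new hardware required.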

There are some Atom-based servers out there. Supermicro sells a number of servers based on the Atom, but they are for very specific low-end uses, like file and print serving. I suspect Microsoft will quickly realize what Calxeda, HP, and AMD realized.

There is very little market or application for low-end processors on a server.

Copyright © 2017 IDG Communications, Inc.
