Review: Microsoft takes on TensorFlow

Microsoft Cognitive Toolkit is fast and easy to use, but a little wet behind the ears


Like Google, Microsoft has been differentiating its products by adding machine learning features. In the case of Cortana, those features are speech recognition and language parsing. In the case of Bing, speech recognition and language parsing are joined by image recognition. Google’s underlying machine learning technology is TensorFlow. Microsoft’s is the Cognitive Toolkit. 

Both TensorFlow and Cognitive Toolkit have been released to open source. Both are complex frameworks that implement many neural network and deep learning algorithms. Both present challenges to developers new to the area. Cognitive Toolkit has recently become much easier to install and deploy, thanks to an automatic installation script. Cognitive Toolkit may be a little easier to use than TensorFlow right now, but that is balanced by TensorFlow’s wider applicability.

The Microsoft Cognitive Toolkit (formerly known as CNTK, the Computational Network Toolkit) is a unified deep learning toolkit that describes neural networks as a series of computational steps via a directed graph. The new version, CNTK v.2.0 Beta 1, can now be used as a library with new C++ and Python APIs. It retains its use of BrainScript as its own language for configuring models. The CNTK core libraries are written in C++.

The Python API in particular helps bring the Cognitive Toolkit to the mainstream deep learning researchers who write Python. The API contains abstractions for model definition and compute, learning algorithms, data reading, and distributed training. As a supplement to the Python API, CNTK 2 has new Python examples and tutorials, along with support for Google protocol buffer serialization. The tutorials are implemented as Jupyter notebooks.

CNTK 2 supports the Fast R-CNN algorithm, an object-detection algorithm proposed by Ross Girshick in 2015. Fast R-CNN builds on deep convolutional networks and adds a region of interest pooling scheme that allows it to reuse the computations from the convolutional layers.
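To make the pooling idea concrete, here is a framework-free sketch (plain Python, not CNTK's implementation) of region-of-interest max pooling: each region on the shared feature map is divided into a fixed grid of bins, and the maximum activation in each bin is kept, so every region yields the same fixed-size output that can feed the classifier layers.

```python
def roi_max_pool(feature_map, roi, out_h, out_w):
    """Max-pool one region of interest to a fixed out_h x out_w grid.

    feature_map: 2D list of conv-layer activations
    roi: (top, left, height, width) of the region on the feature map
    """
    top, left, height, width = roi
    pooled = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            # Bin boundaries, covering the region as evenly as possible.
            y0 = top + (i * height) // out_h
            y1 = top + ((i + 1) * height + out_h - 1) // out_h
            x0 = left + (j * width) // out_w
            x1 = left + ((j + 1) * width + out_w - 1) // out_w
            row.append(max(feature_map[y][x]
                           for y in range(y0, y1)
                           for x in range(x0, x1)))
        pooled.append(row)
    return pooled

# A 4x4 region pooled down to a fixed 2x2 output:
fmap = [[1, 2, 3, 4],
        [5, 6, 7, 8],
        [9, 10, 11, 12],
        [13, 14, 15, 16]]
print(roi_max_pool(fmap, (0, 0, 4, 4), 2, 2))  # [[6, 8], [14, 16]]
```

Because every proposed region is pooled from the same feature map, the expensive convolutional pass runs once per image rather than once per region.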

CNTK 2 sports improvements in the CNTK evaluation library, including the use of the CNTK APIs, as well as support for multiple threads and evaluation on a GPU device. The evaluation library is used after training is complete.

Microsoft Cognitive Toolkit features

The Cognitive Toolkit retains all of CNTK version 1’s features and adds the features I discussed above. It is still production quality, open source, multimachine, multi-GPU, and highly efficient for neural network training to recognize and classify speech, images, and text, and it still scales from CPUs to GPUs to clusters. The Cognitive Toolkit is the underlying technology for Cortana, Skype live translation, Bing, and some Xbox features.

CNTK components can handle multidimensional dense or sparse data from Python, C++, or BrainScript. The Cognitive Toolkit includes a wide variety of neural network types: FeedForward (FFN), Convolutional (CNN), Recurrent/Long Short Term Memory (RNN/LSTM), Batch normalization, and Sequence-to-Sequence with attention, for starters. It supports reinforcement learning, generative adversarial networks, supervised and unsupervised learning, automatic hyperparameter tuning, and the ability to add new, user-defined, core components on the GPU from Python. It is able to do parallelism with accuracy on multiple GPUs and machines, and it can fit even the largest models into GPU memory.

[Screenshot: the CNTK download page]

The Cognitive Toolkit automatic binary download with GPU and 1-bit Stochastic Gradient Descent (SGD) support requires you to first accept the licenses for the Toolkit, the 1-bit SGD component, two CUDA components, OpenCV, zlib, and libzip. After that you mostly have to follow the directions. It all takes about 20 minutes.
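The 1-bit SGD component is worth a word of explanation. The idea, from Seide et al., is that before gradients are exchanged between machines, each value is quantized to a single bit (its sign, at a shared magnitude), and the quantization error is carried over into the next minibatch so that nothing is lost on average. A toy sketch of that scheme, in plain Python rather than the CNTK internals:

```python
def one_bit_quantize(grads, residual):
    """Quantize each gradient to one bit (sign at a shared magnitude),
    folding in the error carried over from the previous minibatch."""
    adjusted = [g + r for g, r in zip(grads, residual)]
    scale = sum(abs(a) for a in adjusted) / len(adjusted)  # shared magnitude
    quantized = [scale if a >= 0 else -scale for a in adjusted]
    # The part we failed to transmit is fed back next time (error feedback).
    new_residual = [a - q for a, q in zip(adjusted, quantized)]
    return quantized, new_residual

grads = [0.5, -0.2, 0.1, -0.8]
q, res = one_bit_quantize(grads, [0.0] * 4)
print(q)    # signs of the gradients, all at one shared magnitude
print(res)  # the quantization error carried into the next minibatch
```

Sending one bit per gradient value instead of 32 is what makes data-parallel training across machines practical without saturating the network.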

The APIs support defining networks, learners, readers, training, and evaluation from Python, C++, and BrainScript. They also support evaluation with C#. The Python API interoperates with NumPy and comes with a high-level layers library that enables concise advanced neural networks definition, including recurrences. The toolkit supports representation of recurrent models in symbolic form as cycles in the neural network instead of requiring static unrolling of the recurrence steps.
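To see what "static unrolling" means, here is a recurrence h_t = tanh(w·x_t + u·h_{t-1}) stepped explicitly over a sequence in plain Python (this is an illustration of the concept, not the CNTK API). Frameworks that require unrolling effectively stamp out one copy of this cell per time step; CNTK instead lets you declare the cycle symbolically in the graph, so variable-length sequences need no fixed step count.

```python
import math

def unrolled_rnn(xs, w, u, h0=0.0):
    """Apply a one-unit recurrent cell step by step over the sequence xs."""
    h = h0
    states = []
    for x in xs:          # one explicit copy of the cell per time step
        h = math.tanh(w * x + u * h)
        states.append(h)
    return states

print(unrolled_rnn([1.0, 0.5, -0.5], w=0.8, u=0.3))
```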

In theory you can train CNTK 2 models on Azure networks and GPUs. GPU support requires the N-series family of Azure Virtual Machines, which are not enabled in all accounts, as this series is still in preview. I had to ask for GPU support for my account; it took about a day to come through, although your request might not be answered at the same speed.

Installing Microsoft Cognitive Toolkit

CNTK 2 has new automated installation procedures for Windows, Linux, and Linux Docker containers. The Windows binary installation, which took me about 20 minutes on a local computer, was much easier and faster than the previous manual installations, despite the repeated need to approve steps for security purposes.

I wanted to test the Microsoft Cognitive Toolkit with a GPU if possible. I thought that CPU-only training of deep networks would probably be slow, based on previous work with Google TensorFlow.

Unfortunately, the one Windows box I have with a CUDA-compatible GPU is about six years old, and CNTK 2 didn’t recognize its Nvidia GeForce GTX 260 as being powerful enough to use. That didn’t surprise me. Other neural network toolkits, such as TensorFlow, expect at least a GeForce GTX 650. My mid-2012 MacBook Pro has a GTX 650M, but CNTK 2 doesn’t currently support MacOS at all, and VMware Fusion doesn’t currently seem to pass the use of the GPU through to Windows or Linux VMs. (I tried.) That left me facing CPU-only training times.

[Screenshot: the CNTK 2 binary installation]

After you unzip the first download, the CNTK 2 binary installation requires you to adjust your machine’s script execution policy as an Administrator, then install from a standard PowerShell. Unlike on my first try, you should unzip to c:/local/ if possible; that makes it more convenient later to activate the Anaconda CNTK-PY34 environment.

Enter Azure. Once the Azure team granted me permission, I created an NC6 VM with six CPU cores, 56GB of RAM, a 340GB disk, and one Tesla K80 GPU (half a K80 board) with 2,496 processor cores, running Windows Server 2012 R2. Windows Server 2016 TP5 and Ubuntu 16.04 LTS were also available.

Given that the N-series VMs are still in preview, I needed to create the actual VM from a command-line interface, although I was able to use the Azure console to create the resources needed by the VM prior to its creation and manage the VM after creation. The NC6 VM costs 66 cents per hour, and Azure VMs are only billable while they are allocated. You can stop and de-allocate an Azure VM from the console, then restart it when you need it.
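That billing model rewards discipline about de-allocating. A back-of-the-envelope calculation, using the 66-cents-per-hour NC6 rate quoted above (the hypothetical usage patterns are mine):

```python
RATE = 0.66  # dollars per hour for an NC6, the rate at the time of this review

always_on = RATE * 24 * 30   # VM left allocated for a 30-day month
work_hours = RATE * 8 * 22   # de-allocated outside an 8-hour day, 22 workdays
print(f"${always_on:.2f} vs ${work_hours:.2f} per month")  # $475.20 vs $116.16
```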

Although it took me more than an hour of trial and error to figure out the correct command line for creating the VM, it only took a few minutes to fire it up. Once I had connected to the VM and used its locked-down version of IE to download Chrome, I was able to download and install the CNTK 2 Windows binary in about 10 minutes.

[Screenshot: the Anaconda CNTK-PY34 environment]

To run Python Cognitive Toolkit examples, open a command prompt, run your CNTK-PY34 script, navigate to the bindings\python\examples directory of your CNTK repository, and run your chosen examples from the Python command line. The FeedForwardNet sample completes its first epoch in a few seconds, even on an old Athlon.

A full K80 board often improves training speeds over CPU-only processing by a factor of about 10; a half-board should give a factor of about five. I saw roughly a threefold improvement for MNIST feed-forward training, but MNIST digit identification is a relatively tractable problem, and feed-forward classification is a relatively simple algorithm. CIFAR-10 should be a better test of the speed-up from the GPU, but I don’t think it’s fair to do rigorous speed benchmarks on beta products, so I’ll wait until the release version of CNTK 2 to run that comparison.

Running Microsoft Cognitive Toolkit

Precisely how you run CNTK 2 depends on which language (Python, C++, or BrainScript) you are using, whether you are running a Python Jupyter notebook, and whether you are training a network or doing evaluations. The figure above shows how to activate the Anaconda environment for Microsoft Cognitive Toolkit and run a command-line Python example from the repository.

Running Jupyter notebooks from within the activated environment is simply a matter of navigating to the correct directory and running jupyter notebook. The server will fire up in the console, and the notebook directory will open in your default browser, as shown below.

Note that CNTK needs an exclusive lock on a GPU; it won’t share the GPU with other processes. If it can’t get a lock on the GPU, it will generate a message and use the CPU.

[Screenshot: the CNTK 101 Logistic Regression tutorial notebook]

Several of the CNTK 2/Microsoft Cognitive Toolkit tutorials are supplied as Jupyter notebooks. The figure shows the visualizations plotted for the training of the Logistic Regression tutorial.

Learning Microsoft Cognitive Toolkit

Microsoft has provided seven tutorials and about 30 models for the Microsoft Cognitive Toolkit. It’s clear from the few days I spent with the new version of the toolkit that its core is close to parity with Google TensorFlow's for deep learning and for the Python API, although TensorFlow is more widely applicable and better fleshed out at this point.

I’m afraid the Cognitive Toolkit learning materials still need work. I see some of the articles in the repository improving on a daily basis, which is not surprising for a beta release.

One of the tutorials is a guide to embedding a trained CNTK model into a C# Web API and deploying it on Azure. Unfortunately, the tutorial uses the CNTK NuGet packages, which were removed for CNTK 2 beta 1. Either the NuGet packages will be updated and added back to a future release, or the tutorial will be revised.

The other six tutorials use Python; all but one are also available as BrainScript recipes. Comparing the Python and BrainScript versions of the same tutorial makes me appreciate both the simplicity of BrainScript and the power and flexibility of combining the CNTK 2 library with other Python libraries for scientific computing and graphics.

The other tutorials cover using logistic regression and a feed-forward network to perform classification on synthetic data; preparing MNIST OCR data and analyzing it with a feed-forward network; preparing CIFAR-10 image data and analyzing it with a ResNet (deep residual network) convolutional classifier; and analyzing ATIS air traffic information with a language understanding long short-term memory model.
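For a sense of what the first of those tutorials covers, here is a framework-free version of the same exercise: logistic regression on synthetic data, trained by gradient descent. (The actual tutorial uses the CNTK Python API; this plain-Python sketch only shows the underlying math.)

```python
import math
import random

random.seed(0)
# Synthetic 1-D data: class 0 clustered near -1, class 1 near +1.
data = [(random.gauss(-1, 0.5), 0) for _ in range(50)] + \
       [(random.gauss(+1, 0.5), 1) for _ in range(50)]

w, b, lr = 0.0, 0.0, 0.1
for _ in range(200):                                # full-batch gradient descent
    gw = gb = 0.0
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))    # sigmoid prediction
        gw += (p - y) * x                           # cross-entropy gradient
        gb += (p - y)
    w -= lr * gw / len(data)
    b -= lr * gb / len(data)

correct = sum((1.0 / (1.0 + math.exp(-(w * x + b))) > 0.5) == (y == 1)
              for x, y in data)
print(f"training accuracy: {correct}/{len(data)}")
```

The CNTK version of this exercise expresses the same model as a computational graph and hands the gradient computation to the toolkit, which is the whole point of the framework.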

Twice as nice

The Microsoft Cognitive Toolkit is clearly improved over the earlier versions of CNTK, not least in its support for Python, the mainstream language for neural network research. The current release is fairly designated Beta 1, however. The documentation is not all in sync with the code, bug fixes appear in the repository on a daily basis, and useful features from CNTK 1 have been removed, at least temporarily.

The case for using the Microsoft Cognitive Toolkit is strengthened by the availability of Azure NC-series VMs with Nvidia Tesla K80 GPUs for training models at scale. It is also supported by evaluation libraries for trained models that you can integrate with web applications and host in the cloud if you wish.

I also like the new automated deployment options for Windows and Linux. On the other hand, it can’t be that hard to expand from Linux support to MacOS support. I wish the “new” open-source-friendly, system-agnostic Microsoft would actually do so.


Cost: Free open source.

Platform: 64-bit Windows (Windows 8.1 and Windows Server 2012 R2 have been tested extensively); 64-bit Linux (Ubuntu 14.04 LTS has been tested extensively); Docker on 64-bit Linux. The standard production Windows build environment requires Microsoft Visual Studio 2013 (not 2015) with Update 5; the production Linux build environment requires g++ or another C++ compiler (GNU C++ 4.8.4 has been tested). Installation on Azure is supported for Windows and Linux VMs and Linux Docker containers. Supports recent Nvidia GPUs if available.

InfoWorld Scorecard: Microsoft Cognitive Toolkit v2.0 Beta 1
  • Models and algorithms (25%): 8
  • Ease of development (25%): 9
  • Documentation (20%): 8
  • Performance (20%): 10
  • Ease of deployment (10%): 9
  • Overall Score (100%): 8.8

This story, "Review: Microsoft takes on TensorFlow" was originally published by InfoWorld.

At a Glance
  • Microsoft’s Cognitive Toolkit is an open source, unified deep learning toolkit with new C++ and Python APIs.

Pros
    • Good variety of models and algorithms
    • Excellent performance on hardware with GPUs
    • Excellent support for Python
    • Declarative BrainScript neural network configuration language
    • Automated deployment available for Windows and Ubuntu Linux

Cons
    • Documentation is in flux
    • No support for R or Lua
    • No Macintosh support

Copyright © 2016 IDG Communications, Inc.
