How Google made 'zoom and enhance' a reality

You know those shows where the cop looks at a pixelated image of the bad guys and asks the computer tech to zoom in and enhance the image?

Well, it seems the team at Google Brain has seen them too, and wanted to make that actually possible. So, being Google, they did -- with an assist from neural networks. But how does it work?

In IT Blogwatch, we zoom, then we enhance. 

So what is happening? Justin Duino has the background:

Google Brain’s...team has been working on using...neural networks to enhance images, and now, it has been able to bring the infamous “zoom and enhance” abilities from sci-fi to reality...While this isn’t as crystal clear as the fake stuff in the television shows, Google Brain is capable of taking an 8×8 pixelated image and “adding” details that would have been impossible to see before.

Sounds interesting, but how does it work? Rhett Jones has some details:

Neural networks are our best chance...to truly increase the level of detail in a low-resolution image. We’re stuck with the pixel information that a photo contains but deep learning can add detail through what are commonly referred to as “hallucinations.” This...means a piece of software making guesses about an image based on the information it’s learned from other images.
...
Google Brain...recently published the results of their latest progress with “pixel recursive super resolution” and despite the results looking horrifying, they’re extremely impressive.
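To make the "hallucination" idea concrete, here is a minimal sketch in PyTorch of a network that learns to invent detail. Everything in it -- the model, layer sizes, and 32×32 output -- is an illustrative assumption, not Google Brain's actual code: the point is just that such a net, trained on pairs of downsized and original photos, fills in pixels at inference time based on patterns it learned from other images.

```python
import torch
import torch.nn as nn

class TinySuperRes(nn.Module):
    """Toy model: upscales an 8x8 RGB image to 32x32 with learned detail."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            # Each transposed conv doubles spatial resolution: 8 -> 16 -> 32.
            nn.ConvTranspose2d(64, 64, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(64, 3, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x)

model = TinySuperRes()
low_res = torch.rand(1, 3, 8, 8)   # a stand-in for the 8x8 source image
high_res = model(low_res)          # guessed 32x32 output
print(high_res.shape)              # torch.Size([1, 3, 32, 32])
```

Training would minimize a pixel-wise loss between model(downsized(photo)) and the original photo over a large collection; whatever the network learns to fill in is, in effect, the "hallucination."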

But how does the team at Google Brain actually do all this? Sebastian Anthony is in the know:

First...the conditioning network tries to map the 8×8 source image against other high resolution images. It downsizes other high-res images to 8×8 and tries to make a match...second...the prior network uses an implementation of PixelCNN to...add realistic high-resolution details to the 8×8 source image. Basically, the prior network ingests a large number of high-res real images...Then, when the source image is upscaled, it tries to add new pixels that match what it "knows" about that class of image.
...
To create the final...image, the outputs from the two neural networks are mashed together. The end result usually contains the plausible addition of new details.
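The published "pixel recursive super resolution" paper describes that mashing-together as summing the two networks' per-pixel logits before a softmax. Here is a hedged sketch of just that combination step, for a single pixel of a single color channel; the random logit tensors are stand-ins (the real networks are far larger), while the 256-way intensity distribution and logit addition follow the paper's setup.

```python
import torch
import torch.nn.functional as F

# Toy stand-ins for the two networks' outputs for ONE pixel of ONE color
# channel: each network emits a logit per possible 8-bit intensity (0-255).
prior_logits = torch.randn(256)         # PixelCNN: "what faces look like"
conditioning_logits = torch.randn(256)  # mapping from the 8x8 source image

# Adding the logits and softmaxing means both the learned prior and the
# actual source image constrain the final value of each pixel.
probs = F.softmax(prior_logits + conditioning_logits, dim=0)

# Sample an intensity; generation proceeds pixel by pixel, with each
# sampled pixel feeding back into the prior network's context.
intensity = torch.multinomial(probs, num_samples=1).item()
print(intensity)  # e.g., 137
```

Because each pixel is sampled rather than averaged, the output commits to one plausible face instead of a blur -- which is also why the results can look impressive and horrifying at the same time.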

And when does this technology get disseminated to all the local CSI units? Sean Hollister has some bad news:

Unfortunately...Google...tells us this was a "one-off research exploration," and has no current plans to use it...also...Google's computers knew that they were looking at faces...to begin with.

Anything else? Karl Harper sees an unexpected benefit from all this:

So this will help people with pixelated faces integrate into society, right?

Copyright © 2017 IDG Communications, Inc.
