Will Apple play nice with others to make Siri smarter?

We’re going to hear lots about embedded vision product development during the Embedded Vision Summit, but the first announcement may have implications for Apple's machine learning systems.

Apple and embedded vision

We know Apple is interested in embedded vision and machine learning following its acquisition of PrimeSense and introduction of ARKit.

We also know this because it has already shipped embedded vision features, such as scene and object recognition, in the Photos app.

We keep hearing that Apple’s machine learning teams have convinced the company to let them work with their peers as they develop next-generation machine learning technologies for its products. Given that, a new initiative from the Khronos Group may, or may not, turn out to be significant.

The Khronos Group is a non-profit industry group dedicated to creating and curating open standards to enable graphics, rich media and parallel computation across numerous platforms and devices.

Apple does not control the group, which also includes many of its biggest competitors, but it does have a seat on its board.

That seat is currently occupied by Geoff Stahl, Apple’s director of games and graphics software engineering. Stahl is responsible for putting advanced games and graphics technology inside Apple’s products, according to his bio.

Let’s work together

The Khronos Group today announced that it has engaged Au-Zone Technologies to enable NNEF (Neural Network Exchange Format) files to be used with leading machine learning training frameworks. The move follows the group’s introduction earlier this year of new open-source tools for porting Vulkan applications to Apple's platforms.

“The goal of NNEF is to enable data scientists and engineers to easily transfer trained networks from their chosen training framework into a wide variety of inference engines,” the description reads.

In plain terms, the move should help developers create machine intelligence solutions for embedded vision systems that work across different platforms. (It’s similar in aim to the ONNX standard, with the differences explained here.)

That’s important to developers on every platform, and particularly to Apple as it ramps up its machine learning investments.
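To make the idea concrete, here is a minimal sketch of the inference-engine end of that pipeline on Apple's platforms, where Core ML plays that role today. The model name (SceneClassifier) is hypothetical and stands in for any network trained in an external framework and converted for on-device use; the conversion step itself, whether from NNEF, ONNX, or a framework's native format, happens in offline tooling and isn't shown here.

```swift
import CoreML
import Vision

// Hypothetical converted model: "SceneClassifier.mlmodelc" stands in for any
// network trained elsewhere and converted for on-device inference.
func makeClassificationRequest() throws -> VNCoreMLRequest {
    guard let modelURL = Bundle.main.url(forResource: "SceneClassifier",
                                         withExtension: "mlmodelc") else {
        throw NSError(domain: "SceneClassifier", code: 1)
    }
    let coreMLModel = try MLModel(contentsOf: modelURL)
    let visionModel = try VNCoreMLModel(for: coreMLModel)

    // Ask Vision to run the converted network and report its top labels.
    return VNCoreMLRequest(model: visionModel) { request, _ in
        guard let results = request.results as? [VNClassificationObservation] else { return }
        for observation in results.prefix(3) {
            print("\(observation.identifier): \(observation.confidence)")
        }
    }
}

// Run the request against a still image.
func classify(image: CGImage) throws {
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([makeClassificationRequest()])
}
```

The point of an exchange format is that the training side of this workflow can live in whichever framework a data science team prefers, while the app only ever sees the converted artifact.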

We don’t know whether Apple will take part in the newly announced Khronos initiative, but it seems reasonable to imagine its experts will at least take a look. History shows a fondness for proprietary solutions, though: the company chose its own Metal graphics API over OpenGL.

What happens next?

There’s a relatively famous Doctor Who moment in which the Tenth Doctor (played by David Tennant) describes time as “wibbly-wobbly, timey-wimey stuff,” and that’s about as precise as anyone can be when talking about the potential impact of machine intelligence across billions of devices, which is what’s being discussed here.

In Apple’s case, you can imagine these technologies working hand-in-glove with the Photos app to deliver more accurate information about the people, places, and moods captured in photographs, and that kind of information will likely also be made available to third-party developers.

After all, if your iPhone can recognize items in a photograph you take, surely it can detect the same information when you simply point the camera at whatever is around you, so instant sign translation engines could one day become an iOS “feature.”
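As a rough sketch of how that camera-pointing scenario could work with tools developers already have, the snippet below uses Apple's Vision framework to pull text out of a camera frame; a translation layer, not shown, would then take over. This illustrates feasibility only, it is not a description of anything Apple has announced, and the function name is made up for the example.

```swift
import Vision

// Rough sketch: recognize the text on a sign captured in a camera frame.
// A translation step (not shown) would then handle the recognized strings.
func recognizeSignText(in frame: CGImage, completion: @escaping ([String]) -> Void) {
    let request = VNRecognizeTextRequest { request, _ in
        let observations = request.results as? [VNRecognizedTextObservation] ?? []
        // Keep the best candidate string for each piece of recognized text.
        completion(observations.compactMap { $0.topCandidates(1).first?.string })
    }
    request.recognitionLevel = .accurate

    let handler = VNImageRequestHandler(cgImage: frame, options: [:])
    try? handler.perform([request])
}
```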

The inevitable next step will surely be to enable the machine learning systems inside smartphones to read your emotions in order to make useful recommendations, and to respond to requests made through gestures such as eye contact and head movement, alongside more conventional touch- and speech-based navigation.

Such technologies certainly match Apple’s steady progress toward creating alternative user interfaces for different families of computing products, and they have clear implications for its industry-leading accessibility technologies.

One of the bigger limitations Apple has faced in developing effective AI has been its lack of access to as rich an information stack as some competitors hold.

The idea that NNEF may enable Apple to harness additional data sets to supplement its own, all while maintaining customer privacy, could turn out to be significant on the company’s road ahead. That's assuming it decides to change its habits and work a little more closely with others on this part of its mission to make Siri smarter.

Google+? If you use social media and happen to be a Google+ user, why not join AppleHolic's Kool Aid Corner community and get involved with the conversation as we pursue the spirit of the New Model Apple?

Got a story? Please drop me a line via Twitter and let me know. I'd like it if you chose to follow me there so I can let you know about new articles I publish and reports I find.
