Apple's Machine Learning is Catching Up to Google's
At a recent invite-only meeting, Apple detailed the state of its machine learning research. Although Apple isn't applying machine learning to a search engine the way Google is, it has been building machine learning algorithms for the Siri voice assistant, and it has now shed some light on future projects.
In many ways, Apple's machine learning research points in the same direction as Google's and other competitors'. Apple is developing machine learning for image identification, voice recognition, and user behavior prediction. If you've used Siri, you'll recognize all three.
In the future, though, Apple wishes to develop "volumetric detection of LiDAR," which is a fancy way of saying "measuring and identifying objects with lasers." LiDAR measures a target's distance from the laser's point of origin by timing reflected pulses, and LiDAR arrays are a key component of autonomous vehicle guidance technology.
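The ranging idea behind LiDAR is simple time-of-flight arithmetic: a laser pulse travels to the target and back at the speed of light, so halving the round trip gives the distance. A minimal sketch (the function name and the sample timing value are illustrative, not from any Apple system):

```python
# Time-of-flight ranging, the principle behind LiDAR distance measurement.
# distance = (speed of light * round-trip time) / 2

C = 299_792_458.0  # speed of light in meters per second

def lidar_distance(round_trip_seconds: float) -> float:
    """Distance to a target, given the laser pulse's round-trip time."""
    return C * round_trip_seconds / 2.0

# A pulse that returns after roughly 66.7 nanoseconds
# indicates a target about 10 meters away.
print(round(lidar_distance(66.7e-9), 2))
```

An array of such sensors sweeps many pulses across a scene, turning per-pulse distances into a 3D point cloud — the "volumetric" part of the picture.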
Apple now claims that its GPU-based image recognition algorithms can process twice as many photos per second as Google's system, topping out at 3,000 images per second. Apple also stated that its computing array is more efficient, using only one-third as many GPUs as Google's.
Apple is also working on teacher-student neural networks, in which a large trained network transfers what it has learned to a smaller network that can run on a device, without compromising the decision-making ability of either. Apple is gearing up to accelerate all of this research, too: it has announced that, for the first time, Apple scientists and researchers will be allowed to publish and collaborate with the wider research community.
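One common way to do this kind of transfer is "knowledge distillation": the small student network is trained to match the teacher's softened output probabilities rather than hard labels. Apple hasn't described its exact method, so the sketch below is only a generic illustration of the idea, with made-up logit values:

```python
import math

def softmax(logits, temperature=1.0):
    """Softmax with a temperature; higher temperatures soften the distribution."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Cross-entropy between the teacher's soft targets and the student's output."""
    teacher_probs = softmax(teacher_logits, temperature)
    student_probs = softmax(student_logits, temperature)
    return -sum(t * math.log(s) for t, s in zip(teacher_probs, student_probs))

# The loss shrinks as the student's outputs approach the teacher's,
# so minimizing it pushes the small network to mimic the large one.
teacher = [4.0, 1.0, 0.2]
far_student = distillation_loss([0.0, 0.0, 0.0], teacher)
close_student = distillation_loss([3.9, 1.1, 0.3], teacher)
print(far_student > close_student)  # the better-matched student has lower loss
```

Because the student only needs to reproduce the teacher's outputs, it can be far smaller than the network that was originally trained, which is what makes it practical to run on a phone.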
In the next few years, as machine learning becomes an even bigger part of our lives, you can bet that Apple will be at the forefront. And with rumors about self-driving cars circulating, someday one of your iDevices might be a lot bigger than the one in your pocket.