Apple’s Latest Deal Shows How AI Is Moving Right Onto Devices


The iPhone maker’s purchase of startup Xnor.ai is the latest sign of a trend toward computing on the “edge,” rather than in the cloud.

Apple dropped $200 million this week on a company that makes lightweight artificial intelligence. It’s all about keeping an edge in AI … by adding more AI to the edge.

The acquisition of Xnor.ai, a Seattle startup working on low-power machine learning software and hardware, points to a key AI battleground for Apple and other tech heavyweights—packing ever-more intelligence into smartphones, smartwatches, and other smart devices that do computing on the “edge” rather than in the cloud. And doing it without killing your battery.

“Machine learning is going to happen at the edge in a big way,” predicts Subhasish Mitra, a professor at Stanford who is working on low-power chips for AI. “The big question is how do you do it efficiently? That requires new hardware technology and design. And, at the same time, new algorithms as well.”

The most powerful AI algorithms tend to be large and very power hungry when run on general-purpose chips. But a growing number of startups, Xnor.ai among them, have begun devising ways to pare down AI models and run them on extremely energy-efficient, highly specialized hardware.

Last March, Xnor.ai demoed a computer chip capable of running image recognition using only the power from a solar cell. A research paper authored by Xnor.ai’s founders and posted online in 2016 describes a more efficient form of convolutional neural network, a machine learning tool that is particularly well suited to visual tasks. The researchers shrank the network by approximating its weights, and the interplay among its layers, with simple binary values.
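To give a flavor of this style of approximation, here is a minimal sketch of the general idea (not Xnor.ai’s actual implementation): a set of real-valued weights is replaced by a single shared scaling factor plus one sign bit per weight, so each weight costs 1 bit instead of 32.

```python
def binarize_weights(weights):
    """Approximate real-valued weights as alpha * sign(w): one shared
    scaling factor (the mean absolute value) plus 1 bit per weight."""
    alpha = sum(abs(w) for w in weights) / len(weights)
    binary = [1 if w >= 0 else -1 for w in weights]
    return alpha, binary

weights = [0.4, -0.2, 0.1, -0.3]
alpha, binary = binarize_weights(weights)
# alpha -> 0.25; binary -> [1, -1, 1, -1]
# Dot products against binarized weights reduce to additions and
# subtractions, which is why such networks suit tiny low-power chips.
```

With weights constrained to +1 or -1, the multiplications inside a convolution collapse into sign flips and sums, drastically cutting the energy each inference consumes.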

Apple already makes chips that perform certain AI tasks, like recognizing the wake phrase “Hey, Siri.” But its hardware will need to become more capable without draining your battery. Apple did not respond to a request for comment.

Now, AI on the edge means running pretrained models that do a specific task, such as recognizing a face in a video or a voice in a call. But Mitra says it may not be long before we see edge devices that learn too. This could let a smartphone or another device improve its performance over time, without sending anything to the cloud. “That would be truly exciting,” he says. “Today most devices are essentially dumb.”

Applying AI to video more efficiently, as Xnor.ai has demoed, will also be key for Apple, Google, and anyone working in mobile computing. Cameras and related software are a key selling point for iPhones and other smartphones, and video-heavy apps like TikTok are popular among younger smartphone customers. Edge computing has the added benefit of keeping personal data on your device, instead of sending it to the cloud.

Dave Schubmehl, an analyst with the research firm IDC, says machine learning could also be used in Apple gadgets that currently don’t include AI. “I can see them running AI on the Apple Watch and in AirPods, to clean up sound for example,” he says. “There’s tremendous opportunity in existing products.”

Running sophisticated AI on video, like an algorithm that can tell what’s happening in a scene or add complex special effects, is usually done in the cloud because it requires a significant amount of computer power. “For example, adding synthetic depth of field to your photos might require running a deep network to estimate the depth of each pixel,” says James Hays, a professor at Georgia Tech who specializes in computer vision.
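Hays’s depth-of-field example can be sketched in miniature. In a real pipeline the per-pixel depths would come from a deep network, as he describes; this toy 1-D version (names and parameters are illustrative assumptions) simply blurs each pixel more the farther its depth sits from the focal plane.

```python
def synthetic_depth_of_field(pixels, depths, focal_depth, max_radius=1):
    """Toy 1-D 'portrait mode': average each pixel over a window whose
    size grows with its distance from the focal plane."""
    out = []
    n = len(pixels)
    for i, (p, d) in enumerate(zip(pixels, depths)):
        # Pixels at the focal depth get radius 0 (stay sharp).
        radius = round(min(abs(d - focal_depth), 1.0) * max_radius)
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        window = pixels[lo:hi]
        out.append(sum(window) / len(window))
    return out

# Foreground (depth 1.0) stays sharp; background (depth 0.0) blurs.
result = synthetic_depth_of_field(
    pixels=[10, 20, 30, 40, 50],
    depths=[0.0, 0.0, 1.0, 1.0, 1.0],
    focal_depth=1.0,
)
```

Even this trivial version touches every pixel, hinting at why the full deep-network variant has traditionally been pushed to the cloud.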

Besides making your iPhone’s camera smarter, Xnor.ai’s technology could help Apple in other areas. Giving machines more ability to perceive and understand the messy real world will be key to robotics, autonomous driving, and natural language understanding.

“If the goal of AI is to achieve human-level intelligence, reasoning about images is vital to that,” Hays says, noting that roughly a third of the human brain is dedicated to visual processing. “Evolution seems to consider vision vital to intelligence,” he says.

Apple seems to think that a more evolved form of computer vision is pretty valuable too.

This article first appeared in WIRED.


About Author


Will Knight is a senior writer for WIRED, covering artificial intelligence. He was previously a senior editor at MIT Technology Review, where he wrote about fundamental advances in AI and China’s AI boom. Before that, he was an editor and writer at New Scientist. He studied anthropology and journalism in the UK before turning his attention to machines.
