The Difference Between Machine Learning, Deep Learning and Science Fiction


With the world of artificial intelligence (AI) developing so rapidly, it’s not surprising that many people are unclear about the difference between the various kinds of data analysis and how they can drive business. The distinction between machine learning (ML) and deep learning (DL), for example, can be a bit confusing to the uninitiated, but it makes all the difference for companies trying to harness the reams of data they collect, notes this opinion piece by Adam Singolda, CEO and founder of Taboola.

You’ve heard this before:

Q: How do you do what you do?

A: “AI.”

Hardly a day goes by without news of another company’s latest foray into artificial intelligence. While the value of AI may be self-evident in consumer technology products like Cortana or Spotify, can it really benefit everything from toothbrushes to burger joints to rap lyric generation? And is it so easy to do that any company under the sun can put AI in its tagline? How can we separate “real” deep learning (AI) from marketing science fiction (SF)?

Perhaps a good first step is to define some key terms. There are two popular methods that are often confused or used interchangeably: traditional machine learning and deep learning, also known as “AI.” ML calls on engineers to pre-define what patterns they’re looking for in the data (e.g., “people who did this also did that”). DL, by contrast, makes use of advanced “neural networks” that proactively discover new patterns and get better over time. But more importantly, it requires a whole different engineering mindset (“the magic”). More on that magic below.
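The “people who did this also did that” pattern can be made concrete. Below is a minimal sketch, with made-up basket data, of the kind of rule a traditional ML engineer specifies up front: the engineer has already decided that co-purchase counts are the signal, and the code does nothing more than tally them.

```python
from collections import Counter

# Hypothetical purchase histories: each inner list is one shopper's basket.
baskets = [
    ["toothbrush", "toothpaste", "floss"],
    ["toothbrush", "toothpaste"],
    ["toothpaste", "mouthwash"],
    ["toothbrush", "toothpaste", "mouthwash"],
]

def also_bought(item, baskets):
    """Pre-defined pattern: 'people who bought X also bought Y'.

    The engineer chose the pattern (co-occurrence counting) ahead of
    time; the algorithm only fills in the numbers.
    """
    counts = Counter()
    for basket in baskets:
        if item in basket:
            counts.update(other for other in basket if other != item)
    return counts.most_common(1)[0][0] if counts else None

# "toothpaste" appears in all three baskets that contain "toothbrush",
# so it is the recommendation for a toothbrush buyer.
```

The point of the sketch is what it *cannot* do: if the real signal were something the engineer never encoded, say time of day or basket size, this kind of system would never find it.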

Traditional ML Is Not AI

DL is AI. Most companies don’t use DL/AI.

Neural networks have powered some truly life-changing breakthroughs in recent years. Translation services empower individuals to toggle languages with the click of a mouse, heralding a world where language barriers are a thing of the past. And why stop at written language? One day soon, DL pattern recognition may even help us infer what babies, or even our pets, are trying to express but unable to articulate. In medicine, biometric recognition software is already forging more seamless connections between people and their devices, helping doctors better diagnose diseases before they manifest or metastasize. DL-based computer vision will soon make Elon Musk’s dream of autonomous cars a reality (though I see it a bit differently).

Given this revolutionary potential, there is good reason why everyone and their cousin appears to be adding AI to their pitch or business plan, and we would all do the same. The prize is not small — McKinsey predicts that by 2030, AI will create north of $13 trillion in market value. Looking at companies such as Netflix and IBM, or products like Cortana and Alexa, tens of billions have already been generated from an AI-powered personalization economy enabling people to better curate their day-to-day lives.

“While traditional ML techniques are now common across industries and even taught in high school engineering classes, true DL methods are in fact exceedingly rare to find.”

But as with any gold rush business opportunity, we should all be wary of efforts that are more hype (or science fiction) than substance. While traditional ML techniques are now common across industries and even taught in high school engineering classes, true DL methods are in fact exceedingly rare to find.

You Know It When You See It

As Gil Chamiel, Taboola’s head of DL/AI, tells me, three things are required to perform true DL, and they can help general observers, investors or partners discern between AI and science fiction. The first two may sound relatively straightforward but aren’t always easy to obtain. The third is the “black magic,” where the magic happens:

  • access to data and activity;
  • computational power; and
  • mindset.

Let’s review those. First, DL only works if there is a certain threshold of environmental activity. In order for neural networks to discover new patterns, they need a large amount of data that can be processed and sorted through trial and error. These inputs vary across companies and industries: For example, Walmart might analyze data from the 100M+ people entering its stores or visiting its site every month, whereas Tesla might focus on feedback generated by thousands of cars as they move around.

Second, assuming you have a meaningful amount of data, you need computational power. While this used to be a capital-intensive requirement for companies, fortunately there is now a range of more cost-effective options on the market. Engineering teams can rent computing capacity from cloud hosting services like Azure, rather than having to invest in building an entirely new data center.

Third, and most importantly, you need engineers who embrace a new, agile way of thinking about solving a problem: a new mindset. In many ways, you’re looking for engineers with more patience and perseverance, who are willing to give up some of the control to the algorithm. ML methods require teams to engineer solutions ahead of time, pre-determining key factors (such as obstacle shapes or temperature, if designing an autonomous car); in the world of DL, engineers must design algorithms that can uncover new factors you never thought to look for ahead of time, such as the weight of the car, the wind during a storm, or 10 other factors they didn’t think of beforehand.

Or consider face recognition as a challenge: in the world of ML, engineers have to invest in tasks like edge detection and object segmentation in order to heuristically identify the location of potential faces in an image and pass that information to the learning algorithm. In a DL/AI world, engineers would allow an algorithm to “somehow” look at many streams of faces and discover attributes that might mean something, potentially eye color, but likely a lot more.
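The “give up control to the algorithm” mindset can be sketched in a few lines. The toy single-neuron learner below (pure Python, nothing remotely like a production face recognizer) is handed only labeled examples and discovers its own decision weights, instead of being given a hand-coded rule; the task, a logical AND, and all the numbers are illustrative assumptions.

```python
# Toy illustration of the learned-weights mindset: rather than writing
# an if/else rule for AND, we let a single neuron find weights from data
# via the classic perceptron learning rule (integer arithmetic for
# determinism). A real DL system stacks millions of such units.

samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # logical AND

def predict(w, b, x):
    # Fire (output 1) when the weighted sum crosses the threshold.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

def train(samples, epochs=10, lr=1):
    w, b = [0, 0], 0
    for _ in range(epochs):
        for x, target in samples:
            err = target - predict(w, b, x)  # perceptron learning rule
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

w, b = train(samples)  # the weights are discovered, not specified
```

No engineer told the model what AND means; the rule emerged from examples, which is the shift in control the article describes.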

“In the world of DL, engineers must design algorithms that can uncover new factors you never thought to look for ahead of time.”

This shift in mindset is similar to what happened in the 1980s with the rise of object-oriented programming (OOP), a hierarchical approach (seen in C++ and Java) that was a vast departure from the more procedural methods that came before it (like C or Pascal). If you tried to solve a problem on the magnitude of translation, or an autonomous car, using ML, you’d spend a long time trying to think of all the factors in the data that matter, you’d build algorithms to look for those, and you would most likely fail. The rise of DL and advanced pattern discovery is revolutionary in its ability to accelerate technological solutions we never thought solvable, a major step forward in how engineers can approach a problem.

The breakthroughs of AI have long felt elusive to common observers. It’s been over 20 years since IBM’s Deep Blue computer dethroned the world chess champion and raised the specter of intelligent AI companions. But recent years have finally ushered in the revolution we’ve been waiting for. Predictive technology is already saving us time, money, and in some cases, actual lives. For example, hospitals are using predictive technology to reduce re-hospitalization by identifying high-risk cases and then allocating extra services such as post-discharge visits in the home. While it’s still early days, and in fact only a very few companies in the world are truly advancing AI, there is undoubtedly a movement afoot.

A revolution is coming — and the future is coming sooner than we think — using DL/AI, not ML, and not science fiction.

This article first appeared in
