What Is Machine Learning and Why Does It Matter?

It’s now a part of our everyday lives — but how does it work?


Welcome to “What Is,” a new column dedicated to giving simple answers to complex questions about the products around you. Got a question? Nothing’s off limits. Submit yours to [email protected] and we’ll do our best to get it answered.

Why Machine Learning Matters

Every day, buzzwords like artificial intelligence and machine learning surface in the news cycle, typically in connection with smartphones, smart speakers, drones, and so on. It’s easy to take for granted that these technologies make our lives easier and more convenient, undergirding everything from facial recognition to autocomplete on Google. Even Spotify uses machine learning for music recommendations. Yet more often than not it runs behind the scenes, without our knowledge or understanding. After hearing about these algorithms and systems for so long, it’s worth taking a step back to ask what this stuff is, why it matters, and why we should care in the first place.

Examples of Machine Learning in Everyday Things
• Photo recognition: Apple Photos, Google Photos
• Virtual assistants: Siri, Alexa
• Drone software: DJI Mavic
• Music: Spotify

Machine learning (ML) is an incredibly diverse field rooted in artificial intelligence (AI). Viewing it through the lens of consumer products makes it a little easier to understand. In an interview with Wired UK, Nidhi Chappell, head of machine learning at Intel, explained that “AI is basically the intelligence – how we make machines intelligent, while machine learning is the implementation of the compute methods that support it. The way I think of it is: AI is the science and machine learning is the algorithms that make the machines smarter.”

How Machine Learning Works

Oxford University breaks down machine learning with a simple analogy.

As outlined in the video above, let’s say you want to teach a computer to automatically pick out pictures of cats and dogs. To do this, you would assign a tag to each picture, indicating which are cats and which are dogs, then tell the computer to separate a set of pictures containing both animals. The computer’s algorithm uses those tags to learn which patterns in the pictures correspond to each animal; once the algorithm correctly sorts the training set, it is ready to take on new data and can start identifying pictures it has never seen before. This is called supervised machine learning, and it’s the basis for features in everyday apps like search in Google Photos or Apple Photos, where ML can differentiate locations, people, and time of day without any written information.
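To make the idea concrete, here is a toy sketch of that supervised workflow in Python. The features (whisker length, weight) and all the numbers are invented for illustration, and the “learning” is deliberately simple: average the feature vectors for each tag, then label new examples by the nearest average.

```python
def train(examples):
    """Learn one average feature vector (a centroid) per tag."""
    totals = {}
    for features, label in examples:
        bucket = totals.setdefault(label, [[0.0] * len(features), 0])
        bucket[0] = [t + f for t, f in zip(bucket[0], features)]
        bucket[1] += 1
    return {label: [t / n for t in total] for label, (total, n) in totals.items()}

def predict(centroids, features):
    """Tag a new example with the label whose centroid is closest."""
    def squared_distance(centroid):
        return sum((a - b) ** 2 for a, b in zip(centroid, features))
    return min(centroids, key=lambda label: squared_distance(centroids[label]))

# Hypothetical tagged training set: [whisker_length_cm, weight_kg] per picture.
training = [
    ([6.0, 4.0], "cat"), ([7.0, 5.0], "cat"),
    ([2.0, 20.0], "dog"), ([3.0, 25.0], "dog"),
]
model = train(training)
print(predict(model, [6.5, 4.5]))  # a new, unseen example -> "cat"
```

Real systems learn far richer patterns from raw pixels, but the shape is the same: tagged examples go in, and a rule for labeling unseen data comes out.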


The machine learning “decision tree”. Animation: R2D3

Beyond supervised ML, the field can be broken down further into unsupervised machine learning and reinforcement learning. In unsupervised ML, the data has no tags, but the algorithm still splits the pictures into two piles based on their characteristics, often using a decision tree. To improve the algorithm over time, reinforcement learning adjusts it if and when it makes mistakes; the concept riffs on the findings of behaviorist psychology. Algorithms that can correct themselves and continue to learn are the lifeblood of traffic-sensitive Google Maps routes, increasingly individualized Netflix suggestions and natural language processing for virtual assistant commands.
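A decision tree, as in the animation above, is just a cascade of yes/no questions about the data. A minimal hand-written sketch, with invented features and thresholds (a learned tree would pick its own splits from the data):

```python
def classify(animal):
    """Walk the tree: each branch asks one question about a feature."""
    if animal["weight_kg"] > 10:          # first split: heavy animals
        return "dog"
    if animal["whisker_length_cm"] > 4:   # second split on the lighter ones
        return "cat"
    return "dog"                          # small and short-whiskered

print(classify({"weight_kg": 4, "whisker_length_cm": 6}))   # -> "cat"
print(classify({"weight_kg": 30, "whisker_length_cm": 2}))  # -> "dog"
```

Reinforcement learning then adds a feedback loop on top of ideas like this: when a prediction turns out wrong, the system nudges its rules so the same mistake is less likely next time.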

Further Reading

Of course, this is just scratching the surface. To dive deeper, check out “The Great A.I. Awakening” by Gideon Lewis-Kraus over at The New York Times, which offers a deep dive on AI based on its implementation in Google Translate. Over on Wired, “How Google Is Remaking Itself as a ‘Machine Learning First’ Company” by Steven Levy provides perspective on the future role artificial intelligence will play in products and the internet. And when all else fails, go down the Wikipedia rabbit hole.
