A simple guide to the expansive world of artificial intelligence

AI is everywhere, but it can be hard to define.

When you challenge a computer to play a chess game, interact with a smart assistant, type a question into ChatGPT, or create artwork on DALL-E, you’re interacting with a program that computer scientists would classify as artificial intelligence. 

But defining artificial intelligence can get complicated, especially when other terms like “robotics” and “machine learning” get thrown into the mix. To help you understand how these different fields and terms are related to one another, we’ve put together a quick guide. 

What is a good artificial intelligence definition?

Artificial intelligence is a field of study, much like chemistry or physics, that formally kicked off at a summer workshop at Dartmouth College in 1956. 

“Artificial intelligence is about the science and engineering of making machines with human-like characteristics in how they see the world, how they move, how they play games, even how they learn,” says Daniela Rus, director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT. “Artificial intelligence is made up of many subcomponents, and there are all kinds of algorithms that solve various problems in artificial intelligence.” 

People tend to conflate artificial intelligence with robotics and machine learning, but these are separate, related fields, each with a distinct focus. Machine learning is usually classified under the umbrella of artificial intelligence, but as Rus explains below, that framing isn’t the whole story.

“Artificial intelligence is about decision-making for machines. Robotics is about putting computing in motion. And machine learning is about using data to make predictions about what might happen in the future or what the system ought to do,” Rus adds. “AI is a broad field. It’s about making decisions. You can make decisions using learning, or you can make decisions using models.”

AI generators, like ChatGPT and DALL-E, are machine learning programs, but the field of AI covers a lot more than machine learning, and machine learning also draws on fields outside AI. “Machine learning is a subfield of AI. It kind of straddles statistics and the broader field of artificial intelligence,” says Rus.

Complicating the picture is that non-machine learning algorithms can also solve problems in AI. For example, a computer can play tic-tac-toe with a non-machine learning algorithm called minimax. “It’s a straight algorithm. You build a decision tree and you start navigating. There is no learning, there is no data in this algorithm,” says Rus. But it’s still a form of AI.
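
To make that concrete, here is a minimal sketch of what minimax looks like for tic-tac-toe in Python. The board encoding and helper names are our own illustration, not code from any of the researchers quoted here; the point is that the program searches a decision tree rather than learning from data:

```python
# A toy minimax player for tic-tac-toe. The board is a list of 9 cells,
# each "X", "O", or None. There is no learning and no data: the program
# builds and navigates the game's decision tree, exactly as Rus describes.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) from `player`'s view: +1 win, -1 loss, 0 draw."""
    w = winner(board)
    if w is not None:
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0, None  # board full: draw
    opponent = "O" if player == "X" else "X"
    best_score, best_move = -2, None
    for move in moves:
        board[move] = player
        score, _ = minimax(board, opponent)  # opponent's best reply...
        board[move] = None
        if -score > best_score:              # ...is our worst case, so negate
            best_score, best_move = -score, move
    return best_score, best_move

board = ["X", "X", None,
         "O", "O", None,
         None, None, None]
print(minimax(board, "X"))  # (1, 2): playing cell 2 completes the top row
```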

Back in 1997, IBM’s Deep Blue program, which defeated world chess champion Garry Kasparov, was AI but not machine learning, since it didn’t learn from gameplay data. “The reasoning of the program was handcrafted,” says Rus. “Whereas AlphaGo [a more recent program that plays the board game Go] used machine learning to craft its rules and its decisions for how to move.”

When robots have to move around in the world, they have to make sense of their surroundings. This is where AI comes in: They have to see where obstacles are, and figure out a plan to go from point A to point B. 

“There are ways in which robots use models like Newtonian mechanics, for instance, to figure how to move, to figure how to not fall, to figure out how to grab an object without dropping it,” says Rus. “If the robot has to plan a path from point A to point B, the robot can look at the geometry of the space and then it can figure out how to draw a line that is not going to bump into any obstacles and follow that line.” That’s an example of a computer making decisions without machine learning, because it is not data-driven.
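
Here is a toy version of that kind of geometric reasoning: a breadth-first search that plans an obstacle-free route across a grid. The grid encoding and function name are a hypothetical sketch; the takeaway is that the plan comes from the geometry of the space, not from training data:

```python
# A toy non-learning path planner: breadth-first search over a grid.
# 0 = free cell, 1 = obstacle. The route comes purely from the geometry
# of the space; no training data is involved anywhere.
from collections import deque

def plan_path(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}  # remembers how we reached each cell
    while queue:
        cell = queue.popleft()
        if cell == goal:  # reconstruct the route by walking backwards
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and nxt not in came_from:
                came_from[nxt] = cell
                queue.append(nxt)
    return None  # no obstacle-free route exists

grid = [[0, 0, 0],
        [1, 1, 0],   # a wall the robot must route around
        [0, 0, 0]]
print(plan_path(grid, (0, 0), (2, 0)))
```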

[Related: How a new AI mastered the tricky game of Stratego]

Or take, for example, teaching a robot to drive a car. In a machine learning-based solution, the robot could watch how humans steer around a bend, and learn to turn the wheel a little or a lot based on how sharp the bend is. In the non-machine learning solution, the robot would instead look at the geometry of the road, consider the dynamics of the car, and calculate the angle to apply to the wheel to keep the car on the road without veering off. Both are examples of artificial intelligence at work, though.

“In the model-based case, you look at the geometry, you think about the physics, and you compute what the actuation ought to be. In the data-driven [machine learning] case, you look at what the human did, and you remember that, and in the future when you encounter similar situations, you can do what the human did,” Rus says. “But both of these are solutions that get robots to make decisions and move in the world.” 
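
A toy contrast in code may help. Both functions below pick a steering angle for a bend; the first uses only geometry (the standard bicycle-model formula), the second only recorded human examples. The numbers and the nearest-example lookup are invented for illustration, not taken from any real driving system:

```python
# A toy contrast between the two approaches. Both functions pick a steering
# angle for a bend of a given radius; every number here is invented.
import math

def steer_model_based(curve_radius_m, wheelbase_m=2.7):
    """Geometry and physics only: the classic bicycle-model formula,
    angle = atan(wheelbase / turn radius). No data required."""
    return math.atan(wheelbase_m / curve_radius_m)

# Hypothetical logged demonstrations: (bend radius, angle a human used).
human_demos = [(20.0, 0.13), (50.0, 0.055), (100.0, 0.027)]

def steer_data_driven(curve_radius_m):
    """Data only: imitate the human example recorded on the most similar
    bend. A real system would fit a model rather than do a lookup."""
    _, angle = min(human_demos, key=lambda d: abs(d[0] - curve_radius_m))
    return angle

print(steer_model_based(50.0))  # computed from the geometry of the turn
print(steer_data_driven(50.0))  # recalled from what a human once did
```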

Can you tell me more about how machine learning works?

“When you do data-driven machine learning that people equate with AI, the situation is very different,” Rus says. “Machine learning uses data in order to figure out the weights and the parameters of a huge network, called the artificial neural network.” 

Machine learning, as its name implies, is the idea of software learning from data, as opposed to software just following rules written by humans. 
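
As a minimal sketch of what “using data to figure out the weights” means, here is gradient descent fitting a single weight to toy data. The numbers are made up; a real neural network repeats this same loop across millions of weights at once:

```python
# One "weight" fit by gradient descent to toy data. A real neural network
# repeats this same loop across millions of weights at once.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs x and targets y (y = 2x)
w = 0.0  # the single parameter the "network" must learn

for step in range(100):
    # Gradient of the mean squared error (w*x - y)^2 with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.05 * grad  # nudge the weight downhill

print(round(w, 3))  # ~2.0: discovered from examples, never programmed in
```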

“Most machine learning algorithms are at some level just calculating a bunch of statistics,” says Rayid Ghani, professor in the machine learning department at Carnegie Mellon University. Before machine learning, if you wanted a computer to detect an object, you would have to describe it in tedious detail. For example, if you wanted computer vision to identify a stop sign, you’d have to write code that describes the color, shape, and specific features on the face of the sign. 

“What people figured [out] is that it would be [exhausting] for people to describe it. The main change that happened in machine learning is [that] what people were better at was giving examples of things,” Ghani says. “The code people were writing was not to describe a stop sign, it was to distinguish things in category A versus category B [a stop sign versus a yield sign, for example]. And then the computer figured out the distinctions, which was more efficient.”
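
In code, the shift Ghani describes looks roughly like this: instead of hand-writing rules for what a stop sign is, you hand a model labeled examples and let it find the distinctions. The features below (redness, corner count) are made up for illustration, and the sketch assumes the scikit-learn library is installed:

```python
# "Give examples, let the computer find the distinctions." Instead of
# hand-coding what a stop sign looks like, we supply labeled examples.
from sklearn.tree import DecisionTreeClassifier

# Each row: [fraction of red pixels, number of corners] -- invented features.
X = [[0.90, 8], [0.85, 8], [0.88, 8],   # stop signs: red octagons
     [0.30, 3], [0.25, 3], [0.35, 3]]   # yield signs: mostly white triangles
y = ["stop", "stop", "stop", "yield", "yield", "yield"]

model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[0.87, 8]]))  # ['stop'] -- the model found the rule itself
```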

Should we worry about artificial intelligence surpassing human intelligence?

The short answer, right now: Nope. 

Today, AI is very narrow in its abilities and is able to do specific things. “AI designed to play very specific games or recognize certain things can only do that. It can’t do something else really well,” says Ghani. “So you have to develop a new system for every task.” 

Rus says that AI research produces tools, but not ones you can unleash autonomously in the world. ChatGPT, she notes, is impressive, but it’s not always right. “They are the kind of tools that bring insights and suggestions and ideas for people to act on,” she says. “And these insights, suggestions and ideas are not the ultimate answer.” 

Plus, Ghani says that while these systems “seem to be intelligent,” all they’re really doing is looking at patterns. “They’ve just been coded to put things together that have happened together in the past, and put them together in new ways.” A computer will not learn on its own that falling over is bad; it needs feedback from a human programmer telling it that it’s bad. 

[Related: Why artificial intelligence is everywhere now]

Machine learning algorithms can also be lazy. For example, imagine giving a system images of men, women, and non-binary individuals, and telling it to distinguish among the three. It will find patterns that differ between the groups, but not necessarily ones that are meaningful or important. If all the men are wearing one color of clothing, or all the photos of women were taken against the same color backdrop, those colors are the characteristics the system will pick up on. 

“It’s not intelligent, it’s basically saying ‘you asked me to distinguish between three sets. The laziest way to distinguish was this characteristic,’” Ghani says. Additionally, some systems are “designed to give the majority answer from the internet for a lot of these things. That’s not what we want in the world, to take the majority answer that’s usually racist and sexist.” 
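
Here is a synthetic demonstration of the laziness Ghani describes, again assuming scikit-learn is installed. Because backdrop color happens to correlate perfectly with the label in the toy training set, the classifier latches onto the backdrop rather than anything meaningful:

```python
# A classifier taking the lazy shortcut. Features: [backdrop blueness,
# clothing redness]. By accident, every category-A training photo had a
# blue backdrop, so the backdrop alone separates the classes perfectly.
from sklearn.tree import DecisionTreeClassifier

X_train = [[0.90, 0.2], [0.80, 0.7], [0.95, 0.4],   # category A: blue backdrops
           [0.10, 0.3], [0.20, 0.8], [0.15, 0.5]]   # category B: other backdrops
y_train = ["A", "A", "A", "B", "B", "B"]

model = DecisionTreeClassifier().fit(X_train, y_train)

# A category-B subject photographed against a blue backdrop is mislabeled:
print(model.predict([[0.90, 0.5]]))  # ['A'], purely because of the backdrop
```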

In his view, a lot of work still needs to go into customizing algorithms for specific use cases, making it understandable to humans how a model reaches its outputs from the inputs it’s given, and ensuring that the input data is fair and accurate. 

What does the next decade hold for AI?

Computer algorithms are good at taking large amounts of information and synthesizing it, whereas people are good at looking through a few things at a time. Because of this, computers tend to be, understandably, much better at going through a billion documents and figuring out facts or patterns that recur. But humans are able to go into one document, pick up small details, and reason through them. 

“I think one of the things that is overhyped is the autonomy of AI operating by itself in uncontrolled environments where humans are also found,” Ghani says. In very controlled settings—like figuring out the price to charge for food products within a certain range based on an end goal of optimizing profits—AI works really well. However, cooperation with humans remains important, and in the next decades, he predicts that the field will see a lot of advances in systems that are designed to be collaborative. 

Drug discovery research is a good example, he says. Humans are still doing much of the work with lab testing and the computer is simply using machine learning to help them prioritize which experiments to do and which interactions to look at.

“[AI algorithms] can do really extraordinary things much faster than we can. But the way to think about it is that they’re tools that are supposed to augment and enhance how we operate,” says Rus. “And like any other tools, these solutions are not inherently good or bad. They are what we choose to do with them.”

 
