
A Brief History of AI

It's difficult to pick up a phone or laptop today without seeing some type of AI feature, but that's only because of work going back nearly one hundred years.
Image credit: Lifehacker/Rene Ramos

This post is part of Lifehacker’s “Living With AI” series: We investigate the current state of AI, walk through how it can be useful (and how it can’t), and evaluate where this revolutionary tech is heading next. Read more here.

You wouldn’t be blamed for thinking AI really kicked off in the past couple of years, but the technology has been a long time in the making, spanning most of the 20th century. The AI features that now seem to be everywhere on our phones and laptops are the product of work stretching back nearly one hundred years.

AI’s conceptual beginnings

Of course, people have been wondering whether we could make machines that think for as long as we’ve had machines. The modern concept came from Alan Turing, the renowned mathematician best known for deciphering the “unbreakable” codes produced by Nazi Germany’s Enigma machine during World War II. As the New York Times highlights, Turing essentially predicted what the computer could—and would—become, imagining it as “one machine for all possible tasks.”

But it was what Turing wrote in his 1950 paper “Computing Machinery and Intelligence” that changed things forever: The computer scientist posed the question, “Can machines think?” but also argued this framing was the wrong approach to take. Instead, he proposed a thought experiment called “The Imitation Game.” Imagine you have three people: a man (A), a woman (B), and an interrogator, separated into three rooms. The interrogator’s goal is to determine which player is the man and which is the woman using only text-based communication. If both players answer truthfully, it’s not such a difficult task. But if one or both decide to lie, it becomes much more challenging.

But the point of the Imitation Game isn’t to test a human’s deduction ability. Rather, Turing asks you to imagine a machine taking the place of player A or B. Could the machine effectively trick the interrogator into thinking it was human?

Kick-starting the idea of neural networks

Turing was the most influential spark for the concept of AI, but it was Frank Rosenblatt who actually kick-started the technology in practice, even if he never saw it come to fruition. Rosenblatt created the “Perceptron,” a computer modeled after how neurons work in the brain, with the ability to teach itself new skills. The computer ran a single-layer neural network, and it worked like this: You have the machine make a prediction about something—say, whether a punch card is marked on the left or the right. If the computer is wrong, it adjusts to be more accurate. Over thousands or even millions of attempts, it “learns” the right answers instead of having to guess at them.

That design is based on neurons: You have an input, such as a piece of information you want the computer to recognize. The neuron takes in the data and, based on its previous knowledge, produces a corresponding output. If that output is wrong, you tell the computer and adjust the neuron’s “weights” to nudge the result closer to the desired output. Over time, you find the right weights, and the computer will have successfully “learned.”
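If you’re curious what that guess-and-adjust loop looks like in code, here’s a minimal Python sketch of a single artificial neuron trained with a Perceptron-style learning rule. The toy task and numbers (a made-up “is the mark on the left or the right?” dataset and a learning rate of 0.1) are invented for illustration; this isn’t Rosenblatt’s original program, just the idea behind it.

```python
# A minimal sketch of Perceptron-style learning: guess, check, adjust the weights.
# The dataset, learning rate, and epoch count below are invented for illustration.

def predict(weights, bias, inputs):
    # Weighted sum of the inputs, then a hard threshold: output +1 or -1.
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= 0 else -1

def train(examples, epochs=20, learning_rate=0.1):
    weights = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for inputs, label in examples:
            guess = predict(weights, bias, inputs)
            if guess != label:
                # Wrong guess: nudge the weights toward the correct answer.
                weights = [w + learning_rate * label * x for w, x in zip(weights, inputs)]
                bias += learning_rate * label
    return weights, bias

# Toy task: is the "mark" on the left (first feature negative, label -1)
# or on the right (first feature positive, label +1)?
examples = [([-2.0, 1.0], -1), ([-1.0, 0.5], -1), ([1.5, 1.0], 1), ([2.0, 0.2], 1)]
weights, bias = train(examples)
print(predict(weights, bias, [-1.5, 0.8]))  # expected: -1 (left)
print(predict(weights, bias, [1.0, 0.3]))   # expected: +1 (right)
```

The only “learning” here is the weight update inside the loop: each time the neuron is wrong, its weights shift slightly toward the correct answer, which is exactly the trial-and-error process described above.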

Unfortunately, despite some promising attempts, the Perceptron simply couldn’t live up to Rosenblatt’s theories and claims, and interest in both it and the practice of artificial intelligence dried up. As we know today, however, Rosenblatt wasn’t wrong: His machine was just too simple. The Perceptron’s neural network had only one layer, which isn’t enough to enable machine learning on any meaningful level: a single layer can only separate data with a straight line, so it can’t learn even simple patterns like “one or the other, but not both” (the XOR problem).

Many layers make machine learning work

That’s what Geoffrey Hinton discovered in the 1980s: Where Turing posited the idea and Rosenblatt created the first machines, Hinton pushed AI into its current iteration by theorizing that nature had already cracked neural network-based intelligence in the human brain. He and other researchers, like Yann LeCun and Yoshua Bengio, proved that neural networks built with multiple layers and a huge number of connections can enable machine learning.

Through the 1990s and 2000s, researchers would slowly prove neural networks’ potential. LeCun, for example, created a neural net that could recognize handwritten characters. But it was still slow going: While the theories were right on the money, computers weren’t powerful enough to handle the amount of data necessary to see AI’s full potential. Moore’s Law finds a way, of course, and around 2012, both hardware and data sets had advanced to the point that machine learning took off: Suddenly, researchers could train neural nets to do things they never could before, and we started to see AI in action in everything from smart assistants to self-driving cars.

And then, in late 2022, ChatGPT blew up, showing professionals, enthusiasts, and the general public alike what AI could really do, and we’ve been on a wild ride ever since. We don’t know what the future of AI actually has in store: All we can do is look at how far the tech has come, consider what we can do with it now, and imagine where we go from here.

Living with AI

To that end, take a look through our collection of articles all about living with AI. We define AI terms you need to know, walk you through building AI tools without needing to know how to code, talk about how to use AI responsibly for work, and discuss the ethics of generating AI art.