The question “Can a machine be made to think like a person?” has always been tied to strategy games. Games, with their clear rules and obvious winners and losers, were perfect proving grounds for early computer scientists, who could break them down into clearly defined problem sets. Audiences could marvel at the progress of man-versus-machine face-offs even if they didn’t fully understand the underlying technology.
The earliest AI game systems relied on a brute-force, top-down approach: programmers encoded every possible outcome into their systems as narrowly defined, rule-based criteria. In the 1950s, the earliest versions of neural networks and machine learning arrived, representing a shift toward bottom-up programming: systems designed to estimate the probability of various outcomes from experience rather than follow fixed rules. One early system, which simulated a rat solving a maze, was built from vacuum tubes, motors, and clutches. As the rat navigated, the machine learned, shifting the probabilities that governed its choices.
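The probability-shifting idea can be sketched in a few lines of modern code. In this toy version (the maze layout, weights, and update rule are illustrative inventions, not a model of the original hardware), a simulated rat keeps a weight for each direction at each junction, picks proportionally to those weights, and strengthens the choices that led it out of the maze:

```python
import random

MAZE = {  # junction -> {direction: next junction}; "exit" and "dead_end" are terminal
    "start": {"left": "dead_end", "right": "hall"},
    "hall": {"left": "exit", "right": "dead_end"},
}

# One weight per (junction, direction); all directions start equally likely.
weights = {j: {d: 1.0 for d in doors} for j, doors in MAZE.items()}

def run_trial(rng):
    """One pass through the maze; returns the (junction, direction) path taken
    and the terminal location reached."""
    path, junction = [], "start"
    while junction in MAZE:
        doors = MAZE[junction]
        total = sum(weights[junction][d] for d in doors)
        pick = rng.uniform(0, total)        # sample a door proportionally to weight
        for d in doors:
            pick -= weights[junction][d]
            if pick <= 0:
                break
        path.append((junction, d))
        junction = doors[d]
    return path, junction

rng = random.Random(0)
for _ in range(200):
    path, end = run_trial(rng)
    if end == "exit":                       # reward: reinforce the choices made
        for junction, d in path:
            weights[junction][d] += 0.5
```

After enough trials, the weights for "right at start" and "left at hall" come to dominate, and the rat takes the correct path most of the time: learning emerges purely from shifting probabilities, with no explicit map of the maze.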
IBM debuted its first AI Grand Challenge, a multi-year effort meant to push the limits of artificial intelligence, in 1997, when Deep Blue beat reigning world chess champion Garry Kasparov. The victory was seen as the final triumph of the top-down approach. In the last decade, advances in machine learning, deep learning, and natural language processing paved the way for AIs like Watson on Jeopardy!, AlphaGo at Go, and OpenAI Five at Dota 2: increasingly complex systems that still existed within the framework of games.