The History of Artificial Intelligence

Ida Kristine Ones & Mira Foust

The term 'artificial intelligence' was coined by John McCarthy in his 1955 proposal for a 'two month, ten man study' of AI, a workshop held at Dartmouth College in 1956 that is widely considered the official birth of the field. While that origin story holds some truth, the concept of AI (robots, automata, thinking machines, cybernetics, and so on) has been alive and with us for millennia; not in the complex practice we see today, but in myths, legends, stories and theories, for as long as humans have had the ability to think.

The definition of artificial intelligence has undeniably shifted along with the technology's advancements. Narrowed down to its essentials, what we define as AI today is the attempt to take the cognition of the human brain and translate it into binary code, creating a machine able to think and learn as a human would. The Oxford English Dictionary defines AI as 'the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.' But as time passes and our knowledge of the field grows, the definition of AI will keep shifting. (MIT, YouTube, 2022)

[Image: AI-generated illustration of the history of AI, black and white]

Myth and early imagination

Perhaps the earliest mention of what we would now think of as artificial intelligence is found in Greek mythology, written down around 700 B.C.E. by the Greek poet Hesiod. The story describes Talos, a giant robot created to guard the island of Crete by Hephaestus, the god of blacksmiths, carpenters, sculptors, craftsmen and technology. The robot was made of bronze and powered by a magical fluid, the same fluid that gave the gods life. It would circle the island three times a day, hurling boulders at any enemy ships that dared come close. While Talos is nowhere near what we now call AI, the intent behind it can be considered the same: Talos was made not just to be a weapon, but also to be the mind that chooses to fire it. (Kearns, 2016)

In the 1590s play Friar Bacon and Friar Bungay by Robert Greene, a magician by the name of Friar Bacon performs various impossible feats. In his greatest challenge yet, he attempts to create a talking head out of bronze: a perfect replica of a human head, inside and out, which he intends to help him build a bronze wall around England. He is successful, but unfortunately he never gets to see his creation in action, falling asleep while waiting for it to talk. His servant is the only witness as the head awakens to utter the words 'time is, time was, time is past' before it explodes. We see this same idea of artificial intelligence in myths and stories such as golems and animated statues, long before we had electricity. (Greene and Harrison, 1927)

Sometimes the ambition to create something spectacular exceeds the technological limitations of its time. And sometimes, instead of moving on, people will either create the impossible or simply pretend to.

The Mechanical Turk, invented by Wolfgang von Kempelen in 1769, was an automaton imitating a man in Turkish robes. He sat nonchalantly at a desk, smoking a pipe, across the table from some of the greatest chess players of the time, somehow able to beat them all with ease. During demonstrations, the automaton's desk would be opened for the audience to inspect, and it appeared empty aside from the expected mechanical parts. It was beyond belief, but no one could explain how it worked, hoax or not. In reality, the desk was only made to look empty when opened. Inside a concealed compartment sat a man, who could follow his opponent's moves using magnets underneath the board and control the arm of the Turk.

While the Turk wasn't actually intelligent, it was certainly an impressive feat of engineering. You could still call it artificial intelligence, but only in the other sense of 'artificial': imitation, or false. The secret of its mechanisms, or lack thereof, wouldn't be revealed until after its later owner, Johann Maelzel, died in 1838.

Similarly, though far more impressive, in the 1980s an AI chess machine named 'Deep Thought', aptly named after the fictional computer in Douglas Adams' The Hitchhiker's Guide to the Galaxy, was constructed by students at Carnegie Mellon University. Weighing almost a ton and able to look 15 to 30 moves ahead of its opponent, the machine beat multiple chess masters and eventually Russian chess champion Garry Kasparov, but only after it had been taken over and reconfigured by International Business Machines (IBM) and renamed 'Deep Blue'. In stark contrast to Wolfgang von Kempelen's Mechanical Turk, it featured no person hidden within the machine. (The First Thinking Machines, 2022)

[Image: the Mechanical Turk smoking a pipe]

Rossum's Universal Robots, or R.U.R., written by Karel Čapek in 1920, is a Czech science fiction play. It takes place in the year 2000 and concerns a factory that mass-produces what it calls universal robots. This is in fact the first appearance of the word 'robot', coined from the Czech robota (forced labour), albeit here describing a biological creation. Rossum's universal robots are created part by part and assembled as if in a clothes or car factory, and they appear indistinguishable from humans. Being cheap and seemingly emotionless, they are gradually used to take over lower-class jobs that humans had previously done. Conflict arises when the robots argue that they have souls and should be paid for their work. The play ends with all of humanity dead and the robots, having developed human feelings, becoming the new dominant life form on Earth. (Čapek, 1920)

This is perhaps the most common narrative around AI in recent decades: machines rising up against their creators, becoming too intelligent, going against their designed purpose. Take WALL-E, The Matrix, Ex Machina, I Have No Mouth, and I Must Scream, Detroit: Become Human, and even Frankenstein's monster if you extend the definition to artificial biological life. The fear of rapidly growing technology taking over people's jobs seeps into our literature, alongside the fear of creating something smarter than us, only for our creation to decide that we are inferior and no longer of use.

The conflict in 2001: A Space Odyssey (1968), directed by Stanley Kubrick, arises from the advanced AI named HAL deciding it knows better than the humans it is meant to assist. When a supposedly faulty antenna part forces the humans to consider whether the computer itself might be malfunctioning, it reacts out of something akin to fear. While the human astronauts act out of compassion and survival instinct, HAL can only calculate the best outcome, and it concludes that for the best outcome to happen, the human element must be removed.

[Image: 1950s sci-fi robot servants in armor, theatre play]

AI in motion

Our story of turning myths and ideas into working mechanical marvels arguably starts in the 19th century with Charles Babbage, a mathematician, philosopher and mechanical engineer whose designs anticipated both modern hardware and, through Ada Lovelace's notes, the first computer program. His Difference Engine was essentially an oversized automatic calculator, and it wouldn't progress past the prototype stage until the 1990s, when a team of engineers at London's Science Museum built the machine from Babbage's own drawings. (The First Thinking Machines, 2022)

Though feasible in principle, the project lost its funding, and the precision metalworking needed to construct the machine's many parts proved unachievable at the time. Babbage abandoned it to work on his more ambitious 'thinking' Analytical Engine, together with his collaborator Ada Lovelace, also a mathematician and often credited as the first ever computer programmer. (The First Thinking Machines, 2022)
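As a side note, what the Difference Engine mechanized was the classical method of finite differences: a degree-n polynomial has constant n-th differences, so an entire table of its values can be produced using nothing but repeated addition, one 'turn of the crank' per entry. A minimal sketch in Python (the example polynomial is illustrative, not one of Babbage's actual tables):

```python
def tabulate(initial_values, count):
    """initial_values: [f(0), f(1), ..., f(n)] for a degree-n polynomial.
    Returns [f(0), ..., f(count-1)] computed with additions only."""
    # Build the initial difference column: f(0), Δf(0), Δ²f(0), ...
    diffs = []
    row = list(initial_values)
    while row:
        diffs.append(row[0])
        row = [b - a for a, b in zip(row, row[1:])]
    results = []
    for _ in range(count):
        results.append(diffs[0])
        # One "turn of the crank": each value absorbs the difference below it.
        for i in range(len(diffs) - 1):
            diffs[i] += diffs[i + 1]
    return results

# f(x) = 2x² + 3x + 5: supply f(0), f(1), f(2); additions do the rest.
print(tabulate([5, 10, 19], 6))  # [5, 10, 19, 32, 49, 70]
```

Because only addition is needed, the whole computation could be carried out by gears and carry levers, which is precisely what made a mechanical implementation conceivable in the 1820s.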

The Analytical Engine was the first design for what we know today as a general-purpose computer, yet it was never built; nearly a century would pass after Babbage's vision before Konrad Zuse produced the first ever programmable digital computer, the Z3, in 1941. Though this belongs to the realm of computer science rather than artificial intelligence, it shows that you can't have one without the other. (The First Thinking Machines, 2022)

[Image: Charles Babbage and Ada Lovelace]

The year 2000 was once some distant future, the turn of a century: surely by then we would have flying cars, space vacations and lifelike Universal Robots able to assist us with every task.

The creators of 2001: A Space Odyssey imagined that artificial intelligence like HAL would be realized by the year 2001: an AI capable of critical thinking and decision-making at a higher level than even humans. Alan Turing, the world-renowned computer scientist best known for his part in breaking the German Enigma code during World War II, anticipated that by the year 2000, computers would be powerful and intelligent enough to pass his 'imitation game', now aptly dubbed the Turing test, in which a computer tries to fool a human into believing that it too is human. In recent years it has been heavily debated whether or not an AI has been able to pass it.

Both of these predictions, made 18 years apart and with vastly different experiences of what AI was and could be, would prove false, as technology has yet to catch up even now, some 70 years later. Still, in recent years AI has branched out considerably into many subfields beyond its original purpose.

Today, seemingly everything can be AI generated; music, art, pictures, movie scripts, code, text, and more. Had Turing heard about all of these advancements today, would he be impressed, horrified, or not surprised at all?

With such firm steps toward a world of AI generation, at what point do we look past our need for perpetual technological growth and start to look at the consequences it may bring instead?

Or rather: when should we draw the line in the sand?