Artificial Intelligence (AI) seems to be the next big thing. Nowadays, it is impossible to avoid the subject, whether we are talking with clients, watching the news, or simply glancing at the supercomputers lodged in our pockets. AI is quickly moving into our everyday lives and becoming mainstream. But just how ‘new’ is this technology? It is much older than you might imagine. Long before your connected home could order groceries by drone delivery, humanity was already fantasizing about mechanical and artificial beings, as ancient Greek and Egyptian myths show.
Let us take you on a journey through the evolution and history of artificial intelligence, starting with the first computational machines: calculators!
17th century – 1950s
The first computational machines: calculators!
In the 17th century, scientists and philosophers like Hobbes, Leibniz, and Descartes proposed that all human reasoning could be reduced to computation, and therefore be carried out by a machine. This hypothesis has driven AI research to this day, and it led to the first computational machines: calculators!
However, it still took more than 200 years for the first calculators to be sold to the general public, further democratizing the automation of reasoning. Around the same time, Charles Babbage and Ada Lovelace were laying the theoretical groundwork for the computers we know today.
AI as a threat in popular culture
After the Industrial Revolution finished transforming the world as it was once known, artificially intelligent beings made a comeback in culture, now portrayed as a menace to humans.
Frankenstein’s monster turned against its creator, Rossum’s Universal Robots (1920) rebelled and put an end to the human race, and robots were used as weapons of war in Master of the World (1934). Isaac Asimov then came to the rescue by proposing his “Three Laws of Robotics“, a nod to Newton’s laws of motion, as a safeguard against a robot takeover.
A wave of optimism
After Alan Turing “broke the code” during World War II with the help of some of humanity’s first large-scale computing machines, a wave of optimism swept through the planet. Machines and computers could not only be useful, they could save lives. In 1950, the same Alan Turing proposed his now-famous namesake test, seeking to provide a practical benchmark for machine intelligence.
A couple of years later, Arthur Samuel of IBM, a forefather of machine learning, created a checkers program capable of learning and improving on its own.