Artificial Intelligence (AI) seems to be the next big thing. Nowadays, it is impossible to avoid the subject, whether we are talking with clients, watching the news, or simply glancing at the supercomputers lodged in our pockets. AI is quickly moving into our everyday lives and becoming mainstream. But just how ‘new’ is this latest technology? It is much older than you might imagine. Long before your connected home could order groceries by drone delivery, humanity was already fantasizing and talking about mechanical and artificial beings, as shown in ancient Greek and Egyptian myths. In this video, we continue our journey through time, starting in the 1950s. This is part 2 of a series of 3; check out part 1 for more information.

[Embedded YouTube video]

1950s – early 1990s

The official birth of AI as a field

In 1956, the Dartmouth Conference marked what is commonly regarded as the official birth of AI as an academic field. As the first high-level programming languages, such as FORTRAN, LISP, and COBOL, were invented, enthusiasm and hope were at an all-time high.

Frank Rosenblatt’s Perceptron and Hubel and Wiesel’s experiments on the visual cortex of cats suggested that machines could one day mimic the human brain. Joseph Weizenbaum’s chatterbot ELIZA (1966) became one of the first programs to convincingly simulate human conversation, fooling some users into believing they were talking to a real person. In the early 1970s, Japan unveiled WABOT-1, the world’s first full-scale intelligent humanoid robot.

Hitting the glass ceiling

Though the overall state of the economy in the 1970s did not help, many of the expectations built up over the previous years hit the glass ceiling, and hit it hard. Unable to deliver on their promises of superhuman intelligence, AI researchers saw their funding all but cut off, in what would be remembered as the first of two AI “winters”.

In the 1970s, though it lost popularity and much of its financial support, AI gained something arguably far more valuable. More effort went into improving programming languages that had become too rigid for their own good. This is when languages like C, Prolog, SQL, and Pascal were born. The fact that these languages are not only still alive today but are cornerstones of modern-day programming speaks for itself. Winter had come, but the spring that followed would last a long time.

AI as a cost-saving technology

In the 1980s, researchers scaled back their ambitions and figured that, instead of an all-knowing AI, they could build very efficient expert systems: systems that performed incredibly well in specific fields like scheduling, stock management, or order processing. This provided enormous savings for the world’s largest corporations. AI was no longer a theoretical utopia for a lone group of researchers; it was a cost-saving technology prized by the business world. And money followed.
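
To make the idea concrete, here is a minimal, hypothetical sketch of the rule-based approach these expert systems relied on. It is written in Python purely for illustration (systems of the era were typically built in LISP or Prolog), and the stock-management facts and rules below are invented for the example:

# Toy rule-based "expert system": domain knowledge captured as explicit
# if-then rules, fired by a small engine against the current facts.

facts = {"stock_level": 12, "reorder_threshold": 20, "supplier_lead_days": 5}

rules = [
    # Each rule is (description, condition, action).
    ("reorder when stock falls below the threshold",
     lambda f: f["stock_level"] < f["reorder_threshold"],
     lambda f: print("ACTION: place a purchase order")),
    ("flag slow suppliers for the planner",
     lambda f: f["supplier_lead_days"] > 3,
     lambda f: print("ACTION: warn the planner about the lead time")),
]

def run_engine(facts, rules):
    # Fire every rule whose condition holds for the current facts.
    for description, condition, action in rules:
        if condition(facts):
            print("Rule fired:", description)
            action(facts)

run_engine(facts, rules)

The point is not the code itself but the design: all the “intelligence” lives in hand-written rules contributed by domain experts, which is exactly what made these systems so effective in narrow fields and so hard to scale beyond them.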

Companies started to focus on funding and creating infrastructures capable of maintaining these systems. Better machines were built; languages like C++, Perl, and even MATLAB, which allowed for large-scale implementation and maintainability, made their debuts; and new techniques capable of using data and logic to automate processes were conceived.

The second AI winter

The rise in enthusiasm did not last long. Corporations were spending large sums of money on AI research and on machines built specifically for these purposes, and it was no longer worth it. The programs proved too narrow and too difficult to improve, and the dedicated machines were outdone in speed and power by Apple’s and IBM’s newest desktop computers.

Once again, the goals set at the beginning of the decade were not met, and a sudden loss of faith in the field as a whole took hold. It was time for the second AI winter.

Read part 3