Over the past century, artificial intelligence (AI) has gone from fantastical science fiction to reality.
The concept of artificial intelligence took root in the science-fiction cinema of the first half of the twentieth century: The Wizard of Oz gave us the “heartless” Tin Man, while Metropolis featured a humanoid robot that impersonated the character of Maria. Through these films, the idea of AI was planted in the minds of a generation of scientists and mathematicians. One of them was Alan Turing, who published the paper "Computing Machinery and Intelligence" in 1950. However, Turing’s ideas could not yet be put into practice: computers of the era could execute commands but not store them, and computing time was exorbitantly expensive.
Real progress came when Allen Newell, Cliff Shaw, and Herbert Simon created a proof of concept called the Logic Theorist, a program designed to mimic the problem-solving skills of a human. In 1956, John McCarthy and Marvin Minsky hosted the Dartmouth Summer Research Project on Artificial Intelligence, where the Logic Theorist was presented. While the conference did not produce an organized effort to develop AI technology, it did forge a consensus that AI was achievable, catalyzing the next two decades of research in the field.
Between 1957 and 1974, AI advanced at rocket speed. Computers became cheaper, faster, and able to store more information; machine learning algorithms were developed and improved; and programs such as the General Problem Solver and ELIZA took large first steps toward machine problem solving and machine interpretation of human language. Government agencies such as DARPA began funding AI research, with high expectations for a machine that could transcribe and translate spoken language.
Despite these advances, obstacles inevitably sprang up. Chief among them was a lack of computational power: machine learning requires a computer to learn from copious amounts of data, and the machines of the day could neither store nor process that much information. As expectations and funding fell, research slowed for the next ten years.
In the 1980s, renewed funding and new algorithms led to a resurgence in AI research. Researchers such as John Hopfield and David Rumelhart developed and popularized deep learning techniques that allowed computers to learn from experience, and Edward Feigenbaum introduced expert systems that acted as consultants in specialized domains. While the Japanese government’s heavily funded Fifth Generation Computer Systems project fell short of its goals, AI continued to thrive: in 1997, reigning world chess champion Garry Kasparov was defeated by IBM’s Deep Blue. Now that we live in the age of big data, innovations such as emotion detection tools, driverless cars, and conversational expert systems may soon become ubiquitous.
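For the curious, here is a purely illustrative sketch (my own toy example, not taken from the sources below) of the kind of “learning from experience” that Hopfield’s networks made famous: the network memorizes a pattern with a simple Hebbian rule and then reconstructs it from a corrupted copy.

```python
import numpy as np

def train(patterns):
    """Hebbian learning: each stored pattern strengthens connections between co-active units."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)          # no self-connections
    return W / len(patterns)

def recall(W, state, steps=10):
    """Repeatedly update the state until it settles into a stored pattern (an attractor)."""
    s = state.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1               # break ties toward +1
    return s

# Store one 8-unit binary pattern, then recover it from a copy with two flipped bits.
pattern = np.array([[1, -1, 1, -1, 1, -1, 1, -1]])
W = train(pattern)

noisy = pattern[0].copy()
noisy[:2] *= -1                     # corrupt the "memory"
print(recall(W, noisy))             # settles back to the original stored pattern
```

Even this tiny example captures the shift that excited researchers at the time: instead of being told the answer, the machine recovers it from what it has already learned.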
Food for thought: what are some ethical implications of emerging AI technology? Should AI systems be allowed to breach user privacy in order to learn from our data? How can training data be improved to combat algorithmic bias? What do you expect AI to accomplish in the future?
Sources:
http://sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence/
https://courses.cs.washington.edu/courses/csep590/06au/projects/history-ai.pdf