Artificial Intelligence


The rapid advancement of computing in the 1940s, Norbert Wiener's text "Cybernetics: Or Control and Communication in the Animal and the Machine", Claude Shannon's information theory, Alan Turing's theory of computation, and Marvin Minsky's 1951 neural net machine, the SNARC, were among the factors that led to the establishment of academic research into artificial intelligence in the 1950s. After a landmark conference at Dartmouth College in 1956, several institutions, including MIT, received grants from the Advanced Research Projects Agency (later known as DARPA) to fund artificial intelligence research.

In 1960, both the Perceptron rule and the LMS (least mean squares) algorithm were published, two early rules for training adaptive elements. These were followed by several other foundational developments, such as Steinbuch's Learning Matrix; Widrow's Madaline Rule I (MRI); and Stark, Okajima and Whipple's mode-seeking technique. Developments in the 1970s included Fukushima's Cognitron and Neocognitron models, and Grossberg's Adaptive Resonance Theory (ART), comprising a number of hypotheses regarding the underlying principles of biological neural systems. The 1980s saw pioneering work on feature maps by Kohonen and Hopfield's application of the early work of Hebb to form Hopfield models, a class of recurrent networks. By 1990, the field had grown vast.
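
To give a sense of what "training an adaptive element" meant in this early work, the Python sketch below applies an LMS-style (Widrow-Hoff) update to a single linear unit. It is a minimal illustration only: the toy data, learning rate, and function names are assumptions for the example, not a reproduction of the original 1960 formulation.

import numpy as np

def lms_train(inputs, targets, learning_rate=0.01, epochs=50):
    # Train a single linear unit y = w . x + b with an LMS-style update.
    rng = np.random.default_rng(0)
    weights = rng.normal(scale=0.1, size=inputs.shape[1])
    bias = 0.0
    for _ in range(epochs):
        for x, target in zip(inputs, targets):
            output = np.dot(weights, x) + bias    # linear output of the adaptive element
            error = target - output               # error signal
            weights += learning_rate * error * x  # Widrow-Hoff correction proportional to error and input
            bias += learning_rate * error
    return weights, bias

# Illustrative use: fit the element to a small, hypothetical data set.
X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [0.5, 0.5]])
y = np.array([1.0, -1.0, 0.0, 0.0])
w, b = lms_train(X, y)
print("weights:", w, "bias:", b)

The same loop, with a hard-limiting (sign) output and the error computed on that thresholded output, would instead correspond to the Perceptron rule.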

Bernard Widrow, writing in the September 1990 Proceedings of the IEEE special two-part issue on neural networks, noted that the main problems neural computing attempts to solve are those that are generally ill-defined and those that require an enormous amount of processing. At the time, digital computers excelled at well-defined tasks, such as solving differential equations at high speed, but fell short at tasks such as image or video recognition.

The human brain is able to perceive depth, remember the faces and voices of people, and recognize the sights and sounds of objects within a fraction of a second. We are also able to watch television, listen to music, and read books while recalling relevant concepts and information, and can even analyze the medium cognitively as we absorb it. Real-time image and video processing requires an enormous amount of computational power, which the computers of 1990 largely lacked. Neural computing aimed to solve these problems as well as to gain a greater understanding of how the human brain functions. Biometrics, which includes fingerprint and retinal scanning, was one of the practical applications of pattern and image recognition made possible by neural computing in the 1990s. As computing power increased in the 2000s and 2010s, artificial intelligence tools started to see widespread use in a variety of home applications.

Further Reading