In this second installment of the series, we continue to look back at the history of artificial intelligence after the perceptron. If you haven't read the earlier posts in the series yet, I recommend checking them out first.
[AI Story] Crucial Moments of Artificial Intelligence 1
1969, the XOR problem and the first winter of AI
In 1969, Marvin Minsky and Seymour Papert presented a mathematical proof of the perceptron's problems and limitations. A perceptron can handle problems that are linearly separable, such as AND or OR, but it cannot be applied to the XOR problem, where the data cannot be separated by a straight line. Strictly speaking, this only proved a limitation of the single-layer perceptron**, but at the time it was accepted as revealing the limitations of artificial neural networks themselves.
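To make the linear-separability point concrete, here is a minimal sketch (plain NumPy; train_perceptron is a hypothetical helper written for this post, not code from Minsky and Papert) that trains a classic single-layer perceptron on AND, OR, and XOR. It fits the first two, but no setting of the weights can classify all four XOR points correctly:

```python
import numpy as np

def train_perceptron(X, y, epochs=100, lr=0.1):
    """Classic perceptron learning rule with a bias term."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            err = yi - pred            # 0 if correct, +1/-1 if wrong
            w += lr * err * xi         # move the separating line
            b += lr * err
    return (X @ w + b > 0).astype(int)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
for name, y in [("AND", [0, 0, 0, 1]), ("OR", [0, 1, 1, 1]), ("XOR", [0, 1, 1, 0])]:
    print(name, "solved:", np.array_equal(train_perceptron(X, np.array(y)), y))
# AND solved: True / OR solved: True / XOR solved: False
```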
After that came a dark period known as the first AI winter. Funding for AI research was cut off, and skeptical views about the possibilities of AI spread. Pessimistic outlooks took hold, along the lines of "artificial intelligence cannot handle the combinatorial explosion (intractability) of real problems"; large-scale funding dried up, and many research projects were cancelled.
1986, the multi-layer perceptron and the revival of AI
In 1986, Geoffrey Hinton demonstrated the multi-layer perceptron together with the back-propagation algorithm. These solved the XOR problem that had brought on the AI winter, and after a long period of darkness, artificial neural networks and AI research revived.
The traditional single-layer perceptron could not solve the XOR problem, but the multi-layer perceptron and the backpropagation algorithm provided the solution. The multi-layer perceptron solves XOR by adding an intermediate layer, called a hidden layer, between input and output; the backpropagation algorithm makes such networks trainable by propagating the error backwards after the feed-forward pass and using it to optimize the weights.
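As a rough illustration of both ideas (a minimal NumPy sketch in modern style, not a reconstruction of the 1986 work): a network with one hidden layer is trained by running the feed-forward pass, sending the error backwards through the layers, and adjusting the weights by gradient descent. It learns XOR.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# XOR data: not linearly separable
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])

# One hidden layer (4 units) between input and output
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

lr = 1.0
for _ in range(5000):
    # Feed-forward pass
    h = sigmoid(X @ W1 + b1)              # hidden-layer activations
    out = sigmoid(h @ W2 + b2)            # network output

    # Backpropagation: send the error back, layer by layer
    d_out = (out - y) * out * (1 - out)   # output-layer delta
    d_h = (d_out @ W2.T) * h * (1 - h)    # hidden-layer delta

    # Optimize the weights with the resulting gradients
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))           # approaches [0, 1, 1, 0]
```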
However, this was not an innovation that Hinton achieved alone overnight. It was the result of the accumulated efforts of researchers who had kept working through the difficult years. Research on backpropagation had in fact begun in the 1960s, but it was buried in the stagnant atmosphere of the field until Hinton brought it back to light.
As a result, AI research was revitalized for a time, and significant progress was achieved.
The 1990s, the vanishing gradient problem and AI's second winter
With the multi-layer perceptron and backpropagation, artificial neural networks attracted attention again. By themselves, however, these techniques limited the range of problems neural networks could handle. Processing large, complex data requires stacking many hidden layers, and at that depth the multi-layer perceptron and backpropagation began to show their limits.
Then the vanishing gradient problem****** appeared, and it became the defining issue that brought about the second AI winter. As the number of layers in a neural network grows, the error signal arriving at the early layers becomes vanishingly small, so the weights of the input-side layers, which play an important role in learning, are never properly adjusted.
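The mechanism is easy to see: backpropagation multiplies in one derivative factor per layer, and a sigmoid's derivative is at most 0.25, so the signal reaching the early layers shrinks roughly exponentially with depth. Here is a minimal sketch of the effect (illustrative code written for this post, not taken from the lecture linked below):

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def first_layer_gradient(n_layers, width=16):
    """Forward through n_layers sigmoid layers, then backpropagate a
    unit error and return the gradient magnitude at the first layer."""
    Ws = [rng.normal(size=(width, width)) / np.sqrt(width)
          for _ in range(n_layers)]
    # Feed-forward pass, keeping each layer's activation
    a, acts = rng.normal(size=width), []
    for W in Ws:
        a = sigmoid(W @ a)
        acts.append(a)
    # Backward pass: each layer contributes a factor sigmoid' <= 0.25
    grad = np.ones(width)
    for W, a in zip(reversed(Ws), reversed(acts)):
        grad = W.T @ (grad * a * (1 - a))
    return np.abs(grad).mean()

for depth in [2, 5, 10, 20]:
    print(f"{depth:2d} layers -> mean |grad| at layer 1: "
          f"{first_layer_gradient(depth):.1e}")
# The gradient reaching the first layer shrinks rapidly as depth grows.
```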
In the end, hard times returned because of several limitations at once: deep multi-layer networks could not be trained, and the computers of the era struggled with the complex computation involved. AI research entered its second dark period. Research funding was again cut drastically, and the related industries stagnated as well.
This time, we looked back at the ups and downs of artificial intelligence from the first winter to the second. In the next installment, I will introduce the story of how artificial intelligence has continued to evolve since the advent of deep learning and recently reached its heyday.
** For details, see the following article: "Perceptron: The Beginning of Artificial Intelligence" https://horizon.kias.re.kr/17443/
****** For more details, see the following video: "TensorFlow Deep Learning Lecture 12-1 - The Vanishing Gradient Problem in Artificial Neural Networks" https://www.youtube.com/watch?v=BwkquF9QQLU&t=166s
References
[1] https://ko.wikipedia.org/wiki/인공지능#역사
[2] https://terms.naver.com/entry.naver?docId=1691762&cid=42171&categoryId=42187
[3] Cho Minho, "Research on the History, Classification, and Development Direction of Artificial Intelligence," http://koreascience.or.kr/article/JAKO202113254541050.pdf
[4] http://www.aistudy.com/history/history.htm
[5] "Perceptron: The Beginning of Artificial Intelligence," https://horizon.kias.re.kr/17443/
[6] "How Has Artificial Intelligence Developed? The History of Artificial Intelligence," https://www.samsungsds.com/kr/insights/091517_CX_CVP3.html
[7] "[AI Special ②] The Beginning and Development of Artificial Intelligence: the Dark Age and the 'AI Winter'," http://scimonitors.com/ai기획②-인공지능-발달과정-튜링부터-구글-알파고-ibm/
[8] "Introduction and Development Trends of Artificial Neural Networks," https://www.koreascience.or.kr/article/JAKO201724655833983.pdf
Related content
[AI Story] Crucial Moments of Artificial Intelligence 1