In the eighth installment of the series, we look at Geoffrey Hinton, who revived artificial intelligence research after it had been written off and made today's deep learning possible. If you haven't read the earlier entries in the series yet, I recommend starting there.
In 1969, Marvin Minsky, a leading figure of symbolic artificial intelligence, published "Perceptrons," a critique of Rosenblatt's perceptron. Artificial neural network and AI research then entered its first long and grueling winter. Geoffrey Everest Hinton (born 1947) is the man who rekindled the extinguished flame of neural network research and made deep learning what it is today.
Born into a family of renowned scholars
Geoffrey Hinton now lives in Canada, but he was born in Wimbledon, England. He comes from a family of famous scholars, the kind of pedigree every parent envies. His ancestors include Colin Clark, who pioneered the use of gross national product (GNP); George Boole, the founder of symbolic logic who created Boolean algebra; James Hinton, a doctor and writer; and George Everest, after whom Mount Everest is named.
Hinton became interested in the brain in high school, when a friend told him that studying the rat brain was fascinating. He went on to King's College, Cambridge, where he studied physiology and physics, and then earned a master's degree in philosophy and psychology. To study the working principles of the brain in earnest, he then sought out Professor Christopher Longuet-Higgins, a founder of cognitive science, at the University of Edinburgh and received his doctorate for work on artificial neural networks.
He then left the UK, where neural network research was not active at the time, and headed to the US to continue his work in a freer environment.
The beginning of full-scale neural network research
While in the US, Hinton met leading researchers such as John Hopfield and David Rumelhart. Rumelhart was a master of cognitive science, and Hopfield had opened new horizons in neural network research with the Hopfield network. Meeting them gave fresh impetus to Hinton's own research on neural networks.
In 1984, Hinton proposed the Boltzmann machine together with Hopfield's student Terry Sejnowski. It improves on the Hopfield network: a stochastic recurrent neural network that operates as a massively parallel computing device, can learn internal representations, and can solve a variety of combinatorial problems.
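To make the idea concrete, here is a minimal NumPy sketch of one learning step of a restricted Boltzmann machine, the two-layer variant Hinton later stacked into deep networks, trained with contrastive divergence. The layer sizes, input pattern, and learning rate are illustrative assumptions, not values from the original paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny restricted Boltzmann machine: 6 visible units, 3 hidden units
# (illustrative sizes, not from the 1984/1985 papers).
n_visible, n_hidden = 6, 3
W = rng.normal(0.0, 0.1, size=(n_visible, n_hidden))  # connection weights
b_v = np.zeros(n_visible)  # visible biases
b_h = np.zeros(n_hidden)   # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample(p):
    # Stochastic binary units: each fires with probability p.
    return (rng.random(p.shape) < p).astype(float)

# One step of contrastive divergence (CD-1) on a single binary pattern.
v0 = np.array([1., 0., 1., 1., 0., 0.])
p_h0 = sigmoid(v0 @ W + b_h)      # hidden probabilities given the data
h0 = sample(p_h0)                 # stochastic hidden sample
p_v1 = sigmoid(h0 @ W.T + b_v)    # probabilistic "reconstruction"
v1 = sample(p_v1)
p_h1 = sigmoid(v1 @ W + b_h)

lr = 0.1
W += lr * (np.outer(v0, p_h0) - np.outer(v1, p_h1))  # approximate gradient
b_v += lr * (v0 - v1)
b_h += lr * (p_h0 - p_h1)
```

The update nudges the weights so that the data pattern becomes more probable under the model than its own reconstruction, which is what lets the network learn an internal structure from examples.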
In 1986, he co-authored Rumelhart's landmark backpropagation paper, "Learning representations by back-propagating errors." It was the monumental work that ended the long winter of neural network and AI research: backpropagation combined with multilayer perceptrons solved the XOR problem that the single-layer perceptron could not.
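As a concrete illustration, the NumPy sketch below trains a tiny two-layer perceptron on XOR with backpropagation. The network size, learning rate, and iteration count are illustrative choices, not taken from the 1986 paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# XOR truth table: not linearly separable, so a single-layer
# perceptron cannot learn it, but a network with a hidden layer can.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# One hidden layer of 8 sigmoid units (illustrative size).
W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)

lr = 1.0
for _ in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the output error back layer by layer.
    d_out = (out - y) * out * (1.0 - out)
    d_h = (d_out @ W2.T) * h * (1.0 - h)
    # Gradient-descent weight updates.
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel()))  # should approximate XOR: 0, 1, 1, 0
```

The key step is the backward pass: the output error is multiplied back through the weights to assign blame to the hidden units, which is exactly what a single-layer perceptron has no mechanism for.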
The resurgence of neural networks and the rise of deep learning
Artificial neural network research in the 1990s went through another difficult period due to limitations such as the vanishing gradient problem. Support and investment dried up, and a second AI winter set in, during which many researchers left the field. Hinton, however, did not give up, and kept working on neural networks.
Hinton, who endured those difficult years and kept at his research, published the paper "A Fast Learning Algorithm for Deep Belief Nets" in 2006. It overcame the limitations of earlier neural networks and opened the era of full-scale deep learning with a new architecture, the deep belief network (DBN). A DBN eases the vanishing gradient problem through layer-by-layer pretraining of stacked restricted Boltzmann machines, and the related dropout technique, which deliberately omits units during training, addressed the problem of networks failing to generalize to new data.
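The "deliberately omitting units during training" idea can be sketched in a few lines of NumPy. This is the common inverted-dropout formulation; the function name and drop rate here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, p_drop, train=True):
    """Inverted dropout: randomly zero units during training and rescale
    the survivors, so nothing needs to change at test time."""
    if not train or p_drop == 0.0:
        return activations
    keep = rng.random(activations.shape) >= p_drop  # per-unit coin flips
    return activations * keep / (1.0 - p_drop)

h = np.ones((4, 10))                           # a batch of hidden activations
h_train = dropout(h, p_drop=0.5)               # about half the units zeroed
h_test = dropout(h, p_drop=0.5, train=False)   # identity at test time
```

Because each unit can vanish at any step, no unit can rely on the presence of any other, which discourages the co-adaptation that leads to overfitting.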
Opening the heyday of deep learning
Yann LeCun, who had worked under Hinton's guidance, completed the convolutional neural network (CNN) in 1989 and refined it together with Yoshua Bengio. Deep neural networks, built by combining stacked Boltzmann machines with the backpropagation algorithm, opened the heyday of deep learning. AlexNet, which applied these ideas, won the ILSVRC (ImageNet Large Scale Visual Recognition Challenge) by an overwhelming margin in 2012, and deep-architecture learning algorithms have been mainstream ever since.
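The core operation of a CNN, the convolution, fits in a few lines of NumPy. The toy image and edge-detecting kernel below are illustrative, not taken from LeCun's network.

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2-D cross-correlation: slide the kernel over the image and
    take a weighted sum at each position (the core CNN operation)."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy 5x5 image: dark left half, bright right half.
img = np.array([[0., 0., 1., 1., 1.]] * 5)
# A vertical-edge filter: responds where brightness changes left to right.
kernel = np.array([[-1., 0., 1.]] * 3)

print(conv2d(img, kernel))  # strongest response along the dark/bright edge
```

Because the same small kernel is reused at every position, a CNN detects a feature (here, a vertical edge) wherever it appears in the image, with far fewer weights than a fully connected network.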
Meanwhile, the Cold War arms buildup was still in full swing, and Hinton did not want his research used for military purposes. He therefore moved to the University of Toronto in Canada, and he remains active in research today as a researcher at Google.
In closing
Hinton's interest in how the brain works is said to have been sparked by a conversation with a high school friend, who told him, "The brain works like a hologram." Just as a hologram stores an image spread across the whole plate in patterns of laser light rather than in one spot, the brain spreads memories across a vast neural network instead of keeping them in one place.
For a time, symbolic AI, the approach of programming large numbers of rules so that computers could reason, dominated artificial intelligence research. As we now know, however, Hinton's conviction that AI should learn knowledge for itself eventually proved right. His role was decisive in letting neural networks and deep learning blossom after the long, harsh winters of AI.
Predicting the future in the rapidly changing field of artificial intelligence is very difficult. It is clear, however, that AI has already overtaken humans in various fields and will continue to do so. It will be interesting to see how far, and how fast, Hinton and his successors advance artificial intelligence.
References
[1] https://en.wikipedia.org/wiki/Geoffrey_Hinton
[2] http://wiki.hash.kr/index.php/제프리_힌튼
[3] Geoffrey Hinton, the father of deep learning who overcame everyone's dismissal http://www.techm.kr/news/articleView.html?idxno=4406
[4] [Korea's first exclusive interview] Geoffrey Hinton, professor at the University of Toronto, Canada, and godfather of 21st-century artificial intelligence https://www.joongang.co.kr/article/20382230#home
[5] The limits of ML; now pay attention to GLOM! ... Geoffrey Hinton's new challenge http://www.aitimes.com/news/articleView.html?idxno=138348
[6] AI pioneer Geoff Hinton: “Deep learning is going to be able to do everything” https://www-technologyreview-com.cdn.ampproject.org/c/s/www.technologyreview.com/2020/11/03/1011616/ai-godfather-geoffrey-hinton-deep-learning-will-do-everything/amp/
[7] A New Way for Machines to See, Taking Shape inToronto https://www.nytimes.com/2017/11/28/technology/artificial-intelligence-research-toronto.html
[8] [Video] Meet the Godfather of AI https://www.bnnbloomberg.ca/video/meet-the-godfather-of-ai~1404487
[9] "Don't throw away old ideas. Take them out again" https://contents.premium.naver.com/themiilk/business/contents/210528214914254NM
More from this series
[AI Story] Key Figures of Artificial Intelligence (1) Alan Turing
[AI Story] Key Figures of Artificial Intelligence (2) Walter Pitts, the founder of deep learning
[AI Story] Key Figures of Artificial Intelligence (3) John McCarthy, the founder of artificial intelligence
[AI Story] Key Figures of Artificial Intelligence (4) Rosenblatt, pioneer of deep learning
[AI Story] Key Figures of Artificial Intelligence (5) A fantastic duo, Simon and Newell
[AI Story] Key Figures of Artificial Intelligence (6) Humans are thinking machines, Marvin Minsky
[AI Story] Key Figures of Artificial Intelligence (7) The singularity is coming, Ray Kurzweil