Humans have long wanted to create beings in their own likeness. Golems appear in ancient mythology, the homunculus in medieval alchemy, and androids in science-fiction films. Yet even today, with all our advances in science and technology, this remains a long-held dream that has not yet been realized.
The goal of artificial intelligence grows out of this same long-standing human dream.
Wikipedia defines artificial intelligence as "a field of computer science that aims to artificially implement human learning, reasoning, and perception abilities...". Definitions vary slightly by source, but in summary, the purpose of artificial intelligence is to realize human-like intelligence with machines.
The history of artificial intelligence research is longer than you might expect. Because it is an advanced technology, it seems to have appeared only recently, but in fact research began even before computers were developed. It is true that the field has not produced as many results as initially imagined, but it is now developing at such a rapid pace that the future is hard to predict.
Starting with this post, I would like to look back at the decisive moments from the advent of artificial intelligence to today, and also think about the future of working with artificial intelligence.
1943, The Origins of Deep Learning
In 1943, Walter Pitts and Warren McCulloch published the paper 'A Logical Calculus of the Ideas Immanent in Nervous Activity'. This marked the advent of the artificial neural network model, the origin of deep learning. The two analyzed networks of artificial neurons and showed how such networks can perform simple logical functions.
The following is the opening of their paper's abstract.
Because of the "all-or-none" character of nervous activity, neural events and the relations among them can be treated by means of propositional logic. The behavior of every network can be described in these terms... For any logical expression satisfying certain conditions, one can find a network behaving in the fashion it describes.
Their research is based on the idea that the actual brain is an electrical network made up of neurons. In the paper, they explained the function of a neuron with a binary logic model consisting of 0s and 1s. This was the first logical model of the operation of human neurons, and it evolved into today's deep learning research.
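As an illustration, a McCulloch–Pitts-style neuron can be sketched in a few lines of Python. This is a minimal sketch, not the paper's original notation: the neuron sums its binary inputs and fires (outputs 1) only if the sum reaches a threshold. The specific weight and threshold choices below, which realize AND and OR gates, are my own.

```python
def mcp_neuron(inputs, weights, threshold):
    """Fire (output 1) iff the weighted sum of binary inputs reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# AND gate: fires only when both inputs are 1 (sum must reach 2)
def AND(x1, x2):
    return mcp_neuron([x1, x2], weights=[1, 1], threshold=2)

# OR gate: fires when at least one input is 1 (sum must reach 1)
def OR(x1, x2):
    return mcp_neuron([x1, x2], weights=[1, 1], threshold=1)
```

The "all-or-nothing" character of the model is visible here: the output is always exactly 0 or 1, with no intermediate values.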
1950, Turing's thinking machine proposal
In 1950, Alan Turing, known as the father of computer science, published a paper titled 'Computing Machinery and Intelligence'. The paper made a revolutionary contribution to the history of artificial intelligence, analyzing the possibility of implementing thinking machines. Here, along with the famous question "Can machines think?", he presents the possibilities of intelligent machines.
In this paper, Turing also proposed the famous experiment known today as the Turing Test. The idea is that a machine's intelligence can be judged by testing how similar its responses are to a human's: if a machine's responses are indistinguishable from those of a human being, then the machine can be said to think like a human.
As is well known, Alan Turing and his research had a huge impact on artificial intelligence today.
1956, the advent of artificial intelligence
The term artificial intelligence first appeared at the Dartmouth conference hosted by John McCarthy in 1956. This can be said to be the moment artificial intelligence (AI) was born, with its name, its goals, and its key figures all appearing together. After this conference, artificial intelligence began to be researched actively and in earnest.
The scientists gathered at this conference discussed ways to refine Turing's "thinking machine" and implement it as a system with logic and form. They also showed great trust and confidence in the possibilities of artificial intelligence. In fact, predictions made at the conference, such as that computers would become world chess champions and compose aesthetically valuable music, have since come true.
1958, the birth of the perceptron
In 1958, Frank Rosenblatt devised the perceptron. The idea was to let computers make inferences by learning with neural networks, just as the human brain does with its many networks of neurons. By adding the concept of a "weight," he modeled the kind of learning that occurs in the human brain. Artificial neural networks, originally conceived for logical computation, could now be applied to more diverse tasks such as image recognition.
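The key difference from the McCulloch–Pitts neuron is that the weights are learned rather than fixed. The sketch below is a minimal illustration of the classic error-driven perceptron learning rule, assuming a simple threshold unit; the function names, learning rate, and training data are my own choices, not taken from Rosenblatt's paper.

```python
def predict(weights, bias, x):
    """Threshold unit: fire (1) iff the weighted sum plus bias is non-negative."""
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias >= 0 else 0

def train_perceptron(data, epochs=20, lr=0.1):
    """Perceptron learning rule: nudge weights by lr * error * input."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in data:
            error = target - predict(weights, bias, x)  # -1, 0, or +1
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# A linearly separable problem: the logical OR function
or_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(or_data)
```

After training, the learned weights classify all four OR examples correctly. This same rule fails on problems that are not linearly separable, such as XOR, which foreshadows the limits discussed below.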
The perceptron could even distinguish men from women in photographs. This raised expectations that the age of artificial intelligence would soon arrive. At the time, The New York Times published an article about it, drawing worldwide interest.
So far, we have taken a brief look at the period that can be seen as the beginning of artificial intelligence. The surprise people felt at the time was enough to give them optimistic expectations for the field. However, in the 1970s, the perceptron hit its limits, and artificial intelligence research entered a difficult period.
In the next post, we will follow the story of this period, known as the first AI winter, and the developments that came after it.
References
[1] https://ko.wikipedia.org/wiki/인공지능#역사
[2] https://terms.naver.com/entry.naver?docId=1691762&cid=42171&categoryId=42187
[3] Research on the history, classification, and development direction of artificial intelligence — Cho Minho http://koreascience.or.kr/article/JAKO202113254541050.pdf
[4] http://www.aistudy.com/history/history.htm
[5] Perceptron: The Beginning of Artificial Intelligence https://horizon.kias.re.kr/17443/
Recommended related content
[AI Story] Key Figures of Artificial Intelligence (1) Alan Turing