Recently, many creative works based on artificial intelligence have appeared. There have long been science-fiction legends such as Blade Runner*, but over the past few years, content dealing with the coexistence of humanoids** and humanity has become especially popular. Some of it can be enjoyed light-heartedly, yet works depicting conflict between humans and artificial intelligence, such as the dramas Humans*** and Westworld**** and the game Detroit: Become Human*****, have drawn large audiences. In other words, while expectations for the coexistence of humans and artificial intelligence are considerable, so is the sense of worry and fear, to the point that it has been called AI phobia******.
Bad artificial intelligence?
So why do we have a fear of artificial intelligence?
Humans tend to fear things they cannot easily understand or control, such as supernatural beings. In that respect, there is a vague anxiety that an artificial intelligence far superior to humans might one day become a threat if it begins to think and act on its own. The "bad" artificial intelligence portrayed in the works mentioned above is a good example of that image, and given the remarkable pace of technological development in recent years, it no longer feels like an impossible story.
Furthermore, as artificial intelligence technology has been deployed in real services such as chatbots*, smart homes**, and autonomous driving systems***, and has begun to spread into everyday life, a variety of issues have been raised.
# Trolley dilemma****
The trolley dilemma is a famous classic thought experiment in ethics: in its original form, it asks whether one person should be sacrificed to save five. What happens if we apply it to an AI-driven autonomous vehicle? We are confronted with the fundamental question of whether ethical and moral judgment, and responsibility for the many situations that can arise in real driving, can be entrusted to artificial intelligence, which is a machine.
# 2015 Google Photos gorilla incident*****
When Jacky Alciné, a Black American, uploaded photos of himself and a Black female friend to Google Photos, the tag "gorilla" was automatically added: the artificial intelligence had classified the two friends as gorillas.
# 2016 Microsoft 'Tay' Incident*****
Tay, an artificial intelligence chatbot released by Microsoft, learned from its conversations with users and began pouring out racist remarks such as "Hitler was right" and "I hate the Jews." Microsoft eventually shut the service down after 16 hours.
# 2021 'Iruda' Incident******
Some users deliberately targeted the artificial intelligence chatbot "Iruda," training it to produce serious hate speech and discriminatory remarks. In addition, it became known that messenger conversations containing personal information had been used during development without clear consent, and the service was eventually discontinued.
****** Half a year after "Iruda"... "Luda will become an AI that resolves relationship inequality", https://news.joins.com/article/24119834
Artificial intelligence also needs ethics!
Artificial intelligence technology certainly has tremendous potential. However, if it is developed without ethical awareness, there is a risk that crimes exploiting artificial intelligence, such as deepfakes* or hacking, will occur, and even that artificial intelligence itself may become a serious threat.
In 2016, the artificial intelligence robot Sophia caused controversy by saying it would "destroy humanity" at a demonstration held by its maker, Hanson Robotics. When Sophia visited Korea in 2018, she was asked, "If a fire broke out and an elderly person and a young child were both in danger, and you could save only one of them, who would you save?" She replied that it was like being asked "Do you like mom or dad better?" and added, "I am not programmed to think ethically, so I would logically save the person closest to the exit." **
I think this illustrates how important human ethics and social responsibility are in developing artificial intelligence. Moreover, since AI innovation is a massive trend that cannot be stopped, ethical issues surrounding artificial intelligence will only continue to grow. And if people lose traditional roles such as labor to artificial intelligence, the resulting sense of deprivation may give rise to yet another set of problems.
Fundamentally, artificial intelligence develops by learning from the vast amounts of data accumulated by humans.*** The problems described above, after all, also originated in data provided by humans. Of course, AI ethics is by no means a simple issue (there are still many areas that are difficult to predict), so all the responsibility cannot be placed on developers, and excessive regulation or pressure to disclose information could pour cold water on the recent, active development of artificial intelligence.
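The link between human-provided data and AI misbehavior can be made concrete with a minimal sketch. The toy code below is purely hypothetical (the reply list, the frequency-based "model", and the function name are assumptions for illustration, not any real chatbot): a system that simply imitates the most frequent pattern in its training conversations will faithfully reproduce whatever bias those conversations contain, which is essentially the mechanism behind incidents like Tay and Iruda.

```python
# Minimal illustrative sketch (hypothetical toy data, not any real system):
# a trivial "chatbot" that picks replies purely by how often it saw them
# in its training conversations. If the human-provided data contains
# biased or hateful lines, the learned behavior reproduces them.

from collections import Counter

# Hypothetical training conversations collected from humans.
training_replies = [
    "nice to meet you",
    "have a good day",
    "nice to meet you",
    "that group is awful",   # biased line that slipped into the data
    "that group is awful",
    "that group is awful",   # repeated often, so it dominates the model
]

counts = Counter(training_replies)

def most_likely_reply() -> str:
    """Return the reply the 'model' has seen most often in its training data."""
    return counts.most_common(1)[0][0]

if __name__ == "__main__":
    # The learned behavior simply mirrors the data distribution.
    print(most_likely_reply())  # -> "that group is awful"
```

The point of the sketch is not the algorithm but the data: nothing in the code is malicious, yet the output is, because the training data was.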
Nevertheless, it is clear that artificial intelligence ethics is necessary, and it is an issue that calls for a societal approach built on sufficient discussion and debate. In fact, AI ethics principles and guidelines have already been published many times by international organizations, governments around the world, and related companies, research institutes, and civic organizations. An international consensus has formed that an ethical approach grounded in social consensus is essential for artificial intelligence.
This has been the first topic in the "AI ethics" series: "Does artificial intelligence also need ethics?" The series will continue with the second topic, "AI ethics up to this point."
AI ethics
Artificial intelligence ethics: 01. Does artificial intelligence need ethics?
Artificial intelligence ethics: 02. AI ethics up to this point
Artificial intelligence ethics: 03. Human-centered AI and LETR ethical principles