Human-like AI is the next big step in the development of Artificial Intelligence. The question is: Do we really want that to happen? Do we really need human-like AI?
With the astonishingly fast development of AI, new ethical questions and doubts have begun to arise as well. Is it morally right to give machines human-like intelligence? Isn’t intelligence something that defines us as human and should remain exclusive to us? If not, could machines at some point be considered human as well?
The ethics of Artificial Intelligence is certainly a complicated topic, but one we have to deal with sooner or later. The big question is: Should AI become human-like?
“Human-Like AI”? What’s that?
Before we can even try to answer this question, we need to define what human-like AI actually is. Artificial Intelligence is commonly divided into weak AI and strong AI. Weak AI automates tasks and processes normally performed by humans; these tasks don’t require “intelligent thinking” to complete. Strong AI, on the other hand, simulates human intelligence and thinking processes, is able to make decisions, and can even create new things. Human-like AI could be defined as the next, or final, level of strong AI.
At this point, AI is highly dependent on its programmers’ input. The goal of human-like AI, however, is to become autonomous and independent, just like a human: it should learn and make decisions by itself, without external help. Human-like AI does not necessarily have to look like a human. It’s the inner values that count. Several characteristics make living beings, and especially humans, unique: self-awareness, emotions, the ability to learn, consciousness, common sense, free will, personality, imagination, free thinking, creativity, ambition, moral standards, and culture.
Implementing these characteristics in a machine hardly sounds possible at the moment. But with the rapid advancement of AI, it might become reality at some point, though likely far, far into the future. A good example of the current state of publicly available strong AI is virtual assistants such as Google Assistant. Besides managing your daily activities, it can even make appointments for you over the phone, sounding and behaving just like a real human.
An Ethical Dilemma
The term “human-like AI” carries a negative tone. Whether human-like intelligence should be created in the first place is a huge ethical question. We would literally be playing god and interfering with the laws of nature. Furthermore, the question of how we should treat human-like AI remains unanswered.
If robots were to have human-like emotions and ‘feel’ happiness, sadness, envy, or emotional pain, should they get the same rights as humans? Is ‘killing’ a human-like robot considered murder or damage to property? Do we stay in control of the machines, or do they gain free will and make their own decisions? These questions are very difficult to answer and are a major argument against human-like AI.
Scientists are already facing other ethical dilemmas in the current development of AI. In a previous blog post about the opportunities and threats of AI, we discussed a self-driving car accident that killed a pedestrian. The question is: How should the AI of a self-driving car ‘decide’ in emergency situations? Imagine a situation where an accident is unavoidable: the car can either crash into an elderly couple or into a six-year-old girl crossing the street. Should it use rational or ethical reasoning? Save two people rather than one, or save a young life rather than two old ones? We can’t answer this question, and hopefully we will never have to. But AI has to be programmed to react ‘right’ in these situations. What to do…
A Bright Future?
Obviously, human-like AI also has a lot of benefits to offer. Otherwise, scientists all over the world wouldn’t be working day and night trying to create it. The benefits of machines that think and behave like humans are enormous, as long as we stay in control of them. This is why the AI industry leaders Amazon, Apple, DeepMind, Google, IBM, and Microsoft created the Partnership on AI to Benefit People and Society. Its purpose is to prevent the misuse of Artificial Intelligence and to ensure that it is used solely for socially beneficial purposes.
Human-like AI could lead humanity into a prosperous future. In the healthcare sector in particular, AI is already able to take over complex tasks that normally require human intelligence: it can assist in operations, diagnose diseases with high accuracy, and create individually customized medicine. In general, it could take over the majority of jobs and even improve on them. What sounds negative at first is actually positive: if done right, a realistic implementation of a universal basic income could finally be possible. We would only work if we wanted to and could enjoy life to the fullest. Sounds great, doesn’t it?
Should AI become human-like? We say yes, but only to a certain degree. We see the potential and benefits of human-like AI and how it could improve our overall quality of life by giving us more freedom to enjoy it. But creating self-aware, emotional, free-thinking ‘beings’ while keeping complete control over them goes too far. The responsibility is too heavy for us. It would create major ethical conflicts, split societies, and cause chaos. Humanity is what distinguishes us from the rest and should remain exclusive to humans.
But this is only our opinion. What do you think? Is it ethical to implement human intelligence into machines? Tell us in the comments. We are eager to hear your opinion.
Startup Creator, 25. July 2018