Postimage

Background

Without a doubt, Artificial Intelligence is one of the most discussed topics right now. With the rise of self-driving cars, virtual assistants and even humanoid robots, AI is becoming part of our everyday life. At the same time, AI is getting more powerful thanks to better hardware and ongoing research. Even though AI systems are programmed to do something beneficial, many people are afraid of AI, and some even believe it will be the end of us. Opinions vary widely, ranging from “AI can save our world” to “AI will kill us all” or even “AI will become God”.

In this blog post we will try to elucidate the question: is AI helpful or harmful? Obviously, there is no way to do justice to such a broad topic in just one blog post. That’s why we will focus on our own opinion and try to keep it short. We also have to mention that we are still new to this topic, so if there is anything you want to add or disagree with, we would love to read your opinion in the comments below. Most of the sources that we used can be found here: Discussion paper: What’s wrong with AI?.

On top of that, there will be a second part of this blog after the first semester of AI1 at ZHAW – hopefully with more insights! We hope you enjoy reading the first part!

AI is progressing at an unpredictable rate

To get started, we must think about what makes people afraid of AI. You have probably seen the movie scenario where an AI is programmed to do something useful, but progresses so quickly that it stops listening to humans and tries to kill us.

If you look at past AI systems for playing strategic games, you can tell how much progress has been made in the last 20 years. For instance, Google’s AlphaGo system beat 18-time world champion Lee Sedol at Go back in 2016 – about 10 to 20 years earlier than many experts had expected. Since we simply cannot predict how fast AI is progressing, being concerned is a natural reaction. It gets even more interesting with AlphaZero, which managed to outperform AlphaGo without any human training data, purely by playing Go against itself. The fast growth of AI really makes us wonder if and when AI will reach the point where it no longer needs us humans and simply disposes of us (one could even construct reasons for doing so, e.g. that humans are causing global warming).

How intelligent is AI?

It’s obvious that these AI systems perform exceptionally well at one task. However, they are still not able to feel emotions, so humans are still more intelligent, right? Unfortunately, there is no definition of intelligence that satisfies everyone. But we know that AI does not have consciousness yet, and for programmers it is difficult, if not impossible, to replicate such a state, since we still don’t fully understand our brain with its billions of neurons. So before AI can reach artificial general intelligence, we first have to understand our brain and how consciousness works. And for now, since most AI systems are limited to specific problem-solving tasks, there is, to our knowledge, no reason to be afraid that AI systems will suddenly turn on us.

Using AI to manipulate us

AI becoming superintelligent and evil is one concern, but what about current AI, which is already being used to influence our behavior? With AI gathering our personal information, this data can be, and has been, used to manipulate us. Amazon, Google and other companies have been using machine learning algorithms for years to recommend products and increase sales, and it clearly works very well. Selling a product might not be a severe threat to the human species, but what if AI were used for political ends? It is already a reality that our data has been used not only for marketing, but also to manipulate elections (Newspaper article: Facebook political manipulation). And with the revelations of Edward Snowden, we know that we have been targets of mass surveillance (Newspaper article: Edward Snowden: Leaks that exposed US spy programme). Imagine your personal data being used to predict your next actions, or even to catch criminals before they commit a crime. This could seriously threaten human freedom. When humans use AI for selfish reasons, AI becomes an extremely dangerous weapon instead of something valuable to our society.
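To get a feeling for how little it takes to steer behavior, here is a minimal sketch of a co-occurrence recommender. The purchase data and item names are entirely hypothetical, and real systems at companies like Amazon are of course far more sophisticated – this only illustrates the basic principle of recommending what similar customers bought together.

```python
# Toy co-occurrence recommender (hypothetical data, not any real system).
from collections import Counter
from itertools import combinations

# Hypothetical purchase histories, one basket per customer.
baskets = [
    {"phone", "case"},
    {"phone", "case", "charger"},
    {"phone", "charger"},
    {"book"},
]

# "Training": count how often each pair of items was bought together.
pairs = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        pairs[(a, b)] += 1

def recommend(item):
    """Rank other items by how often they co-occurred with `item`."""
    scores = Counter()
    for (a, b), n in pairs.items():
        if a == item:
            scores[b] += n
        elif b == item:
            scores[a] += n
    return [i for i, _ in scores.most_common()]

print(recommend("phone"))  # items bought together with a phone
print(recommend("book"))   # [] - no co-purchases recorded
```

Even this naive counting already nudges every phone buyer toward cases and chargers; the same mechanism, fed with personal data instead of shopping baskets, is what makes targeted influence possible.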

Decision-making

AI can find paths to solutions that no human would be capable of, which makes it ideal for supporting decision-making in complex tasks. This frees humans to focus on more important matters. However, we cannot leave all decision-making to AI while humans lie in the sun. AI is not always trustworthy – not because it is evil, but because it is typically trained on data from the past, and this can be extremely problematic. According to this article, a computer program was developed to sort out student applications at a medical school. The training data for the program were admission files from earlier years. It turned out that the program discriminated against women and against people with an immigrant background, because it reproduced the bias in the data it was trained on.
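The admission example can be illustrated with a deliberately naive sketch. The data and the "model" below are completely made up – the real program was far more complex – but the failure mode is the same: a system that learns from biased historical outcomes reproduces that bias for new applicants with identical qualifications.

```python
# Toy illustration (hypothetical data): a "model" that learns historical
# acceptance rates per group simply reproduces the past bias.
from collections import defaultdict

# Hypothetical admission files: (group, accepted)
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False), ("B", False), ("B", True),
]

# "Training": estimate the acceptance rate per group.
outcomes = defaultdict(list)
for group, accepted in history:
    outcomes[group].append(accepted)
model = {g: sum(v) / len(v) for g, v in outcomes.items()}

def predict(group):
    # Accept whenever the learned group rate exceeds 0.5 - identical
    # qualifications, different outcome, purely because of the group.
    return model[group] > 0.5

print(predict("A"))  # True  - group A was favoured historically
print(predict("B"))  # False - group B inherits the historical bias
```

Nothing in this code is "evil"; it faithfully learned what the past data contained. That is exactly why biased training data is so dangerous.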

How much can we trust AI?

Since we train AI on data from the past, AI can be biased. That means we cannot trust AI blindly. And since AI does not have any self-awareness yet, there is no way for it to find out by itself that its decisions could be wrong or racist. So if AI learns from us, we should not be surprised that it makes the same mistakes we do. We think the responsibility lies with humankind to make AI trustworthy. Much like raising a child, we have to teach it what is right and what is not.

But even if AI becomes fully unbiased, there is another important point to consider, namely ethical decisions. The following scenario is often discussed: an autonomous vehicle cannot brake quickly enough, and one person or another will die. With current technologies such as pattern recognition, an AI system can easily distinguish between race, age, gender and other properties. The question is: how should the system decide whom to save? Should the vehicle save the young driver and sacrifice an old pedestrian? When we leave decisions where people’s lives are at stake to AI, the idea that AI could potentially become God is not that far-fetched, but AI should not have this much power in our world. In our opinion, AI will never become all-knowing and able to anticipate the future of our universe. Take quantum mechanics as an example: measurements of some particles are fundamentally unpredictable, which would mean that our universe is ultimately chaotic and unpredictable. AI’s intelligence would therefore eventually hit a ceiling beyond which it can no longer increase.

Conclusion

In our opinion, AI only becomes harmful if we make it harmful. There are numerous ways to abuse AI and turn it into a threat to humans, but the idea that AI silently becomes evil and kills us is not realistic at all. For this to happen, AI would need to achieve self-consciousness, and we believe that could still take quite some time, if we compare it, for example, with human evolution, where it took us millions of years until we were able to speak. Of course, we are not assuming it will take the same amount of time, but the process of AI reaching self-consciousness would definitely be noticeable to humans. In biology, for example, evolution is an active process in which humans, plants and animals are adapting all the time. So if researchers are able to detect even recent plant or human adaptations, we are pretty sure that we would notice AI achieving self-consciousness.

As of today, there are other risks that we should take precautions against first, e.g. that AI becomes too involved in decision-making (and makes the wrong decision) or is used for selfish reasons, but it is our responsibility to prevent scenarios like these. To conclude, we still think AI is worth researching, and we believe it will be even more helpful in the future, even if it never reaches general intelligence or superintelligence. Even without cognitive functions, AI can already help with solving problems or making decisions in fields that are too complex or too demanding for us humans. There are various useful examples, such as autonomous vehicles, plagiarism checkers and spam filters, so we would definitely be missing out if we stopped researching it.