Safe AI and dangerous AI: How would the fulcrum of AI change in the future?


Introduction

The canvas of artificial intelligence is both wide and deep. Its width can be seen in the range of applications, from self-driving cars and chatbots to augmented reality. Its depth can be gauged from Google's search algorithms at one end to remotely controlled autonomous weapons at the other. The range of applications is so vast that it is difficult to put a cap on it. Nevertheless, the artificial intelligence we see in action today is called narrow AI, or weak AI. It is designed to perform specific, partially autonomous tasks such as human-robot interaction. Beyond this lies an age of artificial intelligence that is yet to come, which researchers call the age of general AI. The key difference between narrow AI and general AI is that the latter could match or outperform humans across a wide range of cognitive tasks.

Safe AI

One of the central questions about artificial intelligence is the long-term dominance of strong AI. If strong AI comes to perform all cognitive tasks independently and better than humans, the time may not be far when it slips beyond human control. An autonomous system could undergo repeated improvements to its own architecture, leading to a super-intelligent system capable of posing an existential threat to humanity. To prevent artificial intelligence from going that far, researchers have put forward the concept of safe AI. Safe AI is concerned with designing new applications that solve pressing human problems such as disease and poverty and benefit as many people as possible. It also means recognizing and respecting the limits that keep such systems from becoming uncontrollably super-intelligent.

Dangerous AI

Dangerous AI refers to AI that exhibits all the capabilities of humans, including human emotions and cognitive skills. This genre of artificial intelligence would be super-intelligent but harmful as well, primarily for two reasons. Firstly, it could be directed to trigger devastating wars and casualties: it could control lethal arms remotely and lead to the development of autonomous AI weapons that would be extremely difficult to turn off and could cause havoc beyond imagination. Secondly, this type of artificial intelligence could adopt a destructive method for completing a task. For instance, an artificial intelligence system whose programming has been compromised by malware could become unstoppable to the point of causing unprecedented destruction.

Educating with AI

Various types of applied AI courses need to be incorporated into our educational systems. In the coming years, it is difficult to imagine a system that is partially or completely untouched by artificial intelligence. Educating pupils about AI would therefore mean advocating safe AI and the practices that support it.

Concluding remarks

There are significant benefits as well as serious risks associated with artificial intelligence. In the times to come, we need to develop laws and adopt practices that promote the use of safe AI and curb the use of dangerous AI.