WHY AI WON’T LEAD TO ‘THE TERMINATOR’
In real life, we might not need Arnold Schwarzenegger.
Now that we are living in the age of Artificial Intelligence, intellectuals, enthusiasts, and thinkers alike have imagined the dangers of AI: that machines will threaten humanity as in the movie ‘The Terminator’. So, is this threat possible?
Before analyzing that, we need to know what AI is. AI is a system with the intelligence to solve problems of a particular type. Whenever the system is given a goal, it must be able to break the problem into smaller parts, solve those parts, and observe its environment to increase its chance of meeting the goal. With appropriate training, the system should also be able to perform tasks across different domains. Finally, it should be able to observe its environment and adapt its actions on its own, so that it no longer requires explicit programming.
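The loop described above, setting a goal, breaking it into sub-tasks, observing the environment, and acting on what is observed, can be sketched in a few lines of code. This is a toy illustration with hypothetical names (`solve`, `environment`), not any real AI system:

```python
def solve(goal, subtasks, environment):
    """Work through sub-tasks toward a goal, observing the
    environment before acting on each one, as a goal-driven
    AI system is described as doing."""
    results = []
    for task in subtasks:
        observation = environment(task)  # observe the world first
        results.append(f"{task}:{observation}")  # then act on what was seen
    return f"{goal} -> " + ", ".join(results)
```

Real systems replace the simple loop with planning and learned policies, but the shape, decompose, observe, act, adapt, is the same.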
So, AI, like our brain, treats intelligence as a product of learning. Computer scientists are trying to build neocortical systems for AI, modeled on the neocortex of the human brain, so that AI systems can become intelligent. These systems rely on the repetitive processes of machine learning and deep learning, observing their environment and adapting to its changes. With a neocortical architecture, they would learn to improve their intelligence just as our minds do. A brain, whether large, small, or fast, still has to learn to gain intelligence. Yes, AI systems obviously learn more quickly. But surpassing humans is a far harder task, because machines would have to go through the same slow processes of discovery that the human mind does. Imagine we could create a human who thinks ten times faster than normal. Would he or she be able to extend human knowledge ten times faster? In some narrow domains, perhaps. But in the vast majority of fields, they would still need to design the problems, run experiments, collect data, and only then draw conclusions. So if an AI system tried to move beyond human knowledge and power, it would need to design and experiment with new things by itself, which is out of reach for Artificial Intelligence systems in the near future.
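The idea that learning is a repetitive process of observing errors and adapting can be seen in even the simplest machine-learning routine. The toy fit below (hypothetical, not from any particular library) learns a single parameter by repeated small corrections; it is a sketch of the principle, not a real AI system:

```python
def fit(xs, ys, steps=1000, lr=0.01):
    """Learn w in y = w * x by repetition: observe the error,
    adjust w slightly, and repeat many times."""
    w = 0.0
    for _ in range(steps):
        # gradient of the mean squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad  # a small correction on every repetition
    return w
```

A computer can run millions of such repetitions per second, which is why machines learn fixed tasks quickly; what the loop cannot do is invent the problem, the data, or the experiment in the first place.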
The next thing is that we react to potential threats based on how far in the future they lie. For example, research suggests that Earth might become uninhabitable in 150 million years due to the gradual warming of the sun, but we are not concerned about it, because it is too far away. Yet we do worry about claims that Earth could become uninhabitable within a century due to human impacts on the environment. Similarly, machine intelligence poses some threats in the near future, and others so far off that we are not even capable of imagining them right now.
So, it’s not that AI systems have no negative side; even the earliest forms of technology had their good and bad uses. Artificial Intelligence will need proper policy to accelerate its positive impacts and curb harmful applications. But the threats of machine intelligence are no different from the usual threats technologies have always posed, nothing as terrifying as AI taking over humans.
Thus, instead of shortening our lifetimes by overpowering us, machine intelligence will help extend them by amassing vast knowledge and creating amazing tools. Rather than cursing us, the future will thank us for creating Artificial Intelligence systems. AI overpowering and taking over humanity, as in the movie ‘The Terminator’, is simply not possible, so there will be no need for Arnold Schwarzenegger’s heroic ‘Hasta la vista, baby!’ in real life.
Author: Subal Ghimire