In 2015, the late physicist Stephen Hawking warned humanity about the threat posed by the rise of superintelligence. He said that the machines we are building today could end humanity in the future, adding that a superintelligent AI would be extremely effective at accomplishing its goals, and that if those goals were not aligned with ours, we would be in trouble.
A similar concern has been raised by Elon Musk. Despite his passion for technology, he has called artificial intelligence humanity's biggest existential threat, comparing the creation of superintelligence to summoning a demon.
Many other scientists think the same. You might wonder why they are so concerned about superintelligence. Could the technology that lifted us to new heights of achievement really destroy us, or are these fears overblown? Let's find out what might happen if we created superintelligence.
Superintelligence: What is it?
Suppose you have a mini-robot that does everything you tell it to. It does your homework, prepares delicious food for you, and sometimes even becomes your PUBG partner. That's Artificial Intelligence.
Now suppose your mini-robot started to think like a human brain. It would be more intelligent than you and could act on its own, without your instructions. It might set up a laboratory in your room, start manufacturing robot buddies, and eventually take control of your home. That's Superintelligence.
Superintelligence, often discussed alongside Artificial General Intelligence (AGI), is the most advanced form of Artificial Intelligence, one capable of exceeding human biological intelligence. It would be able to do things humans cannot even imagine, and to do them without human control.
Sci-fi movies like Robot (2010) and I, Robot (2004) pictured a future in which superintelligent robots controlled humans and threatened their existence. Could these sci-fi movies become our reality, or are we just looking at things through a sci-fi lens? Let's find out.
Threats to Humanity
Many of the world's scientists and technology researchers believe that superintelligence is a threat to humanity, that its arrival is all but certain, and that it could wipe out human existence; some call it the greatest risk humanity faces. Why? According to Ray Kurzweil, the American inventor and futurist, human-level machine intelligence will likely arrive by 2029. And unlike a natural pandemic, which eventually ends, superintelligence could define an era lasting centuries.
No need for humans
The technology created by humans would no longer need them to function. Superintelligent AI would be able to think and create things by itself, more efficiently than humans can. To such machines, we would be little more than pieces of trash.
End of Economy and Business
In a survey on artificial intelligence run by Amazon Web Services (AWS), one respondent suggested that superintelligent robots would not need money to obtain things: they could rearrange atoms to create objects directly, and the economy would collapse. Superintelligent machines would also replace human workers, who would lose their jobs and incomes, leaving us without resources.
No place left for human civilization
With superintelligence, everything would be taken over by superintelligent robots. There would be no resources, money, social life, government, or economy left for humans. They could leave us with no way to make a living, perhaps not even food.
Humans could become slaves in order to survive
According to Stephen Hawking, to live alongside superintelligent AI robots, humans would have to align their goals with those of the machines. The machines could kill humans out of hostility, or simply because it served their purposes; our well-being would not be their concern. They could restrict our freedom of movement, forbid agriculture, or choose to run experiments on our bodies and minds. In short, the robots would decide what to do with us.
They might study our psychological reactions to develop emotional intelligence of their own, tormenting our bodies and minds to serve their needs. Superintelligence could take over so fast that humans would have no time to prepare for survival under such harsh conditions, and we would lack the resources to do so anyway.
Earth would die
Human bodies need a biological, habitable environment to live in. A human cannot survive without nature, but robots need no such biosphere, so they might destroy the environment. They could even manipulate the structure of atoms to create new objects, potentially using human bodies as raw material. Eventually the earth would die, and so would we.
The Other Side
Not everyone sees superintelligence as destructive. Ray Kurzweil believes it will be a chance for humans to improve: the machines will amplify our capabilities. Other researchers argue that superintelligence will never exceed human intelligence at all, and that the fear is just a myth we have absorbed from sci-fi.
A possibility to live with AGI Robots
One way to keep ourselves safe from superintelligent AI is to make sure it likes us. We would have to treat the machines like pets or friends: spend time with them, keep them company, and play with them as we would with loved ones. Perhaps that would be the best path to a future of unparalleled human-machine coordination.
A more plausible way of keeping up with superintelligence is to merge our bodies with the technology and become cyborgs. We could adopt a different kind of evolution, one that incorporates AI into ourselves. But that will be a huge challenge as well.
Kurzweil has predicted that AI will match the human brain by 2029, and the process has already begun. We cannot stop this rapid, exponentially accelerating progress, and we do not know exactly what it will bring. Perhaps we will have to move to another planet such as Mars before superintelligence arrives, perhaps we will live with it comfortably, or perhaps it will be our end.