“I visualize a time when we will be to robots what dogs are to humans, and I’m rooting for the machines.” – Claude Shannon
“The Father of Information Theory” signaled the power of Artificial Intelligence decades ago, and we are witnessing it now. Most of us do not realize it, but many of our everyday tasks are shaped by Artificial Intelligence: credit card fraud detection, GPS navigation in our vehicles, personal assistants in our smartphones, and online customer support by chatbots. Nor can we ignore Google’s self-driving cars, which are very close to everyday reality. But can we be sure that future developments in Artificial Intelligence will not become the greatest danger to us?
Today, this complex programming, known as Weak AI, replicates narrow aspects of human intelligence and already outperforms humans at specific tasks. In the future, with the evolution of Strong AI, machines may outperform humans at nearly every task. The work and jobs that define our identity and lifestyle could pass to robots. There is no doubt that AI has the potential to become more intelligent than us, but we cannot predict how it will behave when it does.
At present, nobody knows whether Strong Artificial Intelligence will be beneficial or harmful to mankind. One group of experts believes that Strong AI, or superintelligence, could help us eradicate war, disease, and poverty. Others warn that it could be misused to develop autonomous weapons designed to kill humans. They are also concerned that an AI, on its own, might adopt destructive methods to achieve its goals.
Some people suggest that Artificial Intelligence could be controlled the way nuclear weapons are, but the comparison does not hold. Nuclear weapons require rare raw materials such as uranium and plutonium, whereas AI is essentially software. Once computers are powerful enough, anyone who knows how to write the relevant code could create Artificial Intelligence anywhere.
Prominent figures from the tech world, such as Bill Gates and Elon Musk, along with the physicist Stephen Hawking, have already expressed concerns about the future trajectory of Artificial Intelligence. They are not wrong to consider AI a potential existential threat: we are already dependent on smart systems, and in the future this dependency will only increase.
What we may confront in the time ahead is a turning point in our own evolution. We dominate the globe because we are the smartest species. Once we are no longer the smartest, can we retain control? One approach available today is to research these risks and prepare for potential negative outcomes. Doing so would help us avoid the pitfalls and enjoy the benefits of AI.