“I visualize a time when we will be to robots what dogs are to humans, and I’m rooting for the machines.” —Claude Shannon
Claude Shannon, the father of information theory, foresaw the power of artificial intelligence decades ago, and we are seeing it now. Often without realizing it, we rely on artificial intelligence in many daily tasks: credit card fraud detection, GPS navigation in our vehicles, personal assistants on our smartphones, and online customer support via chatbots. But can we be sure that future developments in artificial intelligence will not pose the greatest danger to us?
Today, weak AI, complex programming that reproduces aspects of human intelligence, already surpasses humans at specific tasks. If strong AI emerges, artificial intelligence could outperform humans at almost every task. The work that defines our identity and our lifestyle would be handed over to machines. There is no doubt that AI has the potential to become smarter than us, but we cannot predict how it will behave.
At present, no one knows whether strong artificial intelligence will be beneficial or harmful to humanity. One group of experts believes that strong AI, or superintelligence, will help us eradicate war, disease, and poverty. Others believe it could be used criminally, for example to develop autonomous weapons designed to kill. They are also concerned that an AI could, on its own, devise destructive methods to achieve its goals.
Some people suggest that artificial intelligence could be regulated like nuclear weapons, but the comparison is flawed. Nuclear weapons require rare raw materials such as uranium and plutonium, while artificial intelligence is essentially software. Once computers are powerful enough, anyone who knows how to write the appropriate code could create artificial intelligence anywhere.
Leading figures in technology such as Bill Gates and Elon Musk, along with the great scientist Stephen Hawking, have already voiced concerns about the future trajectory of artificial intelligence. They are not wrong to view AI as a potential existential threat: we already depend on intelligent systems, and this dependence will only grow.
What we face in the future may be a product of our own creation. We control the globe because we are the smartest species. Can we maintain control once we are no longer the smartest? A viable approach today is to anticipate and prepare for any potential negative outcome. That is how we can avoid the pitfalls of AI and reap its benefits.