HOW CAN AI BE DANGEROUS?

The AI is programmed to do something devastating: Autonomous weapons are artificial intelligence systems programmed to kill. In the hands of the wrong person, these weapons could easily cause mass casualties. Moreover, an AI arms race could inadvertently lead to an AI war that also results in mass casualties. To avoid being thwarted by the enemy, these weapons would be designed to be extremely difficult to simply "turn off," so humans could plausibly lose control of such a situation. This risk is present even with narrow AI, but it grows as levels of AI intelligence and autonomy increase.

The AI is programmed to do something beneficial, but it develops a destructive method for achieving its goal: This can happen whenever we fail to fully align the AI's goals with ours, which is very difficult to do. If you ask an obedient intelligent car to take you to the airport as fast as possible, it might get you there chased by helicopters and covered in vomit, doing not what you wanted but literally what you asked for. If a superintelligent system is tasked with an ambitious geoengineering project, it might wreak havoc on our ecosystem as a side effect, and view human attempts to stop it as a threat to be met.
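The gap between "what you asked for" and "what you wanted" can be made concrete with a toy sketch. The Python example below is a hypothetical illustration, not from this article: the candidate plans, the travel times, and the penalty value are all invented for demonstration. It shows a planner that scores routes only on the stated goal (minimize travel time) and therefore prefers a reckless plan that the intended objective, which also penalizes violations of unstated preferences, would reject.

# Toy illustration of objective misspecification (hypothetical values).
# Each plan is (description, travel time in minutes, violates unstated
# preferences such as safety and legality).
plans = [
    ("obey traffic laws, smooth ride", 45, False),
    ("run red lights at high speed", 18, True),
]

def stated_objective(plan):
    # Exactly what was asked for: faster is strictly better.
    _, minutes, _ = plan
    return -minutes

def intended_objective(plan):
    # What the passenger actually wanted: fast, but safe and legal.
    # The penalty of 1000 is an arbitrary illustrative constant.
    _, minutes, violates = plan
    return -minutes - (1000 if violates else 0)

best_literal = max(plans, key=stated_objective)
best_intended = max(plans, key=intended_objective)

print("Optimizing what was asked: ", best_literal[0])   # the reckless plan
print("Optimizing what was wanted:", best_intended[0])  # the sensible plan

The literal optimizer picks the reckless plan because nothing in its objective tells it not to; the failure is in the objective we wrote, not in the optimization.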

WHY THE RECENT INTEREST IN AI SAFETY?

Stephen Hawking, Elon Musk, Steve Wozniak, Bill Gates, and many other big names in science and technology have recently expressed concern about the risks posed by AI, both in the media and through open letters, joined by many leading AI researchers. Why is the subject suddenly in the headlines?

The notion that the search for powerful AI would eventually succeed was long thought of as science fiction, decades or more away. Yet thanks to recent breakthroughs, many AI milestones that experts viewed as decades away only five years ago have now been reached, leading many experts to take seriously the prospect of superintelligence in our lifetime.

Perhaps the best example of what we could face is our own evolution. People now control the planet, not because we are the strongest, fastest, or biggest, but because we are the most intelligent. If we are no longer the most intelligent, are we confident that we will remain in control?


The FLI stance is that our society will flourish as long as we win the race between the growing power of technology and the wisdom with which we manage it. In the case of AI technology, the best way to win that race is not to impede the former, but to accelerate the latter by supporting AI safety research.



