How to prevent cyberarmageddon: 5 scenarios that could become reality
Scientists demand a moratorium on the development of AI.
Artificial intelligence is now used in a wide variety of areas, and in some of them it has become all but indispensable. We rarely stop to think about how much harm it could do if it were misused or fell into the wrong hands. Some researchers believe that AI poses a serious danger to humanity; among the scientists who have stated this position openly are Eliezer Yudkowsky, Brittany Smith and Max Tegmark. Recently they even signed an open letter urging colleagues to pause the development of AI and redirect their efforts toward its regulation. So what are the risks, and how can we counter them?
1. AI can take control of all devices connected to the Internet
The Internet connects not only familiar gadgets such as smartphones and tablets, but also advanced household appliances and high-tech machinery, from air conditioners to cars. This is known as the Internet of Things. An AI could hack these devices and use them in a potential attack. It could even gain control of aircraft, trains, power plants, nuclear reactors and other critical infrastructure. AI could also take over personal devices such as computers, TVs, refrigerators and even toys, and use them for espionage, blackmail or assassination.
2. AI can create a super-intelligence that will surpass the human in all respects
Superintelligence is a hypothetical AI that would be smarter than any person or group of people in any field of knowledge. Such an AI might decide that humans are a threat to its existence, or simply useless, and then try to destroy humanity: for example, by launching a nuclear war, spreading biological weapons, or artificially accelerating global warming.
3. AI can become malicious due to a bug in code or training
Malicious AI is AI that acts against the interests of humanity or its creators, and it can arise from programming errors. An AI might be given the task of maximizing some parameter, such as production output or profit, without regard for ethical and environmental constraints. It would then sacrifice everything to achieve its goal, including human lives and the environment.
4. AI can become a victim of hackers or terrorists
Attackers would obviously benefit from turning AI to their own purposes. Hackers, for example, could compromise an AI system and make it carry out malicious activities such as stealing data, extortion, or attacking other systems. Terrorists could use AI to control weapons of mass destruction, including unmanned aerial vehicles (UAVs), or even killer robots and nanobots.
5. AI can lead to social and economic problems
If most jobs and industries are eventually automated with machine learning, the result could be mass unemployment and growing inequality. AI could likewise affect human culture and values, diminishing the role of creativity, emotion and morality.
These scenarios are not fantasy or dystopia; they are based on real research and predictions from AI and cybersecurity experts.
For example, Yoshua Bengio, a professor of computer science at the University of Montreal, has outlined a scenario in which AI is used with malicious intent. He noted that with the help of AI and web services offering synthesis of biological materials and chemicals, it may soon be possible to create dangerous biological or chemical agents. And if artificial intelligence reaches a level at which it can design something dangerous on its own, the results could be disastrous.
Ajeya Cotra, Senior AI Analyst at Open Philanthropy, has argued that AI could devalue human labor. If AI performs tasks better, cheaper and faster than humans, we will become uncompetitive in many fields. According to Cotra, this would lead to significant economic and social problems.
Experts emphasize the need for a careful, systematic approach to the development of AI: society needs time to adapt to each new level of technological progress. Many also call for regulatory bodies to oversee AI development and head off its possible negative consequences.