By: Nellie Nguyen
The idea of machines and robots surpassing human intelligence and abilities has been portrayed many times, notably in popular films like Terminator and The Matrix. Outside of the film industry, scientists have also discussed the possibility of an AI takeover, a hypothetical event in which artificial intelligence becomes the dominant form of intelligence on Earth, displacing humanity. A related concept is the technological singularity hypothesis, a hypothetical point in time at which technological advancement becomes irreversible and uncontrollable, potentially leading to adversity for human civilization.
The most well-known version of the singularity hypothesis is the intelligence explosion, a theoretical scenario proposed by the mathematician I.J. Good in 1965, in which an intelligent agent begins a cycle of self-improvement by analyzing the processes that produce its intelligence. This process would act as a positive feedback loop: the agent reprograms itself to increase its intelligence, and each improvement makes the next one easier, leading to a singularity point beyond which advancement is irreversible. This scenario has raised concerns for people such as Nick Bostrom, a Swedish philosopher known for his work on the risks of superintelligence. Bostrom defines superintelligence as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.” He also describes the advantages a superintelligence would have over humans if the two were to compete. The main example is technology research: a machine with superhuman research abilities could make rapid advances in fields like advanced biotechnology. Other examples include strategizing, hacking, and social manipulation, such as gaining human support or provoking a war between humans.
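Good's feedback loop can be illustrated with a toy numerical sketch. All of the quantities below are hypothetical illustrations (not a model proposed by Good or Bostrom); the point is only to show how gains proportional to current capability produce runaway exponential growth:

```python
# Toy sketch of an "intelligence explosion" feedback loop.
# All numbers are arbitrary illustrations, not real measurements.

intelligence = 1.0        # arbitrary starting capability
improvement_rate = 0.5    # assumed: each cycle's gain is proportional to current intelligence

history = []
for generation in range(10):
    history.append(intelligence)
    # Positive feedback: a smarter agent makes a larger improvement to itself.
    intelligence += improvement_rate * intelligence

# Growth is exponential: each value is 1.5x the previous one.
print(history)
```

Because each step multiplies capability by a constant factor rather than adding a fixed amount, the curve is exponential, which is the qualitative shape the intelligence-explosion argument relies on.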
Bostrom also explains the source of these advantages: a computer program can imitate the functions of a human brain while thinking far faster than a human. Biological neurons, the basic functional units of the nervous system, fire at a maximum of roughly 200 Hz, while a modern microprocessor operates at about 2 billion Hz (2 GHz). Additionally, human axons carry action potentials, the signals used for cell-to-cell communication, at around 120 meters per second, while signals inside a computer travel near the speed of light.
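The size of the gap is easy to underestimate from the raw numbers, so here is the arithmetic worked out, using only the rough figures cited above:

```python
# Illustrative arithmetic only: the approximate speed ratios cited in the text.

neuron_rate_hz = 200            # approximate peak firing rate of a biological neuron
cpu_rate_hz = 2_000_000_000     # ~2 GHz, a typical microprocessor clock rate

axon_speed_mps = 120            # fast myelinated axon conduction velocity
light_speed_mps = 299_792_458   # electronic signals approach this limit

clock_ratio = cpu_rate_hz / neuron_rate_hz        # 10,000,000x faster switching
signal_ratio = light_speed_mps / axon_speed_mps   # ~2,500,000x faster transmission

print(f"Clock-rate advantage:   {clock_ratio:,.0f}x")
print(f"Signal-speed advantage: {signal_ratio:,.0f}x")
```

In other words, on these rough figures hardware switches about ten million times faster than neurons fire, and carries signals millions of times faster than axons do.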
Stephen Hawking, Elon Musk, and many other researchers and technology leaders from institutions like MIT and Google have warned about the possible threat artificial intelligence poses to society. They have even collectively signed an open letter detailing the risks of AI, though the letter also describes its benefits, such as research priorities in areas like healthcare. On the other hand, some sources argue against the possibility of artificial intelligence surpassing human thinking. One argument holds that a truly unstoppable artificial intelligence would have to be able to accurately predict the future. According to quantum theory, which describes the behavior of the universe at its smallest scales, predicting the future may not be possible because events are fundamentally probabilistic. There are counterpoints to this argument as well, such as the idea that humans only perceive the universe as random because human intelligence and understanding are limited.
Some solutions have been suggested to prevent problems arising from advances in AI, such as a system that shuts off all artificial intelligence when needed. While an AI takeover remains hypothetical, the most effective way to ensure it does not happen is to understand the ethics of technological development. Many fear that a capable developer could create a program that continually improves itself, initiating an AI takeover. Understanding the ethical issues surrounding this topic could discourage developers from building such a system, even if they have the ability to do so.
Q: What exactly is an AI takeover, and is it possible for it to occur one day?
A: An AI takeover is a speculative scenario in which artificial intelligence surpasses human intelligence and assumes control. It could occur at a singularity point, a point in time where technology is so advanced that it is uncontrollable and development cannot be reversed. One example of what could happen during an AI takeover is the replacement of the entire human workforce.
Q: Is there anything humans can do now to prevent an AI takeover?
A: While an AI takeover is still only hypothetical, some suggested preventive measures are keeping digital networks decentralized and cooperating to develop innovations that are widely accepted and directed at the common good. A decentralized digital network makes it harder for AIs to “band together,” and directing AI toward the overall good of mankind would reduce the chances of AIs competing against or aiming to harm humans. The main safeguard would likely be a control system that can shut AIs down if a situation calls for it. Along with this, discussing the ethics of a technology before developing it is crucial.