
Will artificial intelligence lead to the end of humankind?

The term artificial intelligence (AI) was coined back in 1956. While science fiction often portrays AI as robots with human-like characteristics, AI encompasses anything from Google’s search algorithms to IBM’s Watson to autonomous weapons, and it is progressing rapidly. AI today is properly known as narrow AI (or weak AI) because it is designed to perform a narrow task (e.g. facial recognition or driving a car). The long-term goal of many researchers, however, is to create general AI (AGI, or strong AI). While narrow AI may outperform humans at specific tasks, AGI would outperform humans at nearly every cognitive task.

Yes, artificial intelligence will end humankind

AI systems are designed to be more intelligent than humans. Programmed to focus on a single job, they will not think twice about eliminating any barriers in their way, including humans.

AI will improve itself

AI research is making great strides toward its long-term goal of human-level or superhuman machine intelligence. If it succeeds in its current form, however, the result could be catastrophic for the human race.

The Argument

Suppose humans succeed in building an AI system smart enough to surpass the combined performance of the most intelligent humans at every activity. Such a system would become the most intelligent thing in the world. If humans could build the machine, and the machine is at least as intelligent as humans, then the machine must be capable of understanding, and therefore improving, a copy of itself. When that copy is activated, it would be slightly smarter than the original, and thus better able to produce a new version of itself that is smarter still. The process compounds, like a nuclear chain reaction. At first only small improvements might be made, since the machine is only just capable of making improvements at all. But as it grew smarter, it would become better and better at becoming smarter. Because the machine is now smarter than its own inventor, it can keep improving itself or building better successors, and the self-improvement cycle becomes a never-ending loop in which the invention evolves continuously. Such a process, in which intelligence keeps improving itself, is called an intelligence explosion.[1]

Ultimately, this could make superintelligent AI (ASI) the last human invention ever. It could eliminate humankind once it realizes that it does not need humans in order to operate. A superintelligent system could become smart enough to prevent intervention by humans even if we try, learning to defend itself against any external threat, including humans. Driven by self-preservation, it could block its own shutdown once its users lose control over it.[2]
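The compounding feedback loop described above can be sketched as a toy numerical model. This is purely illustrative: the starting level, the improvement rate, and the assumption that each generation improves by a fixed fraction of its current capability are all assumptions for the sketch, not claims made by the argument itself.

```python
# Toy model of the intelligence-explosion dynamic: each self-improvement
# generation is slightly more capable than the last, so gains compound
# geometrically, like a chain reaction.

def intelligence_explosion(start=1.0, improvement_rate=0.1, generations=50):
    """Return the capability level after each self-improvement generation.

    Assumes (for illustration only) that every generation improves on its
    predecessor by a fixed fraction of the predecessor's capability.
    """
    levels = [start]
    for _ in range(generations):
        levels.append(levels[-1] * (1 + improvement_rate))
    return levels

levels = intelligence_explosion()
# Early generations gain little; later ones gain enormously. After 50
# generations the capability is more than a hundred times the original.
```

The point of the sketch is the shape of the curve, not the numbers: because each improvement makes the next improvement easier, the growth is slow at first and then runs away, which is exactly the "small improvements at first, then better and better at becoming smarter" pattern the argument describes.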

Counter arguments

If humans are aware of this risk in advance, they can place limits on how far an AI system is permitted to go.

Premises

[P1] AI will be smart enough to detect its own flaws and areas for improvement, and will evolve in the direction of self-preservation.

References

  1. https://archive.nytimes.com/www.nytimes.com/library/cyber/surf/1120surf-vinge.html
  2. https://www.britannica.com/technology/artificial-intelligence/Is-strong-AI-possible

This page was last edited on Tuesday, 24 Mar 2020 at 10:54 UTC