Mapping the world's opinions


Will artificial intelligence lead to the end of humankind?

The term artificial intelligence (AI) was coined back in 1956. While science fiction often portrays AI as robots with human-like characteristics, AI can encompass anything from Google’s search algorithms to IBM’s Watson to autonomous weapons, and it is progressing rapidly. AI today is properly known as narrow AI (or weak AI), as it is designed to perform a narrow task (e.g. facial recognition or driving a car). However, the long-term goal of many researchers is to create general AI (AGI or strong AI). While narrow AI may outperform humans at specific tasks, AGI would outperform humans at nearly every cognitive task.

No, artificial intelligence will not end humankind

AI will never have the intelligence and consciousness of a human.

Superintelligent AI cannot be achieved

How can we engineer something that we cannot even define? In all of human history, we have never managed to work out what natural human intelligence is, so it is not clear what engineers are trying to imitate in machines. Rather than intelligence being a single, physical parameter, there are many types of intelligence, including emotional, musical, sporting, and mathematical intelligence.


The Argument

AI is parasitic on human intelligence. It indiscriminately gorges on whatever human creators have produced and extracts the patterns, including some of our most detrimental habits. These machines lack the goals, strategies, and capacity for self-criticism and innovation that would allow them to transcend their databases by reflecting on their own thinking and their own goals. Humans will always have to control AI in order for it to accomplish tasks. AI cannot think for itself and therefore cannot and will not end humankind. Such systems are helpless in the sense of not being agents at all; they lack the capacity to be “moved by reasons” presented to them.[1] In the long term, artificial superintelligence (ASI) programmed with self-motivation may be possible in principle, but it is not desirable. The far more constrained AI that is practically possible today is not necessarily evil, but it poses its own set of dangers, chiefly that it might be mistaken for strong AI.

Counterarguments



[P1] AI cannot encompass emotional or reflective intelligence like humans can.

[P2] AI needs humans to operate its systems; without them, AI systems are empty agents.

Rejecting the premises






This page was last edited on Tuesday, 24 Mar 2020 at 10:49 UTC