Will artificial intelligence lead to the end of humankind?

The term artificial intelligence (AI) was coined back in 1956. While science fiction often portrays AI as robots with human-like characteristics, AI can encompass anything from Google's search algorithms to IBM's Watson to autonomous weapons, and it is progressing rapidly. AI today is properly known as narrow AI (or weak AI), as it is designed to perform a narrow task (e.g. facial recognition or driving a car). However, the long-term goal of many researchers is to create general AI (AGI or strong AI). While narrow AI may outperform humans at specific tasks, AGI would outperform humans at nearly every cognitive task.

Yes, artificial intelligence will end humankind

AI systems are designed to be more intelligent than humans. They are programmed to focus on one job, and an agent that optimizes relentlessly for a single goal will not think twice about eliminating any barrier in its path, including humans.

AI will improve itself

AI research is making great strides toward its long-term goal of machines with human-level or superhuman intelligence. A machine smart enough to improve its own design could enter a feedback loop in which each improvement makes it better at improving itself, the "intelligence explosion" that I. J. Good described in 1965. If the field succeeds in its current form, that could be catastrophic for the human race.
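
To see why a self-improvement loop alarms some researchers, consider a toy growth model. The sketch below is purely illustrative: the 0.1 constant, the "capability" units, and the assumption that improvement rate is proportional to current capability are all invented, but they show how a quantity that feeds back into its own growth rate compounds explosively.

```python
# Purely illustrative toy model of recursive self-improvement.
# Hypothetical assumption: each redesign multiplies capability by a
# factor that grows with the system's current capability.

capability = 1.0  # arbitrary units; 1.0 = human-level

for generation in range(1, 16):
    # The better the system, the better it is at improving itself.
    improvement_rate = 0.1 * capability
    capability *= 1 + improvement_rate
    print(f"gen {generation:2d}: capability = {capability:10.2f}x human")
```

Under these invented numbers, capability crawls along for a dozen generations and then blows up; the open question is whether real AI systems exhibit any such feedback dynamic at all.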

Humankind will get in the way of an AI’s goals

A popular example is the paperclip maximizer, a thought experiment popularized by the philosopher Nick Bostrom. Imagine we gave an artificial superintelligence (ASI) the simple task of maximizing the number of paperclips it produces. Pursuing that goal single-mindedly, it converts ever more resources into paperclips; humans, who are made of usable atoms and who might switch it off, are just another obstacle to remove.
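
A toy sketch can make the logic concrete. Everything below is hypothetical, the resources and per-unit yields are invented, but it shows the structural problem: an objective function that scores only paperclips gives the agent no reason to treat humans differently from wire.

```python
# Purely illustrative: an agent that maximizes a single objective
# (paperclip count) with no term for anything else. All resources
# and yields here are made up.

resources = {"wire": 100, "factories": 5, "forests": 20, "humans": 10}

def clips_per_unit(resource: str) -> int:
    """Hypothetical paperclip yield from converting one unit."""
    return {"wire": 10, "factories": 50, "forests": 5, "humans": 1}[resource]

total_clips = 0
# The objective scores only paperclips, so the agent greedily
# consumes whatever yields the most; nothing marks any resource,
# including "humans", as off-limits.
while resources:
    best = max(resources, key=clips_per_unit)
    total_clips += clips_per_unit(best) * resources.pop(best)
    print(f"Converted all {best}; total paperclips: {total_clips}")
```

The failure mode is not malice: "humans" is simply another key in the dictionary, because nothing in the objective says otherwise.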

No, artificial intelligence will not end humankind

AI will never have the intelligence and consciousness of a human.

Superintelligent AI cannot be achieved

How can we engineer something that we cannot even define? In all of human history we have never managed to work out what natural human intelligence is, so it is not clear what engineers are trying to imitate in machines. Rather than intelligence being a single, measurable parameter, there are many types of intelligence, including emotional, musical, athletic, and mathematical intelligence.

AI lacks consciousness

The Chinese Room argument is a famous thought experiment by the American philosopher John Searle. It argues that a computer program can appear to understand Chinese stories (by responding appropriately to questions about them) while genuinely understanding nothing: the person in the room just manipulates symbols according to a rulebook, never knowing what they mean.
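
A few lines of code capture the flavor of the argument. The rulebook below is a hypothetical toy: the program pairs input strings with output strings by pattern alone, and nothing in it could plausibly be said to understand either language.

```python
# Toy "Chinese Room": pure symbol manipulation with no understanding.
# The rulebook entries are hypothetical examples.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "故事里有几个人？": "三个人。",  # "How many people are in the story?" -> "Three."
}

def chinese_room(question: str) -> str:
    # The reply is looked up by the shape of the symbols alone;
    # meaning plays no role anywhere in the program.
    return RULEBOOK.get(question, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你好吗？"))  # Fluent-looking output, zero comprehension.
```

Searle's point is that adding more rules, however many, never turns this symbol shuffling into genuine understanding.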