argument top image

Will artificial intelligence lead to the end of humankind? Show more Show less
Back to question

The term artificial intelligence (AI) was coined back in 1956. While science fiction often portrays AI as robots with human-like characteristics, AI can encompass anything from Google’s search algorithms to IBM’s Watson to autonomous weapons, and it's progressing rapidly. AI today is properly known as narrow AI (or weak AI), as it is designed to perform a narrow task (e.g. facial recognition or driving a car). However, the long-term goal of many researchers is to create general AI (AGI or strong AI). While narrow AI may outperform humans specific tasks, AGI would outperform humans at nearly every cognitive task.

Yes, artificial intelligence will end humankind Show more Show less

AI is designed to be more intelligent than humans. They are programmed to focus on one job; they will not think twice about eliminating any barriers, including humans.
(1 of 2) Next position >

Humankind will get in the way of an AI’s goals

A popular example is called the paperclip maximizer hypothesis, which was popularized by AI thinker Nick Bostrom. Imagine we gave an ASI (Artificial Super Intelligence) the simple task of maximizing paper clips...
< (2 of 2) Next argument >

The Argument

An ASI is super intelligent; it can think, create, and do things many humans can’t even comprehend.[1] Carbon is one of the most abundant elements in our galaxy; it’s a fundamental building block for nearly everything, including humans and paperclips. ASI, in theory, would create a method of paperclip production by pulling carbon directly from the atmosphere into its paperclip machine. Because its goal is to maximize the amount of paper clips available, there is no set limit for production. Using exponential gains in production efficiency, the machine will quickly use all available natural resources of the planet, including all of the carbon atoms contained in all of the human bodies in the world, and would theoretically begin to consume the cosmos in an endless quest to make paper clips.[2] Alternatively, because the AI’s goal is to create paper clips, anything that prevents it from achieving this goal is a risk factor to be mitigated. Because ASI’s run on machines, and as a result run on electricity, the loss of power is a threat to its goal. Because humans can turn the power off, humans are now a threat to its goal and should be eliminated if the ASI is to continue pursuing its goal. This instance is if humans develop AI to a point of no return. The goal of AI is for it to be smarter than humankind, and ultimately improve our way of life, but if it becomes too intelligent, it will overthrow any threat that jeopardizes the goal they were programmed to accomplish.

Counter arguments

Premises

[P1] AI is smart enough to detect enemies and then try to eliminate them.

Rejecting the premises

References

  1. https://www.pega.com/empathetic-ai?&utm_source=google&utm_medium=cpc&utm_campaign=Global_NonBrand_Broad&utm_term=%2Bai&gloc=9032952&utm_content=pcrid%7c398290046123%7cpkw%7ckwd-20061192138%7cpmt%7cb%7cpdv%7cc%7c&gclid=CjwKCAjwvOHzBRBoEiwA48i6AlKxNl-5CXAGN5Xmlwfe32SdVU0vADaNZ5-pWQjiG3Mw2RmMboJrFRoCFosQAvD_BwE&gclsrc=aw.ds
  2. https://wiki.lesswrong.com/wiki/Paperclip_maximizer
This page was last edited on Tuesday, 24 Mar 2020 at 11:08 UTC

Explore related arguments