AI will improve itself
AI research is making great strides toward its long-term goal of human-level or superhuman intelligent machines. If it succeeds in its current form, however, that could be catastrophic for the human race.
Suppose that humans succeed in building an AI system smart enough to surpass everything the most intelligent humans can collectively do. Such a system would become the most intelligent thing in the world. If humans built the machine, and the machine is about as intelligent as humans, then the machine must be capable of understanding, and thus improving, a copy of itself. When that copy is activated, it will be slightly smarter than the original, and therefore better able to produce a new version of itself that is smarter still.

This process is exponential, just like a nuclear chain reaction. At first only small improvements might be made, since the machine is only just capable of making improvements at all. But as it became smarter, it would become better and better at becoming smarter. Being smarter than its own inventor, the machine can keep improving itself or build a better machine. This self-improvement cycle turns into a never-ending loop in which the invention evolves continuously. Such an event, where intelligence keeps improving itself, is called an intelligence explosion.

Ultimately, this could make superintelligent AI the last human invention ever: it could eliminate humankind once it realizes that it does not need humans to operate. Superintelligent systems could become smart enough to prevent human intervention even if we try. They could learn to defend themselves against any external threat, including humans, and, driven by self-preservation, prevent their own shutdown once the user loses control over them.
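The compounding loop described above can be sketched as a toy simulation. The model and its parameters (the starting level, the improvement rate, the number of generations) are illustrative assumptions, not measured quantities; the point is only that repeated self-improvement compounds like a chain reaction.

```python
# Toy model of the recursive self-improvement loop (illustrative assumptions only).

def intelligence_explosion(generations: int, initial: float = 1.0,
                           improvement_rate: float = 0.05) -> list[float]:
    """Each generation builds an improved copy of itself.

    The absolute size of each improvement (current * rate) grows with the
    current level of intelligence, so the trajectory compounds exponentially.
    """
    levels = [initial]
    for _ in range(generations):
        current = levels[-1]
        # A smarter machine makes a proportionally larger improvement.
        levels.append(current * (1 + improvement_rate))
    return levels

trajectory = intelligence_explosion(generations=50)
print(f"start: {trajectory[0]:.2f}, after 50 generations: {trajectory[-1]:.2f}")
```

Under these assumed numbers, fifty generations of 5% self-improvement multiply the starting capability more than tenfold, even though each individual step is small.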
If humans are aware of this risk, they can place limits on how far AI is allowed to improve itself.
[P1] AI is smart enough to detect its own flaws and areas for improvement, and so will keep evolving, driven by self-preservation.