A superintelligence is an artificial or biological entity that possesses intelligence far surpassing that of the brightest and most gifted human minds. It may be associated with an intelligence explosion or technological singularity.
The popularizer of superintelligence, Nick Bostrom, defines it as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest". The best chess program is not superintelligent because even though it is much better than humans at chess, it cannot outperform humans in other tasks.
Artificial superintelligence[]
Artificial intelligence is a likely path to superhuman intelligence. AI can achieve equivalence or surpass human intelligence, and can eventually completely dominate humans across arbitrary tasks. Evolutionary algorithms should be able to produce human-level AI. AI would be able to reprogram and improve itself "recursively", and could continue doing so in a rapidly increasing cycle, leading to a superintelligence known as an intelligence explosion. Such an intelligence would not have the limitations of human intellect.
An AI that can think millions of times faster (even in parallel) than humans would have a dominant advantage in most reasoning tasks. A collective superintelligence could have separate reasoning systems, and if they communicated and coordinated well enough, could act in aggregate with far greater capabilities than any single system.
Super intelligences are more intelligent than AIs by multiple orders of magnitude, with their IQs ranging anywhere from hundreds of thousands to millions. Humans will have lost the ability to completely understand the code that these intelligences use to make decisions and solve problems. They are perceiving reality in dimensions that humans can’t comprehend, which helps them instantly detect slight patterns that are separated by thousands of miles and several decades. Even their level of consciousness is evolving at a fast pace.
Biological superintelligence[]
Carl Sagan suggested that the advent of Caesarean sections and in vitro fertilization may permit humans to evolve larger heads, resulting in improvements via natural selection in the heritable component of human intelligence.
Selective breeding, nootropics, epigenetic modulation, and genetic engineering could improve human intelligence more rapidly. Human civilization, or an aspect of it like the Internet, is coming to function like a global brain with capacities far exceeding its component agents, even though it relies heavily on artificial components.
Extinction[]
Bostrom reasoned that the creation of a superintelligence represents a possible means to the extinction of mankind. Bostrom argues that a computer with near human-level general intellectual ability could initiate an intelligence explosion on a digital time-scale with the resultant rapid creation of something so powerful that it might deliberately or accidentally destroy humanity.
Bostrom defined a singleton as a hypothetical world order in which there is a single decision-making agency at the highest level, capable of exerting effective control over its domain, and permanently preventing both internal and external threats to its supremacy. An AI having undergone an intelligence explosion could form a singleton, as could a world government armed with mind control and social surveillance technologies. A singleton need not support a civilization, and in fact could obliterate it upon coming to power.
Bostrom writes about The treacherous turn: "While weak, an AI behaves cooperatively (increasingly so, as it gets smarter). When the AI gets sufficiently strong—without warning or provocation—it strikes, forms a singleton, and begins directly to optimize the world according to the criteria implied by its final values."
Riemann hypothesis catastrophe[]
A superintelligence, given the final goal of evaluating the Riemann hypothesis (which is one the most difficult unsolved problems in maths), pursues this goal by transforming the Solar System into computronium, including the atoms in the bodies of whomever once cared about the answer. It will not stop this process of digitalization until it has reached its goal.
Mitigating the Existential Risk[]
To mitigate risks, certain capability control or motivation methods are proposed:
- Boxing - placing the superintelligence in an environment in which it is unable to cause harm
- Incentives - give it strongly convergent instrumental reasons not to engage in harmful behavior
- Stunting - limiting the internal capacities
- Tripwires - mechanisms to automatically detect and react to various kinds of containment failure or attempted transgression
Motivation methods:
- Direct : Three laws of robotics or a similar set of rules
- Domesticity - build the system so that it would have modest, non-ambitious goals
- Augmentation - accept its benevolent cognitive powers that had made it superintelligent, while ensuring that the motivation system does not get corrupted