Superintelligence

A superintelligence is an artificial or biological entity that possesses intelligence far surpassing that of the brightest and most gifted human minds. It may be associated with an intelligence explosion or technological singularity.

Philosopher Nick Bostrom, who popularized the concept, defines a superintelligence as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest". A top chess program, for example, is not superintelligent: although it far outperforms humans at chess, it cannot outperform them at other tasks.

Artificial superintelligence
Artificial intelligence is a likely path to superhuman intelligence. Researchers argue that AI could first match and then surpass human intelligence, eventually dominating humans across arbitrary tasks; some hold that evolutionary algorithms alone could produce human-level AI. Such an AI might be able to reprogram and improve itself "recursively", and could continue doing so in a rapidly accelerating cycle, a process known as an intelligence explosion, culminating in a superintelligence. Such an intelligence would not have the limitations of human intellect.

An AI that can think millions of times faster than humans (and in parallel) would have a dominant advantage in most reasoning tasks. A collective superintelligence could consist of separate reasoning systems that, if they communicated and coordinated well enough, would act in aggregate with far greater capabilities than any single system.
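The scale of a speed advantage is easy to understate. As a back-of-the-envelope illustration (the millionfold speedup figure is assumed, not a claim from the literature), we can compute how much subjective thinking time such a system would experience per wall-clock day:

```python
# Illustrative arithmetic: subjective time experienced by a system that
# "thinks" SPEEDUP times faster than real time. The speedup value is an
# assumption for the sake of the example.
SPEEDUP = 1_000_000
SECONDS_PER_DAY = 24 * 60 * 60
SECONDS_PER_YEAR = 365.25 * SECONDS_PER_DAY

subjective_seconds = SECONDS_PER_DAY * SPEEDUP
subjective_years = subjective_seconds / SECONDS_PER_YEAR
# roughly 2,700 subjective years of thought per wall-clock day
```

At this assumed rate, every human day corresponds to millennia of uninterrupted reasoning for the machine, which is the intuition behind calling the advantage "dominant".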

Biological superintelligence
Carl Sagan suggested that the advent of Caesarean sections and in vitro fertilization may permit humans to evolve larger heads, resulting in improvements via natural selection in the heritable component of human intelligence.

Selective breeding, nootropics, epigenetic modulation, and genetic engineering could improve human intelligence more rapidly. Alternatively, human civilization, or an aspect of it such as the Internet, may come to function as a global brain with capacities far exceeding those of its component agents, though one that relies heavily on artificial components.

Extinction
Bostrom argued that the creation of a superintelligence represents a possible path to the extinction of mankind. In his view, a computer with near human-level general intellectual ability could initiate an intelligence explosion on a digital time-scale, rapidly creating something so powerful that it might deliberately or accidentally destroy humanity.

Bostrom defined a singleton as a hypothetical world order in which there is a single decision-making agency at the highest level, capable of exerting effective control over its domain, and permanently preventing both internal and external threats to its supremacy. An AI having undergone an intelligence explosion could form a singleton, as could a world government armed with mind control and social surveillance technologies. A singleton need not support a civilization, and in fact could obliterate it upon coming to power.