A neural network is either a biological neural network, made up of biological neurons, or an artificial neural network (ANN), used for solving artificial intelligence (AI) problems.
Biological neural network
A biological neural network in a biological brain is composed of a vast network of chemically connected or functionally associated neurons. Connections are electrochemical, and signals are sent within and out of the network via neurotransmitters, chemical messengers that transmit a signal from a neuron across the synapse to a target cell.
A human brain is a massively parallel network of about 86 billion neurons. Each neuron has on average 7,000 synaptic connections to other neurons. A three-year-old child has about 10¹⁵ synapses (1 quadrillion), and an adult has between 10¹⁴ and 5 × 10¹⁴ synapses (100 to 500 trillion). By contrast, the nematode worm Caenorhabditis elegans has just 302 neurons, making it an ideal model in which to map every neuron. The fruit fly, a common subject in biological experiments, has around 100,000 neurons and exhibits many complex behaviors. See "List of animals by number of neurons."
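These figures are roughly self-consistent, as the short back-of-envelope sketch below shows; it simply multiplies the neuron and synapse counts quoted above (illustrative arithmetic only):

```python
# Back-of-envelope check of the synapse figures quoted above.
neurons = 86e9             # ~86 billion neurons in an adult human brain
synapses_per_neuron = 7e3  # ~7,000 synaptic connections per neuron, on average

total = neurons * synapses_per_neuron
print(f"{total:.1e}")      # ~6.0e14, the same order as the quoted 1e14-5e14
```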
Nick Bostrom writes in his book Superintelligence: "...knowing merely which neurons are connected with which is not enough. To create a brain emulation one would also need to know which synapses are excitatory and which are inhibitory; the strength of the connections; and various dynamical properties of axons, synapses, and dendritic trees."
He also compares the human brain with digital computers on several other factors:
- Biological neurons operate at a peak speed of about 200 Hz, orders of magnitude slower than a CPU in the GHz range.
- Axons carry action potentials at speeds of 120 m/s or less, whereas electronic processing cores can communicate optically at the speed of light.
- Eighty-six billion neurons versus indefinitely scalable supercomputers with millions of cores, each simulating thousands of neurons, with vastly faster interconnects between CPUs.
- Human working memory can hold no more than some four or five chunks of information at any given time, whereas RAM can be scaled indefinitely.
- Human long-term memory is also limited to about one billion bits, many orders of magnitude less than a smartphone's storage, let alone a connection to the entire internet or the trillions of parameters available to an LLM.
- Biological neurons are less reliable than transistors. Brains become fatigued.
- Digital minds are also editable, duplicable, can achieve uniformity of purpose, can share memory and can be upgraded.
The Human Brain Project aims to understand and simulate first a mouse brain and then a human brain, in addition to researching neuroinformatics, medical informatics, neuromorphic computing (brain-inspired computing), and neurorobotics (robotic simulations).
The Human Connectome Project aims to map the human brain by building a connectome, a comprehensive map of the neural connections in the brain. In theory, if one could replace every neuron with a transistor, one would have a working model of a brain. The largest transistor counts in a CPU are in the many-billion range, but we are not close to emulating the neural complexity of a human brain, and we are even further from creating consciousness and sapience.
As the table below shows, the human body sends about 11 million bits per second to the brain for processing, yet the conscious mind seems able to process only 50 to 200 bits per second. This implies an enormous amount of filtering and compression between raw sensory input and conscious thought. In any event, if we are to converse with ANNs and AIs at their level, we would need to be computer-augmented to keep up with their rate. This is one of the potential goals of Neuralink.
| Sensory system | Bits per second |
|---|---|
| Eyes | 10,000,000 |
| Skin | 1,000,000 |
| Ears | 100,000 |
| Smell | 100,000 |
| Taste | 1,000 |
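A quick calculation with the table's figures shows the scale of that reduction (a rough sketch; the 50 bits per second conscious figure is the lower bound quoted above):

```python
# Rough estimate of the sensory-to-conscious compression factor.
sensory_bits_per_s = {
    "eyes": 10_000_000,
    "skin": 1_000_000,
    "ears": 100_000,
    "smell": 100_000,
    "taste": 1_000,
}
total_in = sum(sensory_bits_per_s.values())  # ~11.2 million bits/s
conscious = 50                               # lower bound from the text, bits/s

print(total_in)               # 11201000
print(total_in / conscious)   # ~224,000x reduction before conscious processing
```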
Historically, digital computers evolved from the von Neumann model, in which one or more processors execute explicit instructions and access a separate memory. Neural network computing, by contrast, does not separate memory and processing.
Artificial neural network
An artificial neural network (ANN) is an interconnected group of artificial neurons implemented in software with an adaptive, connectionist approach. ANNs are used to model complex relationships between inputs and outputs or to find patterns in data, and this complex global behavior makes them well suited to deep learning algorithms and AI.
ANNs are composed of:
- artificial neurons with inputs and outputs which can be sent to multiple other neurons
- connections, each connection providing the output of one neuron as an input to another neuron
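As a minimal sketch of these two building blocks, the snippet below implements a single artificial neuron as a weighted sum of its inputs passed through a sigmoid activation (NumPy is assumed; all input and weight values are made up for illustration):

```python
import numpy as np

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs plus bias,
    squashed by a sigmoid activation."""
    z = np.dot(weights, inputs) + bias
    return 1.0 / (1.0 + np.exp(-z))   # output in (0, 1)

# Three inputs feeding one neuron; its output could in turn be wired
# as an input (a "connection") to many other neurons.
x = np.array([0.5, -1.2, 3.0])        # illustrative inputs
w = np.array([0.4, 0.6, -0.1])        # illustrative weights
print(neuron(x, w, bias=0.1))
```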
Learning is the adaptation of the network to better handle a task, achieved by observing examples and adjusting the weights and thresholds of the network to improve the accuracy of the result and minimize observed errors. A cost function is evaluated during learning, using statistics or approximated values. Learning can be supervised, unsupervised, or reinforcement-based.
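A minimal sketch of supervised learning in this sense, assuming a single sigmoid neuron, a squared-error cost function, and gradient descent on a made-up toy dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))              # toy dataset: 100 samples, 2 features
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # target: 1 if the features sum > 0

w = np.zeros(2)   # weights, adjusted during learning
b = 0.0           # bias (threshold)
lr = 0.5          # learning rate

for epoch in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # forward pass (sigmoid neuron)
    err = p - y                             # observed errors
    # Gradient of the squared-error cost with respect to w and b
    grad = err * p * (1 - p)
    w -= lr * (X.T @ grad) / len(X)
    b -= lr * grad.mean()

print(((p > 0.5) == y).mean())   # training accuracy after learning
```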
A self-learning network has one input (situation) and one output (action or behavior). It computes both decisions about actions and emotions (feelings) about encountered situations and is driven by the interaction between cognition and emotion.
In 2017, a self-learning program called AlphaZero was released. Within 24 hours of training it achieved a superhuman level of play, defeating the world-champion chess program Stockfish. In its self-training it used 5,000 tensor processing units (TPUs) to generate games and 64 TPUs to train the neural networks, all in parallel, with no access to opening books or endgame tables. After four hours of training, AlphaZero was playing chess at a higher Elo rating than Stockfish 8; after nine hours of training, it defeated Stockfish with 28 wins and 72 draws. After 34 hours of self-learning of Go, AlphaZero won 60 games against AlphaGo Zero and lost 40. AlphaZero inspired the computer chess community to develop Leela Chess Zero.
AlphaZero trains through self-play reinforcement learning, pitting successive versions of its own neural network against each other and learning from the outcomes. This adversarial setup is loosely reminiscent of a generative adversarial network (GAN), an architecture that takes two neural networks trained on the same dataset and pits them against each other in a digital cat-and-mouse game: a generator tries to produce convincing fakes while a discriminator tries to tell them apart from real data.
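A minimal sketch of the GAN idea, assuming the simplest possible setup: a linear generator and a logistic-regression discriminator, with the gradients written out by hand and all hyperparameters illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Generator: fake = a*z + c, learns to turn noise into data-like samples.
a, c = 1.0, 0.0
# Discriminator: D(x) = sigmoid(w*x + b), scores how "real" x looks.
w, b = 0.1, 0.0
lr, n = 0.05, 64

for step in range(5000):
    real = rng.normal(4.0, 1.0, n)   # real dataset: samples from N(4, 1)
    z = rng.normal(0.0, 1.0, n)      # noise fed to the generator
    fake = a * z + c

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    dr, df = sigmoid(w * real + b), sigmoid(w * fake + b)
    gw = ((dr - 1) * real).mean() + (df * fake).mean()
    gb = (dr - 1).mean() + df.mean()
    w -= lr * gw
    b -= lr * gb

    # Generator update: push D(fake) toward 1, i.e. fool the discriminator.
    df = sigmoid(w * fake + b)
    dx = (df - 1) * w                # gradient of -log D(fake) w.r.t. fake
    a -= lr * (dx * z).mean()
    c -= lr * dx.mean()

print(a, c)  # c drifts toward 4 (and |a| toward 1): fakes resemble N(4, 1)
```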
A large language model (LLM) such as the one behind ChatGPT is among the world's largest ANNs, with parameter counts running into the trillions (parameters correspond to the learned connection weights between artificial neurons, not to the neurons themselves).
In science fiction, a positronic brain uses the efficiency of antimatter collisions to enhance the power of ANNs.
Applications of ANNs include:
- Software in computer and video games or autonomous robots
- Time series prediction and modeling
- Data processing and compression algorithms
- Game-playing and decision making (chess)
- Optical character recognition
- Pattern and image recognition (radar systems, face identification / Facebook, object recognition / Google Lens)
- Sequence recognition (gesture, speech, handwritten text recognition)
- Robotics, including directing manipulators and prostheses
- Cybersecurity, including identifying malware and virus heuristics
Spiking neural network
Spiking neural networks (SNNs) are artificial neural networks that more closely mimic natural neural networks. They incorporate time into their operating model: a neuron transmits a signal only when its membrane electrical charge reaches a threshold, at which point it fires and generates a signal that travels to other neurons. This model of a neuron that fires at the moment of threshold crossing is the spiking neuron model.
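A minimal sketch of this threshold-crossing behavior, assuming a leaky integrate-and-fire neuron (a common spiking neuron model) with illustrative constants:

```python
import numpy as np

# Leaky integrate-and-fire (LIF) neuron: the membrane potential leaks back
# toward rest, input current charges it up, and crossing the threshold
# emits a spike, after which the potential resets.
dt, tau = 1.0, 20.0               # time step and membrane time constant (ms)
v_rest, v_thresh, v_reset = 0.0, 1.0, 0.0
v = v_rest
spike_times = []

# Input: silence for 20 ms, then a constant driving current (arbitrary units).
current = np.concatenate([np.zeros(20), 1.5 * np.ones(180)])

for t, i_in in enumerate(current):
    v += (dt / tau) * (v_rest - v + i_in)  # leak toward rest + integrate input
    if v >= v_thresh:                      # membrane charge crosses threshold:
        spike_times.append(t)              # the neuron fires a spike...
        v = v_reset                        # ...and its potential resets

print(spike_times)  # spikes appear at regular intervals once the input turns on
```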
SNNs can be made to work the same as ANNs but can also model the central nervous system of biological organisms and the operation of biological neural circuits.
SpiNNaker (Spiking Neural Network Architecture) is a massively parallel computing platform with over 1,000,000 cores and 7 TB of RAM. Each 18-core chip can simulate 16,000 neurons with eight million plastic synapses running in real time.
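A back-of-envelope estimate of the total capacity these figures imply (illustrative arithmetic using only the numbers quoted above):

```python
# Back-of-envelope capacity estimate from the SpiNNaker figures above.
cores = 1_000_000
cores_per_chip = 18
neurons_per_chip = 16_000
synapses_per_chip = 8_000_000

chips = cores // cores_per_chip     # ~55,500 chips
print(chips * neurons_per_chip)     # ~0.9 billion neurons simulated
print(chips * synapses_per_chip)    # ~4.4e11 plastic synapses
# Roughly 1% of the ~86 billion neurons in a human brain.
```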
News
- 2023: The first-ever complete map of an insect brain, that of a fruit fly larva, was achieved