Neural network

A neural network is either a biological neural network, made up of biological neurons, or an artificial neural network (ANN), used to solve artificial intelligence (AI) problems.

Biological neural network
A biological neural network in a biological brain is composed of a vast network of chemically connected or functionally associated neurons. Connections are bio-electrical, and signals are passed within and out of the network via neurotransmitters: chemical messengers that carry a signal from a neuron across the synapse to a target cell.

A human brain is a massively parallel network with about 86 billion neurons. Each neuron has on average 7,000 synaptic connections to other neurons. A three-year-old child has about 10^15 synapses (1 quadrillion), and an adult has between 10^14 and 5 × 10^14 synapses (100 to 500 trillion). By contrast, the nematode worm Caenorhabditis elegans has just 302 neurons, making it an ideal model organism for mapping all of its neurons. The fruit fly, a common subject in biological experiments, has around 100,000 neurons and exhibits many complex behaviors.

The Human Brain Project aims to understand and simulate first a mouse brain and then a human brain, in addition to researching neuroinformatics, medical informatics, neuromorphic computing (brain-inspired computing) and neurorobotics (robotic simulations). Historically, digital computers evolved from the von Neumann model and operate by executing explicit instructions, with a number of processors accessing a separate memory. Unlike this model, neural network computing does not separate memory and processing.

Artificial neural network
An artificial neural network (ANN) is an interconnected group of artificial neurons implemented in software, following an adaptive, connectionist approach to computation. ANNs are used to model complex relationships between inputs and outputs or to find patterns in data. This complex global behavior makes them well suited to deep learning algorithms and AI.

ANNs are composed of:

 * artificial neurons, each with inputs and an output that can be sent to multiple other neurons
 * connections, each providing the output of one neuron as an input to another neuron
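The components above can be sketched as a single artificial neuron that computes a weighted sum of its inputs and passes it through an activation function. This is a minimal illustration: the sigmoid activation and the specific weights and biases are arbitrary choices, not values from the article.

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum of inputs plus a bias,
    passed through a sigmoid activation function."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid squashes output into (0, 1)

# The output of one neuron can serve as an input to other neurons,
# forming the connections described above.
hidden = neuron([0.5, 0.9], [0.4, -0.6], 0.1)
output = neuron([hidden], [1.2], -0.3)
```

Chaining such neurons into layers, with each output feeding later neurons, yields the interconnected structure of an ANN.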

Learning is the adaptation of the network to better handle a task: by observing examples, the network adjusts its weights and thresholds to improve the accuracy of its results and minimize observed errors. Learning is guided by a cost function, evaluated during training using statistics or approximated values. Learning can be supervised, unsupervised or reinforced.
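The weight-adjustment idea can be sketched with a single linear neuron trained by gradient descent to minimize squared error. This is a minimal, illustrative example; the dataset, learning rate and iteration count are arbitrary choices, not from the article.

```python
# Supervised learning as weight adjustment: observe examples, measure the
# error, and nudge the weight and bias to reduce it.
data = [([0.0], 0.0), ([1.0], 2.0), ([2.0], 4.0)]  # examples of y = 2x
w, b, lr = 0.0, 0.0, 0.1  # initial weight, bias (threshold), learning rate

for _ in range(200):
    for x, target in data:
        pred = w * x[0] + b          # the network's current output
        err = pred - target          # observed error (cost: squared error)
        w -= lr * err * x[0]         # adjust the weight to shrink the error
        b -= lr * err                # adjust the bias likewise
```

After training, `w` converges toward 2 and `b` toward 0, so the network has learned the input-output relationship from observation alone.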

A self-learning network has one input (situation) and one output (action or behavior). It computes both decisions about actions and emotions (feelings) about encountered situations and is driven by the interaction between cognition and emotion.

In 2017, a self-learning program called AlphaZero was released. Within 24 hours of training it achieved a superhuman level of play, defeating the world-champion chess program Stockfish. In its self-training it used 5,000 tensor processing units (TPUs) to generate the games and 64 TPUs to train the neural networks, all in parallel, with no access to opening books or endgame tables. After four hours of training, AlphaZero was playing chess at a higher Elo rating than Stockfish 8; after nine hours of training, it defeated Stockfish with 28 wins and 72 draws. After 34 hours of self-learning Go, AlphaZero played AlphaGo Zero and won 60 games to 40. AlphaZero inspired the computer chess community to develop Leela Chess Zero.

Applications of ANNs include:

 * Software in computer and video games or autonomous robots
 * Time series prediction and modeling
 * Data processing and compression algorithms
 * Game-playing and decision-making (chess)
 * Optical character recognition
 * Pattern and image recognition (radar systems, face identification on Facebook, object recognition with Google Lens)
 * Sequence recognition (gesture, speech, and handwritten text recognition)
 * Robotics, including directing manipulators and prostheses
 * Cybersecurity, including identifying malware and virus heuristics

Spiking neural network
Spiking neural networks (SNNs) are artificial neural networks that more closely mimic natural neural networks. They incorporate time into their operating model: a neuron fires only when its membrane electrical charge reaches a threshold, generating a signal that travels to other neurons. This model, in which a neuron fires at the moment of threshold crossing, is the spiking neuron model.
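The threshold-crossing behavior can be sketched with a simple leaky integrate-and-fire neuron, one common spiking neuron model. This is an illustrative sketch; the input current, leak factor and threshold values are arbitrary, not from the article.

```python
def simulate(input_current, threshold=1.0, leak=0.9, steps=50):
    """A leaky integrate-and-fire neuron: charge integrates over time,
    leaks away, and the neuron fires when the threshold is crossed."""
    potential, spikes = 0.0, []
    for t in range(steps):
        potential = potential * leak + input_current  # leaky integration
        if potential >= threshold:                    # threshold crossing
            spikes.append(t)                          # the neuron fires (spikes)
            potential = 0.0                           # membrane resets after firing
    return spikes

spikes = simulate(0.3)  # a steady input current produces periodic spikes
```

Because the output is a train of spike times rather than a continuous value, time itself carries information, which is what distinguishes SNNs from conventional ANNs.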

SNNs can be made to perform the same tasks as conventional ANNs, but they can also model the central nervous system of biological organisms and the operation of biological neural circuits.

SpiNNaker (Spiking Neural Network Architecture) is a massively parallel computing platform with over 1,000,000 cores and 7 TB of RAM. Each 18-core chip can simulate 16,000 neurons with eight million plastic synapses running in real time.