How does a neural network work: weights of neurons
One of the most important elements of a neural network is its ability to learn. A neural network is a complex adaptive system, which means that it can change its internal structure based on data. It does this by adjusting weights. Each connection between neurons has a weight: a number that controls the signal between the two neurons. This weight determines how strongly the signal arrives at the target neuron through that connection, and it can be positive (stimulating) or negative (inhibiting).
In effect, each connected pair of neurons (A, B) in the network has a weighted function that indicates how the firing of A influences B. If the network generates the correct output, the weights need no adjustment. If the network generates an incorrect output, however, the system adjusts the weights to improve future results.
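The idea of a weight per connection can be sketched in a few lines of code. This is a toy illustration, not a real framework: the neuron names and weight values are invented.

```python
# Weights stored per connection (source, target). A positive weight
# stimulates the target neuron; a negative weight inhibits it.
weights = {
    ("A", "B"): 0.8,   # A strongly stimulates B
    ("A", "C"): -0.5,  # A inhibits C
}

def signal_received(source_output, source, target):
    """Signal arriving at `target`: the source's output scaled by the connection weight."""
    return source_output * weights[(source, target)]

print(signal_received(1.0, "A", "B"))  # 0.8
print(signal_received(1.0, "A", "C"))  # -0.5
```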
How does a neural network work: the threshold value
The threshold is also important: a signal is passed on only if the aggregated input crosses that threshold level. This is how it works: when the network is active, the neuron receives a data item (a different number) over each of its connections and multiplies it by the corresponding weight. It then sums the resulting products into a single number. Only if that number exceeds the threshold value does the neuron pass data on to the next neuron.
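The steps above, multiplying each input by its weight, summing the products, and comparing the sum against the threshold, can be sketched as follows. The input values, weights, and thresholds are invented for illustration.

```python
def neuron_fires(inputs, weights, threshold):
    """Weighted sum of the inputs, compared against the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return total > threshold

inputs = [0.5, 0.9, 0.2]
weights = [1.0, -0.4, 2.0]
# weighted sum = 0.5 - 0.36 + 0.4 = 0.54
print(neuron_fires(inputs, weights, threshold=0.5))  # True: 0.54 exceeds 0.5
print(neuron_fires(inputs, weights, threshold=0.6))  # False: 0.54 does not exceed 0.6
```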
How does a neural network work: the changing network
The interaction between the processors in a neural network is adaptive: new connections between processors can arise, and existing connections can be strengthened, weakened, or broken. This means that a neural network can learn. In this context, "learning" refers to the automatic adjustment of the parameters of the system, so that the system can generate the correct output for a given input. Information that flows through the network affects its structure, because the network changes (learns) based on that input and output.
How does a neural network work: an example
In other words: neurons are activated via weighted connections from previously active neurons. The receiving neuron processes the signal and then either does or does not send a signal to the following neurons. Inputs that contribute to correct answers are weighted higher. For example, if nodes David, Dianne, and Dakota tell node Ernie that the current input image is a photo of Brad Pitt, but node Durango says it's Betty White, and the training program confirms it's Pitt, Ernie will reduce the weight it assigns to Durango's input and increase the weight it gives to David, Dianne, and Dakota.
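A toy version of that example can make the update concrete. The node names, starting weights, and learning rate below are all illustrative; real networks adjust weights via gradients rather than a fixed nudge.

```python
# Ernie weighs the votes of four upstream nodes.
weights = {"David": 0.5, "Dianne": 0.5, "Dakota": 0.5, "Durango": 0.5}
votes = {"David": "Pitt", "Dianne": "Pitt", "Dakota": "Pitt", "Durango": "White"}
correct = "Pitt"  # the training program confirms the answer
lr = 0.1          # how far each weight is nudged

for node, vote in votes.items():
    if vote == correct:
        weights[node] += lr  # reward nodes that voted correctly
    else:
        weights[node] -= lr  # penalize the node that voted wrong

print(weights)  # Durango's weight falls; the other three rise
```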
How does a neural network work: backpropagation
The network is trained by presenting it with input alongside the corresponding desired output. During this supervised phase, the network compares its actual output with what it was intended to produce, that is, the desired output. The difference between the two is reduced using backpropagation: the network works backwards from output to input, adjusting weights and thresholds (or functions) until the difference between the actual and the desired output is as small as possible. These adjustments are usually gradual, and a training session consists of many (often thousands of) iterations.
A second step in the learning process is offering the neural network new input and checking the result that it yields. In this way it can be determined to what extent the network is “skilled” and how reliable the results are.
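Both phases can be sketched with a single trainable neuron. This is a deliberately minimal stand-in for backpropagation: with only one layer, "working backwards" reduces to the gradient step below, while a real network applies the same rule layer by layer. The task (learning logical OR), learning rate, and iteration count are all invented for illustration.

```python
import math

def sigmoid(z):
    """Squashes the weighted sum into a value between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-z))

# Training phase: inputs paired with the desired output (logical OR).
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w = [0.0, 0.0]
b = 0.0
lr = 1.0

for _ in range(5000):  # a training session of many iterations
    for x, target in data:
        out = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        error = out - target                 # actual minus desired output
        grad = error * out * (1 - out)       # gradient at the pre-activation
        w[0] -= lr * grad * x[0]             # nudge each weight backwards
        w[1] -= lr * grad * x[1]
        b -= lr * grad

# Second step: check how "skilled" the trained network is on each input.
for x, target in data:
    out = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
    print(x, round(out), "desired:", target)
```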