Let's imagine that we are sitting at a desk full of papers to be thrown away. A garbage can sits 5 meters away, and we are too lazy to gather the papers and walk over to throw them out. So we decide to crumple each paper into a ball and throw them, one by one, into the basket. We grab the first paper and throw it, but our first shot lands 1 meter short of the basket. We grab the second paper and, based on the result of the first shot, adjust the force with which we throw it; this time it lands 50 cm from the basket. Then we go for the third, adjust the force again using the result of the second shot, and this time we hit the target.
What we did in the previous exercise was nothing more than training a network of neurons in our brain: we gave it inputs (the force, angle, and direction of each throw), and the result of each shot was an output of that network that helped us train it. Let's break this down.
The primary component for training this network of neurons is the error. It gives us the information we need to train the network and obtain the expected result. When the first paper landed 1 meter from the basket, the error was 1 meter; based on that result, we adjusted the value of each neuron in the network so as to approach the desired output (landing the paper in the basket). Once the network is trained, we have learned to throw the papers into the basket, and that learning is saved so we can take advantage of it again later.
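The error-driven adjustment described above can be sketched in a few lines of code. This is a toy model, not real physics: the assumption that the distance traveled equals the force applied, and the correction factor of 0.5, are both illustrative choices; only the idea of "measure the error, then adjust" comes from the story.

```python
# A minimal sketch of error-driven adjustment, mirroring the paper-throwing
# story: each shot's error (distance from the basket) is used to correct
# the force of the next throw. The throw model and the 0.5 correction
# factor are illustrative assumptions.

TARGET = 5.0        # the basket sits 5 meters away
CORRECTION = 0.5    # how strongly we adjust after each shot

def throw(force):
    """Toy model: the ball travels a distance equal to the force applied."""
    return force * 1.0

force = 4.0  # our first shot lands 1 meter short, as in the story
for shot in range(1, 10):
    distance = throw(force)
    error = TARGET - distance          # how far we missed by
    print(f"shot {shot}: landed at {distance:.2f} m, error {error:.2f} m")
    if abs(error) < 0.01:              # close enough: we have "learned" the throw
        break
    force += CORRECTION * error        # adjust the force based on the error
```

Each iteration uses the previous error to correct the next throw, just as we did with the second and third papers, and the error shrinks with every shot.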
The same thing happens in artificial neural networks: for example, we can train a network to add two whole numbers and produce the result of that sum at its output.
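As a concrete illustration, a single linear neuron can learn addition from examples. The sketch below is a minimal assumption-laden version: one neuron of the form `w1*a + w2*b + bias`, trained with plain gradient descent on squared error; the learning rate, input range, and iteration count are arbitrary choices for the example.

```python
# A minimal sketch: one linear neuron (y = w1*a + w2*b + bias) trained by
# gradient descent to approximate the sum of two numbers. All hyperparameters
# here are illustrative assumptions.
import random

random.seed(0)
w1, w2, bias = random.random(), random.random(), 0.0
lr = 0.1  # learning rate

for _ in range(5000):
    # a random training example: two inputs and their true sum
    a, b = random.uniform(0, 1), random.uniform(0, 1)
    target = a + b
    pred = w1 * a + w2 * b + bias
    error = pred - target
    # gradient descent step on the squared error
    w1 -= lr * error * a
    w2 -= lr * error * b
    bias -= lr * error

print(w1 * 0.2 + w2 * 0.3 + bias)  # prints a value close to 0.5
```

After training, the weights converge toward `w1 = w2 = 1` and `bias = 0`, which is exactly the function "add the two inputs": the error on each example is what pushed them there.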
An artificial neuron (also called a perceptron) is the atomic unit of a neural network. It is nothing more than a mathematical function that receives input values and returns an output value according to the equation defined in that function.
A perceptron can also have input weights and a bias. The weights determine how sensitive the output is to each input, and the bias, as its name suggests, shifts the value the neuron tends to output. This will be discussed further later.
It should be noted that the error is the key to learning: measuring the error is what allows us to adjust the weights and biases until it decreases enough to consider the output valid. If we made no mistakes, we would not learn.