Backpropagation in Nature

Prerequisites

This post assumes significant familiarity with the backpropagation algorithm. If you aren’t familiar with it, please read my explanation of backpropagation first.

Interpreting backpropagation errors

In any standard mathematical analysis of neural networks, the weights are everything; the nodes are just uninterpretable intermediate calculations. But backpropagation assigns an interpretable error to each of these nodes. If you recall, a node's error is the partial derivative of the loss with respect to that node's value, treating that node's layer as a fixed input to the rest of the network.
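To make the definition concrete, here it is as a formula. The notation is my own choice, not from my original explanation: $a^{(l)}_i$ is the value of node $i$ in layer $l$, $L$ is the loss, and $\delta^{(l)}_i$ is the error backpropagation assigns to that node:

$$\delta^{(l)}_i = \frac{\partial L}{\partial a^{(l)}_i}$$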

This is best expressed through a layer-centric analysis. Treat a given layer's node values as the input to a single function, namely the rest of the network, whose output is the loss; the earlier layers do not exist. Then the error of a node in that layer is exactly the contribution of that node: the rate at which the loss changes as that node's value changes.
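As a sanity check on this reading, here is a minimal sketch. The tiny two-layer network, the variable names, and the use of NumPy are my own choices for illustration, not anything from the explanation above. It computes the backpropagated error of each hidden node, then perturbs each hidden node's value directly, treating the hidden layer as a fixed input to the rest of the network, and confirms that the two quantities match.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))   # input -> hidden weights (hypothetical toy sizes)
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
x = rng.normal(size=3)
y = 1.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Forward pass: hidden node values, scalar output, squared-error loss.
h = sigmoid(x @ W1)
out = (h @ W2).item()
loss = 0.5 * (out - y) ** 2

# Backward pass: the error (delta) of each hidden node, i.e. dL/dh_i.
d_out = out - y
delta_h = d_out * W2[:, 0]

# Layer-centric check: treat the hidden layer as the fixed input to the
# rest of the network and perturb each node's value directly.
eps = 1e-6
for i in range(len(h)):
    h_plus = h.copy()
    h_plus[i] += eps
    loss_plus = 0.5 * ((h_plus @ W2).item() - y) ** 2
    print(f"node {i}: backprop error {delta_h[i]:+.6f}, "
          f"numerical derivative {(loss_plus - loss) / eps:+.6f}")
```

Running this prints two nearly identical columns per node: the analytic error from the backward pass and the numerical derivative of the loss with respect to that node's value.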

Single layer to multi-layer backpropagation

Recurrent single layer backpropagation