Weights and Biases
Weights and biases are the fundamental learnable parameters in neural networks. Weights scale how strongly each input influences a neuron's output, while biases shift the weighted sum so a neuron can produce a nonzero output even when all of its inputs are zero.
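As a minimal sketch of this (the specific weights, bias, and sigmoid activation below are illustrative choices, not from the text), a single neuron computes a weighted sum of its inputs plus a bias, then applies an activation function:

```python
import numpy as np

def sigmoid(z):
    """Squash a real number into the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def neuron_output(x, w, b):
    """Weighted sum of inputs plus bias, passed through an activation."""
    return sigmoid(np.dot(w, x) + b)

x = np.array([0.0, 0.0, 0.0])   # all-zero input
w = np.array([0.5, -0.3, 0.8])  # weights: per-input influence
b = 1.5                         # bias: shifts the pre-activation

# With zero inputs the weighted sum is 0, but the bias still yields
# a nonzero pre-activation, so the neuron can "fire" anyway.
print(neuron_output(x, w, b))   # sigmoid(1.5) ~ 0.82
```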
During training, these values are iteratively adjusted by gradient descent, with backpropagation computing the gradients, to minimize a loss that measures the difference between predicted outputs and actual targets. This adjustment process is what enables neural networks to "learn" from data.
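A minimal sketch of that adjustment for a single linear neuron with a squared-error loss (the synthetic data, learning rate, and loop length are illustrative assumptions; a deep network applies the same gradient rule layer by layer):

```python
import numpy as np

# Hypothetical one-neuron regression: y_hat = x @ w + b,
# loss = mean((y_hat - y)^2). All values are illustrative.
rng = np.random.default_rng(0)
x = rng.normal(size=(100, 3))
true_w, true_b = np.array([2.0, -1.0, 0.5]), 0.3
y = x @ true_w + true_b

w = np.zeros(3)
b = 0.0
lr = 0.1  # learning rate

for step in range(200):
    y_hat = x @ w + b
    error = y_hat - y
    # Gradients of the mean squared error with respect to w and b
    # (the single-layer case of backpropagation).
    grad_w = 2.0 * x.T @ error / len(y)
    grad_b = 2.0 * error.mean()
    # Step weights and bias against the gradient to reduce the loss.
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)  # converges toward true_w and true_b
```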
The pattern of weights across all connections forms the network's learned representation. Different weight configurations let the network detect different features in the input data, as the sketch below illustrates.
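As an illustrative sketch (the weight vectors and inputs below are hypothetical, not from the text), a neuron responds strongly when its input correlates with its weight pattern, so different weight patterns act as detectors for different features:

```python
import numpy as np

# Two hypothetical weight patterns acting as feature detectors.
edge_like = np.array([1.0, -1.0, 1.0, -1.0])  # responds to alternation
flat_like = np.array([0.5, 0.5, 0.5, 0.5])    # responds to uniform input

alternating_input = np.array([1.0, -1.0, 1.0, -1.0])
uniform_input = np.array([1.0, 1.0, 1.0, 1.0])

# The dot product measures how well an input matches a weight pattern.
print(np.dot(edge_like, alternating_input))  # 4.0 -> strong response
print(np.dot(edge_like, uniform_input))      # 0.0 -> no response
print(np.dot(flat_like, uniform_input))      # 2.0 -> strong response
```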