The perceptron is the fundamental building block of neural networks—a computational model inspired by biological neurons. Developed by Frank Rosenblatt in the late 1950s, this simple algorithm laid the groundwork for modern deep learning.

A perceptron works by taking multiple inputs, multiplying each by a weight, summing these weighted inputs together with a bias term, and passing the result through a threshold (step) activation function to produce a binary output. This simple structure performs binary classification by drawing a linear decision boundary in the input space.
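
To make the mechanics concrete, here is a minimal sketch in Python (assuming NumPy, with hand-picked weights, bias, and inputs chosen purely for illustration) of that forward pass with a step activation:

```python
import numpy as np

def perceptron_forward(inputs, weights, bias):
    """Weighted sum of the inputs plus a bias, passed through a step activation."""
    weighted_sum = np.dot(inputs, weights) + bias
    return 1 if weighted_sum > 0 else 0

# Illustrative values, not from any real dataset
x = np.array([0.5, -1.0])
w = np.array([0.8, 0.3])
b = 0.1
print(perceptron_forward(x, w, b))  # 0.8*0.5 + 0.3*(-1.0) + 0.1 = 0.2 > 0, so it prints 1
```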

The power of perceptrons comes from their ability to learn from data. Through training procedures such as the classic perceptron learning rule (and, in modern networks, gradient descent), they adjust their weights to reduce errors in their predictions. Though a single perceptron can only learn linearly separable functions (famously, it cannot compute XOR), a limitation that was once considered a dead end for neural networks, combining multiple perceptrons into multi-layer networks overcomes this restriction, enabling the representation of complex non-linear functions.
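
As an illustration of that learning process, the sketch below trains a single perceptron on the linearly separable AND function. It is only a toy example: it assumes NumPy, uses an arbitrary learning rate, and applies the error-driven perceptron learning rule rather than true gradient descent.

```python
import numpy as np

# Toy training set: the logical AND function, which is linearly separable
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

weights = np.zeros(2)
bias = 0.0
learning_rate = 0.1

# Perceptron learning rule: shift the weights toward each misclassified example
for epoch in range(20):
    for inputs, target in zip(X, y):
        prediction = 1 if np.dot(inputs, weights) + bias > 0 else 0
        error = target - prediction
        weights += learning_rate * error * inputs
        bias += learning_rate * error

print(weights, bias)  # a weight vector and bias that separate AND correctly
```

Because the data is linearly separable, the updates eventually stop changing anything, since every example is classified correctly; on a non-separable problem such as XOR, no single set of weights could ever satisfy all four examples.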

The modern neuron model still follows this basic structure—inputs, weights, sum, activation function—but with smoother activations such as ReLU and sigmoid, and with gradient-based training via backpropagation, which together allow for deeper networks and more complex learning tasks.
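
As a small, purely illustrative sketch (again assuming NumPy and arbitrary example values), the same neuron computation with a ReLU activation in place of the hard threshold looks like this:

```python
import numpy as np

def relu(z):
    """Rectified linear unit: a common modern activation function."""
    return np.maximum(0.0, z)

def neuron(inputs, weights, bias, activation=relu):
    """Same structure as the perceptron: weighted sum plus bias, then an activation."""
    return activation(np.dot(inputs, weights) + bias)

# Arbitrary example values; ReLU passes positive sums through unchanged,
# which keeps gradients usable for backpropagation in deeper networks
print(neuron(np.array([0.5, -1.0]), np.array([0.8, 0.3]), 0.1))  # approximately 0.2
```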