Martingales
Martingales represent fair games: given the past, the expected future value equals the current value. Where Markov processes constrain which past information matters for the future, martingales constrain the expected value of the future, providing a powerful framework for analyzing processes whose predictions remain stable despite randomness.
Formally, a stochastic process {Xₙ} is a martingale if E[|Xₙ|] < ∞ for all n and E[Xₙ₊₁ | X₁, ..., Xₙ] = Xₙ for all n. This conditional expectation property encapsulates fairness: your expected fortune after the next play equals your current fortune.
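As a sanity check on the definition, the sketch below simulates the simplest example, a symmetric ±1 random walk, and verifies the martingale property empirically. The path and step counts are arbitrary illustration choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# A symmetric random walk X_n (a sum of independent +/-1 steps) is the
# classic martingale: each step has mean zero, so
# E[X_{n+1} | X_1, ..., X_n] = X_n.
n_paths, n_steps = 100_000, 20  # arbitrary illustration choices
steps = rng.choice([-1, 1], size=(n_paths, n_steps))
X = np.cumsum(steps, axis=1)

# Empirical check: among paths with the same value of X_10, the average
# of X_11 should match X_10 up to Monte Carlo noise.
for level in (-2, 0, 2):
    mask = X[:, 9] == level  # column 9 holds X_10 (0-indexed)
    print(f"X_10 = {level:+d}:  mean X_11 = {X[mask, 10].mean():+.3f}")
```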
Martingales provide powerful theoretical tools through results like the Optional Stopping Theorem and the Martingale Convergence Theorem. The Optional Stopping Theorem states that, under suitable conditions (for example, a bounded stopping time or bounded increments), a fair game remains fair even when stopped at a random time: the expected value at the stopping time equals the starting value. The Martingale Convergence Theorem guarantees that suitably bounded martingales converge to a limit. These results help analyze algorithms that involve random stopping times or exhibit convergent behavior despite randomness.
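Here is a minimal sketch of the Optional Stopping Theorem in action: in gambler's ruin, a fair walk started at a and stopped on hitting 0 or b satisfies E[X_τ] = a, which forces the probability of reaching b before 0 to be a/b. The function name and parameters below are illustrative, not from any library:

```python
import numpy as np

rng = np.random.default_rng(1)

def gamblers_ruin(start: int, target: int, n_trials: int) -> float:
    """Fraction of fair-game trials that hit `target` before going broke."""
    wins = 0
    for _ in range(n_trials):
        x = start
        while 0 < x < target:
            x += rng.choice((-1, 1))  # fair +/-1 step
        wins += (x == target)
    return wins / n_trials

# Optional stopping: E[X_tau] = a, so P(hit b before 0) = a / b.
a, b = 3, 10  # arbitrary illustration choices
print(f"simulated: {gamblers_ruin(a, b, 20_000):.3f}   theory: {a / b:.3f}")
```

With a = 3 and b = 10, the simulated frequency should land near 0.3, matching a/b up to Monte Carlo noise.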
In machine learning, martingales provide theoretical foundations for online learning algorithms, adaptive sampling strategies, and sequential testing procedures.
Stochastic gradient descent, the workhorse of deep learning optimization, can be analyzed with martingale theory: the noise in its gradient estimates forms a martingale difference sequence, which is key to establishing convergence guarantees. Multi-armed bandit algorithms for exploration-exploitation tradeoffs likewise rely on martingale concentration inequalities, such as the Azuma-Hoeffding inequality, for their theoretical performance bounds.
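As one concrete instance, the sketch below compares the Azuma-Hoeffding bound, P(|Xₙ − X₀| ≥ t) ≤ 2·exp(−t²/(2nc²)) for a martingale with increments bounded by c, against the empirical tail of a bounded random walk. The constants n, c, and t are arbitrary illustration choices:

```python
import numpy as np

rng = np.random.default_rng(2)

# Azuma-Hoeffding: for a martingale whose increments are bounded by c,
#   P(|X_n - X_0| >= t) <= 2 * exp(-t**2 / (2 * n * c**2)).
# This is the concentration inequality behind many bandit regret bounds.
n, c, n_paths = 100, 1.0, 200_000  # arbitrary illustration choices
X_n = rng.choice([-c, c], size=(n_paths, n)).sum(axis=1)

for t in (10, 20, 30):
    empirical = np.mean(np.abs(X_n) >= t)
    bound = 2 * np.exp(-t**2 / (2 * n * c**2))
    print(f"t={t:2d}:  empirical tail {empirical:.4f}  <=  Azuma bound {bound:.4f}")
```

The bound is loose but always valid, which is exactly what bandit analyses need: a worst-case guarantee that holds uniformly, not a tight estimate of the tail.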