Random Walks
Random walks model paths in which each step is determined by chance, the classic example being a drunkard stumbling one step at a time in a random direction. They can be defined in one, two, or any higher number of dimensions.
A simple random walk on the integers starts at 0 and at each step moves +1 or -1 with equal probability. Despite their simplicity, random walks exhibit striking properties. Pólya's theorem states that a simple random walk on the integer lattice is recurrent in dimensions 1 and 2 (it returns to its starting point with probability 1) and transient in dimensions 3 and higher (there is a positive probability it never returns).
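The one-dimensional case is easy to simulate. Below is a minimal sketch in Python; the function name simple_random_walk and its parameters are illustrative, not from any library.

```python
import random

def simple_random_walk(n_steps: int, seed: int = 0) -> list:
    """Return positions S_0, S_1, ..., S_n of a walk starting at 0,
    where each step is +1 or -1 with equal probability."""
    rng = random.Random(seed)
    position = 0
    path = [position]
    for _ in range(n_steps):
        position += rng.choice((+1, -1))
        path.append(position)
    return path

path = simple_random_walk(1000, seed=42)
print(path[-1])        # final position after 1000 steps
print(0 in path[1:])   # did the walk revisit its starting point?
```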
After n steps of a simple random walk, the typical distance from the origin grows like √n, not n: the root-mean-square displacement is exactly √n, and the expected absolute distance is asymptotically √(2n/π). Randomness therefore explores space far more slowly than directed movement. The Central Limit Theorem makes this precise: the rescaled position S_n/√n converges to a standard normal distribution, which is why random walks look Gaussian at large scales.
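A quick Monte Carlo check of this scaling, as a sketch (the rms_distance helper and the sample sizes are illustrative choices):

```python
import math
import random

def rms_distance(n_steps: int, n_trials: int, seed: int = 0) -> float:
    """Estimate the root-mean-square displacement after n_steps."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_trials):
        position = sum(rng.choice((+1, -1)) for _ in range(n_steps))
        total += position ** 2
    return math.sqrt(total / n_trials)

for n in (100, 400, 1600):
    print(n, rms_distance(n, n_trials=5000), math.sqrt(n))
# The simulated RMS distance tracks sqrt(n): quadrupling n roughly doubles it.
```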
In machine learning, random walks underpin graph analysis, recommendation systems, MCMC sampling methods, and PageRank-style measures of node importance in networks.
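To illustrate the PageRank connection, here is a sketch of PageRank computed by power iteration, interpreting the ranks as the stationary distribution of a random walk with teleportation. The toy graph, the damping factor of 0.85, and the pagerank function are illustrative assumptions, not a reference implementation.

```python
import numpy as np

def pagerank(adj: np.ndarray, damping: float = 0.85, n_iter: int = 100) -> np.ndarray:
    """adj[i, j] = 1 if there is an edge from node i to node j."""
    n = adj.shape[0]
    out_degree = adj.sum(axis=1, keepdims=True)
    # Row-stochastic transition matrix; dangling nodes jump uniformly.
    transition = np.where(out_degree > 0, adj / np.maximum(out_degree, 1), 1.0 / n)
    rank = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        # With prob. damping follow an edge, else teleport to a uniform node.
        rank = (1 - damping) / n + damping * rank @ transition
    return rank

# Tiny 4-node example: node 3 is pointed to by everyone, so it ranks highest.
adj = np.array([[0, 1, 0, 1],
                [0, 0, 1, 1],
                [1, 0, 0, 1],
                [0, 0, 1, 0]])
print(pagerank(adj))
```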
Deep learning applications include graph neural networks that propagate information through graphs in a way closely related to random walks, node embedding models like node2vec and DeepWalk that capture structural relationships by feeding random walks on a graph into a word2vec-style model, and exploration strategies in reinforcement learning that balance random walks (exploration) against directed movement (exploitation).
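To make the embedding connection concrete, the following sketch shows only the walk-generation stage shared by DeepWalk and node2vec: unbiased walks over a toy graph produce "sentences" of node IDs for a downstream word2vec-style embedder. The graph, the generate_walks helper, and all parameters are illustrative assumptions, and node2vec's return/in-out bias parameters (p, q) are omitted.

```python
import random

def generate_walks(graph: dict, walks_per_node: int,
                   walk_length: int, seed: int = 0) -> list:
    """Generate unbiased random walks; each walk is a 'sentence' of node IDs."""
    rng = random.Random(seed)
    walks = []
    for _ in range(walks_per_node):
        for start in graph:
            walk = [start]
            for _ in range(walk_length - 1):
                neighbors = graph[walk[-1]]
                if not neighbors:
                    break  # dead end: stop this walk early
                walk.append(rng.choice(neighbors))
            walks.append(walk)
    return walks

# Toy undirected graph as an adjacency list.
graph = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"], "d": ["c"]}
for walk in generate_walks(graph, walks_per_node=1, walk_length=5):
    print(" ".join(walk))
```

In the full pipeline, these walks would be passed to a skip-gram model so that nodes appearing in similar walk contexts receive similar embeddings.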