Neural networks store knowledge in weights: numerical values that connect neurons and determine how information flows through the network.

Think of these weights as the "memory" of the network. Just as your brain forms connections between neurons when you learn something new, a neural network adjusts its weights during training. When recognizing images, some weights might become sensitive to edges, others to textures, and some to specific shapes like cat ears or human faces.
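To make that adjustment process concrete, here is a minimal sketch of a single weight being nudged by gradient descent. The weight `w`, the toy input `x`, the target `y`, and the learning rate are all illustrative values, not part of any real network:

```python
# A minimal sketch: one weight learning from one example via gradient descent.
# The names (w, x, y, learning_rate) are illustrative, not from a real model.

x, y = 2.0, 10.0       # toy input and target output
w = 0.5                # the weight starts at an arbitrary value
learning_rate = 0.05

for step in range(20):
    prediction = w * x               # forward pass: information flows through w
    loss = (prediction - y) ** 2     # squared error against the target
    grad = 2 * (prediction - y) * x  # d(loss)/dw by the chain rule
    w -= learning_rate * grad        # nudge the weight to reduce the loss

print(f"learned weight: {w:.3f}")    # converges toward y / x = 5.0
```

Training a real network repeats this same update across millions of weights at once, but the principle is identical: each weight moves a little in the direction that reduces the error.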

The combination of millions of these weights creates a complex "knowledge web" that transforms raw data (like pixel values) into meaningful predictions (like "this is a cat").
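As a rough sketch of that transformation, the toy network below maps a handful of fake "pixel" values to a cat-vs-not-cat score. The layer sizes and the randomly initialized weight matrices are assumptions for illustration, standing in for the millions of trained weights a real network would have:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image": 16 pixel intensities in [0, 1]. Real inputs are far larger.
pixels = rng.random(16)

# Two weight matrices form the "knowledge web". In a trained network these
# values would encode learned patterns; here they are random placeholders.
W1 = rng.normal(size=(16, 8))   # input layer  -> hidden layer
W2 = rng.normal(size=(8, 1))    # hidden layer -> output score

hidden = np.maximum(0, pixels @ W1)        # weighted sums, then a ReLU nonlinearity
score = 1 / (1 + np.exp(-(hidden @ W2)))   # sigmoid squashes the score to (0, 1)

print(f"P(cat) = {score.item():.3f}")      # meaningful only after training
```

With random weights the output is noise; training is what shapes `W1` and `W2` so that the same forward pass produces a meaningful prediction.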

Neural networks encode knowledge through distributed representations across layers of weighted connections. Unlike traditional programs with explicit rules, neural networks store information implicitly in their parameter space.
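One way to see that the knowledge lives entirely in the parameters: copy a network's weights into a freshly built network of the same shape, and the two produce identical outputs. The tiny two-layer setup below is a hypothetical illustration, not any specific framework's API:

```python
import numpy as np

def forward(params, x):
    """Forward pass through a tiny two-layer network defined only by its weights."""
    h = np.maximum(0, x @ params["W1"] + params["b1"])  # hidden layer (ReLU)
    return h @ params["W2"] + params["b2"]              # output layer

rng = np.random.default_rng(42)
trained = {                                  # stand-in for trained parameters
    "W1": rng.normal(size=(4, 3)), "b1": rng.normal(size=3),
    "W2": rng.normal(size=(3, 1)), "b2": rng.normal(size=1),
}

x = rng.random(4)

# A "new" network with the same weights behaves identically: the parameters
# are the knowledge; nothing else about the network needs to be transferred.
restored = {name: value.copy() for name, value in trained.items()}
assert np.allclose(forward(trained, x), forward(restored, x))
print("identical outputs from identical weights")
```

This is also why saving a model amounts to saving its parameter values: the architecture is just a fixed recipe for how those values are used.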

Each weight represents a small piece of the overall knowledge, and it is the pattern of weights working together that produces intelligent behavior. For example:

- In image recognition, early layers might store edge detectors, middle layers might recognize textures and shapes, and deeper layers represent complex concepts like "whiskers" or "tail" (see the convolution sketch after this list).
- In language models, weights encode grammatical rules, word associations, and even factual knowledge, without any of these rules being explicitly programmed.
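To ground the "early layers store edge detectors" claim, the sketch below applies a hand-written vertical-edge kernel to a tiny synthetic image. Kernels with a similar shape tend to emerge in the first layer of trained convolutional networks, but this particular 3x3 kernel and 5x5 image are illustrative assumptions:

```python
import numpy as np

# Tiny synthetic image: dark on the left, bright on the right (a vertical edge).
image = np.zeros((5, 5))
image[:, 3:] = 1.0

# A hand-written vertical-edge kernel. Trained first-layer convolution weights
# often resemble detectors like this one, though the exact values differ.
kernel = np.array([[-1.0, 0.0, 1.0],
                   [-1.0, 0.0, 1.0],
                   [-1.0, 0.0, 1.0]])

# Valid convolution (really cross-correlation, as in most deep learning libraries).
out = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        out[i, j] = np.sum(image[i:i+3, j:j+3] * kernel)

print(out)  # the largest responses appear where left/right contrast is strongest
```

A trained network learns many such kernels from data rather than having them written by hand, and later layers combine their responses into progressively more abstract features.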