Support Vector Machines

Support Vector Machines (SVMs) are supervised learning models for classification and regression. They find an optimal decision boundary between classes by maximizing the margin, the distance between the boundary and the closest data points, which are called support vectors.
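
To make this concrete, here is a minimal sketch using scikit-learn's SVC (the library choice, toy data, and parameter values are assumptions for illustration; the text names no specific implementation):

```python
# Minimal sketch: fitting a linear SVM with scikit-learn.
# The dataset and C value below are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC

# Toy 2-D data: two linearly separable classes.
X = np.array([[1, 2], [2, 3], [3, 3], [6, 5], [7, 8], [8, 8]])
y = np.array([0, 0, 0, 1, 1, 1])

clf = SVC(kernel="linear", C=1.0)  # larger C penalizes margin violations more
clf.fit(X, y)

# Only the support vectors define the boundary; the remaining
# training points could be removed without changing the model.
print(clf.support_vectors_)
print(clf.predict([[4, 4]]))
```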

Example: Imagine arranging colored balls on a table and finding the straight line that best divides the two colors. With kernels, SVMs can also handle data that is not linearly separable, by mapping it into a higher-dimensional space where a separating boundary exists.

They perform well with limited training data, are effective in high-dimensional spaces, and their decision function depends only on the support vectors rather than the full training set, which makes them memory efficient.

The kernel trick turns complex, non-linear problems into simpler ones by implicitly mapping data into a higher-dimensional space: the kernel function returns the inner product of two points in that space without ever computing the transformation itself. This lets an SVM find a linear separator in the new space, which corresponds to a non-linear boundary in the original one.
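
A small numeric sketch of this idea, using a degree-2 polynomial kernel (the particular kernel and feature map are standard textbook choices, shown here only for illustration):

```python
import numpy as np

def phi(x):
    """Explicit degree-2 feature map for 2-D input: (x1^2, sqrt(2)*x1*x2, x2^2)."""
    return np.array([x[0]**2, np.sqrt(2) * x[0] * x[1], x[1]**2])

def poly_kernel(x, z):
    """Kernel function: the same inner product, computed without building phi."""
    return np.dot(x, z) ** 2

x = np.array([1.0, 2.0])
z = np.array([3.0, 4.0])

# Both print 121.0: the kernel evaluates the inner product in the
# higher-dimensional space using only one dot product in the original space.
print(np.dot(phi(x), phi(z)))
print(poly_kernel(x, z))
```

This identity is exactly what lets an SVM train and predict from kernel values alone, never touching the high-dimensional coordinates.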

Common kernels include the polynomial, RBF (radial basis function), and sigmoid kernels. The trick makes non-linearly separable data tractable: each kernel evaluation works in the original input space, so there is no need to materialize the higher-dimensional features, which for the RBF kernel would be infinite-dimensional.
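
As a rough end-to-end sketch, an RBF-kernel SVM can separate concentric rings of points that no straight line could (the dataset, gamma, and C values are illustrative assumptions):

```python
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Two concentric circles: not linearly separable in 2-D.
X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SVC(kernel="rbf", gamma=1.0, C=1.0)  # gamma controls the kernel's width
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```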