Image Filters & Convolution
Explore how convolution kernels transform images. Apply different filters and see the mathematical operations in action.
How Convolution Works
Sliding Window
The kernel moves across the image pixel by pixel. At each position, it multiplies the kernel values with the corresponding pixel values and sums them up.
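As a concrete sketch, the sliding window can be written as two nested loops (a minimal illustration, not a library routine; the name `convolve2d_naive` and the border handling are our own choices):

```python
import numpy as np

def convolve2d_naive(image, kernel, divisor=1):
    """Slide a 3x3 kernel over the image, summing elementwise products."""
    h, w = image.shape
    out = np.zeros_like(image, dtype=float)
    # Skip the 1-pixel border so the kernel always stays inside the image.
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = image[y - 1:y + 2, x - 1:x + 2]
            out[y, x] = np.sum(window * kernel) / divisor
    return np.clip(out, 0, 255)

# A flat gray image stays unchanged under a normalized blur kernel.
img = np.full((5, 5), 100.0)
box = np.ones((3, 3))
print(convolve2d_naive(img, box, divisor=9)[2, 2])  # interior pixel -> 100.0
```

Strictly speaking this computes cross-correlation (the kernel is not flipped); for the symmetric kernels used in this article the two operations give identical results.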
Normalization
The divisor normalizes the result to keep pixel values in the valid range (0-255). For blur kernels, the divisor equals the sum of the kernel weights, so the average brightness of the image is preserved.
Edge Detection
Kernels like Sobel and Laplacian detect edges by finding areas with high pixel intensity changes. Positive and negative values highlight different edge directions.
CNN Foundation
Convolutional Neural Networks learn optimal kernels automatically. Understanding manual filters helps grasp how CNNs extract features from images.
Mathematical Foundation
Convolution Operation
For a 3×3 kernel K applied to an image I, the output pixel value at position (x, y) is computed as:
Output(x, y) = (Σᵢ Σⱼ I(x+i-1, y+j-1) × K(i, j)) / divisor, where i, j ∈ {0, 1, 2}
Example: Gaussian Blur
The Gaussian blur kernel approximates a 2D Gaussian distribution, giving more weight to the center pixel:
┌         ┐
│ 1  2  1 │
│ 2  4  2 │  ÷ 16
│ 1  2  1 │
└         ┘
Sum of weights = 16, so we divide by 16 to maintain brightness. The center pixel has weight 4/16 = 0.25, corners have 1/16 = 0.0625.
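These weights are easy to check numerically (a small sketch using NumPy; the variable names are illustrative):

```python
import numpy as np

# The Gaussian kernel above; dividing by 16 makes the weights sum to 1.
K = np.array([[1, 2, 1],
              [2, 4, 2],
              [1, 2, 1]], dtype=float)
W = K / 16.0

print(W.sum())   # 1.0 -> total brightness is preserved
print(W[1, 1])   # 0.25 -> center pixel weight (4/16)
print(W[0, 0])   # 0.0625 -> corner weight (1/16)
```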
Example: Sobel Edge Detection
Sobel kernels detect edges by computing intensity gradients. Two kernels detect horizontal and vertical edges:
Horizontal (Gx)
┌            ┐
│ -1   0   1 │
│ -2   0   2 │
│ -1   0   1 │
└            ┘
Vertical (Gy)
┌            ┐
│ -1  -2  -1 │
│  0   0   0 │
│  1   2   1 │
└            ┘
Gradient magnitude: G = √(Gx² + Gy²)
Gradient direction: θ = arctan(Gy / Gx)
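Here is a sketch of both kernels applied to a single 3×3 patch containing a vertical edge (`np.arctan2` stands in for arctan so the zero-Gx case is handled; the variable names are illustrative):

```python
import numpy as np

Gx_kernel = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
Gy_kernel = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)

# 3x3 patch with a vertical edge: dark left half, bright right column.
patch = np.array([[0, 0, 255],
                  [0, 0, 255],
                  [0, 0, 255]], dtype=float)

gx = np.sum(patch * Gx_kernel)  # strong left-right gradient
gy = np.sum(patch * Gy_kernel)  # no top-bottom gradient
magnitude = np.hypot(gx, gy)    # G = sqrt(gx^2 + gy^2)
theta = np.arctan2(gy, gx)      # arctan2 avoids dividing by a zero gx

print(gx, gy)      # 1020.0 0.0
print(magnitude)   # 1020.0
print(theta)       # 0.0 -> gradient points along +x (a vertical edge)
```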
Filter Types Explained
1. Smoothing Filters (Low-Pass)
Purpose: Reduce noise and detail by averaging neighboring pixels. Called "low-pass" because they allow low-frequency components (smooth areas) to pass while blocking high frequencies (edges, noise).
Use Cases: Preprocessing for edge detection, removing sensor noise, creating depth-of-field effects, reducing compression artifacts.
Box Blur (Simple Average)
┌         ┐
│ 1  1  1 │
│ 1  1  1 │  ÷ 9
│ 1  1  1 │
└         ┘
Equal weights - fast but less natural
Gaussian Blur
┌         ┐
│ 1  2  1 │
│ 2  4  2 │  ÷ 16
│ 1  2  1 │
└         ┘
Weighted by distance - more natural
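To see the difference on actual numbers, here is what each kernel does to a single bright noise pixel sitting in a flat gray region (an illustrative sketch):

```python
import numpy as np

box = np.ones((3, 3)) / 9.0
gaussian = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 16.0

# Flat gray patch with one "salt" noise pixel at the center.
noisy = np.full((3, 3), 100.0)
noisy[1, 1] = 255.0

print(np.sum(noisy * box))       # ~117.2: box averages the spike away harder
print(np.sum(noisy * gaussian))  # 138.75: Gaussian keeps more of the center
```

The box filter suppresses the outlier more strongly, while the Gaussian's center-heavy weighting preserves more of the original detail, which is the "more natural" look noted above.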
2. Sharpening Filters (High-Pass)
Purpose: Enhance edges and fine details by amplifying high-frequency components. Works by adding a scaled version of the Laplacian (second derivative) to the original image.
Use Cases: Enhancing blurry photos, improving OCR accuracy, preparing images for printing, emphasizing textures.
Standard Sharpen
┌            ┐
│  0  -1   0 │
│ -1   5  -1 │  ÷ 1
│  0  -1   0 │
└            ┘
Center: 5 (original + enhancement), Neighbors: -1 (subtract surrounding blur)
Formula: Sharpened = Original + α × (Original - Blurred)
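With α = 1 and the Laplacian playing the role of the edge term, the formula collapses into the single kernel shown above (a small sketch; the array names are illustrative):

```python
import numpy as np

identity = np.array([[0, 0, 0],
                     [0, 1, 0],
                     [0, 0, 0]])
laplacian = np.array([[0, -1, 0],
                      [-1, 4, -1],
                      [0, -1, 0]])

alpha = 1
sharpen = identity + alpha * laplacian
print(sharpen)  # -> the "Standard Sharpen" kernel: center 5, neighbors -1
```

Larger α values exaggerate the edge term, which sharpens more aggressively but also amplifies noise.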
3. Edge Detection Filters
Purpose: Identify boundaries between regions by detecting rapid intensity changes. Essential for object recognition, image segmentation, and feature extraction.
Use Cases: Object detection preprocessing, lane detection in autonomous vehicles, medical image analysis, QR code scanning.
Sobel X
┌            ┐
│ -1   0   1 │
│ -2   0   2 │
│ -1   0   1 │
└            ┘
Vertical edges (left-right gradient)
Laplacian
┌            ┐
│  0  -1   0 │
│ -1   4  -1 │
│  0  -1   0 │
└            ┘
All directions (2nd derivative)
Prewitt
┌            ┐
│ -1   0   1 │
│ -1   0   1 │
│ -1   0   1 │
└            ┘
Similar to Sobel, equal weights
Why does Sobel use [-2, 0, 2]?
The center row has double weight because it's directly aligned with the edge direction. This provides better noise suppression while maintaining edge detection accuracy - it's essentially a combination of gradient and smoothing.
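That combination can be made explicit: the Sobel X kernel is the outer product of a 1D smoothing filter and a 1D derivative filter (an illustrative sketch):

```python
import numpy as np

smooth = np.array([1, 2, 1])   # vertical smoothing (weighted average)
deriv = np.array([-1, 0, 1])   # horizontal central difference

sobel_x = np.outer(smooth, deriv)
print(sobel_x)
# rows [-1 0 1], [-2 0 2], [-1 0 1]: smoothing down, differentiating across
```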
4. Emboss Filter
Purpose: Creates a 3D raised relief effect by emphasizing edges in a specific direction. Simulates light coming from one corner.
Use Cases: Artistic effects, watermarking, texture analysis, visualizing surface topology.
Emboss Kernel
┌            ┐
│ -2  -1   0 │
│ -1   1   1 │
│  0   1   2 │
└            ┘
Light from bottom-right, shadows top-left
How it works: Negative values on one side, positive on the opposite. The difference creates the illusion of depth. Often combined with a gray offset (128) to center values.
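A quick numeric check (illustrative): this kernel's weights sum to 1, so a flat patch keeps its value, while an intensity ramp toward the bottom-right produces a brighter-than-original response that reads as a lit slope:

```python
import numpy as np

emboss = np.array([[-2, -1, 0],
                   [-1,  1, 1],
                   [ 0,  1, 2]], dtype=float)

flat = np.full((3, 3), 100.0)
print(np.sum(flat * emboss))   # 100.0: weights sum to 1, flat areas unchanged

ramp = np.array([[ 0,  5, 10],
                 [ 5, 10, 15],
                 [10, 15, 20]], dtype=float)
print(np.sum(ramp * emboss))   # 70.0: much brighter than the center's 10
```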
5. Advanced Concepts
Separable Filters
Some 2D kernels can be decomposed into two 1D kernels (row and column), making them much faster to compute.
Complexity for an N×N image and an M×M kernel: O(N²M²) → O(2MN²)
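For example, the Gaussian kernel above separates into [1, 2, 1] applied along rows and then along columns; both routes give the same answer (a sketch with illustrative names):

```python
import numpy as np

g1 = np.array([1.0, 2.0, 1.0])
K2d = np.outer(g1, g1)   # [[1,2,1],[2,4,2],[1,2,1]]: the 2D Gaussian kernel

patch = np.arange(9, dtype=float).reshape(3, 3)

full_2d = np.sum(patch * K2d) / 16.0   # one 2D pass: 9 multiplies
rows = patch @ g1                      # 1D pass along each row: 3 each
two_pass = (g1 @ rows) / 16.0          # 1D pass over the row results
print(full_2d, two_pass)               # 4.0 4.0 -> identical results
```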
Padding Strategies
How to handle edges where the kernel extends beyond the image:
- Zero padding: Assume 0 outside image
- Replicate: Repeat edge pixels
- Reflect: Mirror the image
- Wrap: Tile the image
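NumPy's `np.pad` implements all four strategies, so the difference is easy to see on a tiny row of pixels:

```python
import numpy as np

row = np.array([1, 2, 3])

print(np.pad(row, 2, mode="constant"))  # zero:      [0 0 1 2 3 0 0]
print(np.pad(row, 2, mode="edge"))      # replicate: [1 1 1 2 3 3 3]
print(np.pad(row, 2, mode="reflect"))   # reflect:   [3 2 1 2 3 2 1]
print(np.pad(row, 2, mode="wrap"))      # wrap:      [2 3 1 2 3 1 2]
```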
Kernel Size Trade-offs
Larger kernels (5×5, 7×7) provide:
- ✅ Stronger effects (more blur, better noise reduction)
- ✅ Wider context for feature detection
- ❌ Higher computational cost (O(K²) per pixel)
- ❌ More edge shrinkage
In CNNs
Deep learning networks learn kernel values through backpropagation:
- Early layers: Edge/texture detectors (like manual filters)
- Middle layers: Shape/pattern detectors
- Deep layers: High-level features (faces, objects)