Resizing an image sounds trivial — make it bigger or smaller. But when you scale a 4000×3000 photo down to 400×300, you're not just removing pixels. You're making a decision about what each of those 120,000 destination pixels should look like given 12 million source pixels. That decision is an algorithm, and different algorithms make very different trade-offs.
Why Resizing Is Non-Trivial
A digital image is a fixed grid of pixels. When you resize it, you're mapping one grid onto another of different dimensions. If the grids don't align perfectly — and they almost never do — you need to invent values for pixels that fall between sample points. That process is called interpolation, and the quality, speed, and character of the result depend entirely on how you do it.
Scaling down adds another wrinkle: you're discarding information. Multiple source pixels must contribute to each destination pixel, and how you combine them affects sharpness, artifacts, and noise.
Nearest Neighbor: Fast and Blocky
The simplest algorithm: for each destination pixel, find the nearest source pixel and copy its value exactly.
destination[x][y] = source[round(x * scaleX)][round(y * scaleY)]
No averaging, no interpolation, no blending. Just "which source pixel is closest to where this destination pixel falls, and what color is that source pixel?"
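As a concrete sketch, here is that rule in plain JavaScript for a grayscale image stored as a 2-D array indexed [x][y] (the function name and array layout are illustrative, not from any particular library):

function resizeNearest(src, srcW, srcH, dstW, dstH) {
  const dst = [];
  for (let x = 0; x < dstW; x++) {
    dst[x] = [];
    for (let y = 0; y < dstH; y++) {
      // Map the destination pixel back into source coordinates and round.
      const sx = Math.min(srcW - 1, Math.round(x * srcW / dstW));
      const sy = Math.min(srcH - 1, Math.round(y * srcH / dstH));
      dst[x][y] = src[sx][sy]; // straight copy, no blending
    }
  }
  return dst;
}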
Advantages: extremely fast (one memory lookup per output pixel), completely preserves the original color values, and introduces no blurring.
Disadvantages: the result looks blocky when scaling up, and shows aliasing (jagged diagonal edges, moiré patterns) when scaling down. Fine textures in the source image can disappear entirely if they happen to fall between the sampled points.
Nearest neighbor is deliberately used for pixel art, sprites, and retro-style graphics. When you scale an 8-bit sprite up 4x, you want hard blocky pixels, not a blurry smear. CSS handles this with:
image-rendering: pixelated;
With this property set, browsers use nearest-neighbor scaling, so your pixel art stays crisp.
Bilinear Interpolation: Smooth but Soft
Bilinear interpolation takes the four nearest source pixels around each destination point and blends them based on distance. If a destination pixel falls 30% of the way between its left and right source neighbors and 60% of the way between its top and bottom neighbors, the result is a weighted average of all four neighbors.
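With those numbers the weights work out to 0.7 × 0.4 = 0.28 for the top-left neighbor, 0.3 × 0.4 = 0.12 for the top-right, 0.7 × 0.6 = 0.42 for the bottom-left, and 0.3 × 0.6 = 0.18 for the bottom-right; the four weights always sum to 1.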
For destination pixel at (dx, dy):
const sx = dx * scaleX; // fractional source x
const sy = dy * scaleY; // fractional source y
// Four surrounding source pixels
const tl = source[Math.floor(sx)][Math.floor(sy)];
const tr = source[Math.ceil(sx)][Math.floor(sy)];
const bl = source[Math.floor(sx)][Math.ceil(sy)];
const br = source[Math.ceil(sx)][Math.ceil(sy)];
// Blend weights
const wx = sx - Math.floor(sx); // how far right (0.0 to 1.0)
const wy = sy - Math.floor(sy); // how far down
const result = tl * (1 - wx) * (1 - wy) + tr * wx * (1 - wy)
             + bl * (1 - wx) * wy + br * wx * wy;
Advantages: smooth, continuous results with no blockiness. Fast — only 4 source samples per output pixel, and the math is simple.
Disadvantages: introduces blur. High-frequency detail (sharp edges, fine lines) gets softened because you're averaging surrounding pixels. Upscaled images look smooth but slightly mushy.
Bilinear is the default in most HTML <canvas> operations and CSS image scaling when image-rendering isn't set. It's a good general-purpose choice when speed matters and you can tolerate slight softening.
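If you are scaling in a canvas yourself, the 2D context exposes a quality hint, though what the browser actually does for each setting is implementation-defined:

const ctx = document.querySelector('canvas').getContext('2d');
ctx.imageSmoothingEnabled = true;   // on by default
ctx.imageSmoothingQuality = 'high'; // 'low' | 'medium' | 'high'; a hint, not a guarantee
ctx.drawImage(img, 0, 0, 400, 300); // img is assumed to be an already-loaded image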
Bicubic: Sharper at the Cost of Complexity
Bicubic interpolation extends bilinear's idea to use 16 surrounding pixels (a 4×4 neighborhood) and fits a cubic polynomial through them rather than a linear one. The cubic polynomial better captures local curvature, which means it can preserve edge sharpness better and reduce the soft-blur effect.
For each destination pixel:
Sample the 16 surrounding source pixels (a 4×4 grid)
For each of the 4 rows, fit a cubic in the x direction and evaluate it at the target x (giving 4 intermediate values)
Fit a cubic in the y direction through those 4 intermediate values
Evaluate it at the target y to get the output pixel
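A sketch of that procedure, assuming a Catmull-Rom spline as the cubic (one common choice; real tools differ in their exact coefficients):

// Catmull-Rom cubic through p0..p3, evaluated at t in [0, 1] between p1 and p2.
function cubic(p0, p1, p2, p3, t) {
  return p1 + 0.5 * t * (p2 - p0
       + t * (2 * p0 - 5 * p1 + 4 * p2 - p3
       + t * (3 * (p1 - p2) + p3 - p0)));
}

// Separable bicubic: interpolate each row at the target x, then those results at the target y.
function bicubicSample(n, tx, ty) { // n = 4x4 neighborhood, indexed n[row][col]
  const rowsAtX = n.map(r => cubic(r[0], r[1], r[2], r[3], tx)); // each row at target x
  return cubic(rowsAtX[0], rowsAtX[1], rowsAtX[2], rowsAtX[3], ty); // across rows at target y
}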
Advantages: noticeably sharper than bilinear at the same scale factor, especially for photographs. It is what Photoshop's "Bicubic" resample mode uses, and the baseline for many professional tools.
Disadvantages: slightly slower than bilinear (16 samples vs 4). Can introduce mild ringing — faint halos or dark edges along high-contrast boundaries — because cubic polynomials can overshoot. Photoshop addresses this with "Bicubic Sharper" (tuned for downsampling) and "Bicubic Smoother" (tuned for upsampling) variants.
Lanczos: High Quality, Higher Cost
Lanczos resampling (pronounced LAHN-tsosh, after Hungarian mathematician Cornelius Lanczos) uses a larger neighborhood — typically 6×6 or 8×8 source pixels — and weights them with a windowed sinc function. The sinc function in signal processing is the theoretically optimal way to reconstruct a continuous signal from discrete samples, and Lanczos approximates it within a finite window.
The Lanczos kernel for a 3-lobe version:
L(x) = sinc(x) * sinc(x/3) for |x| < 3
L(x) = 0 otherwise
where sinc(x) = sin(πx) / (πx) for x ≠ 0
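Translated directly into code, the kernel looks like this (just the weight function; a full resampler would also loop over the 6×6 neighborhood and normalize the weights):

function sinc(x) {
  if (x === 0) return 1;
  const px = Math.PI * x;
  return Math.sin(px) / px;
}

// Lanczos-3 kernel: windowed sinc, zero for |x| >= 3.
function lanczos3(x) {
  x = Math.abs(x);
  return x < 3 ? sinc(x) * sinc(x / 3) : 0;
}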
Advantages: produces the sharpest, highest-quality results for downscaling photographs. Preserves fine texture detail that bilinear and bicubic soften away. The algorithm of choice for professional photo processing, video encoding, and print preparation.
Disadvantages: slower, proportional to the kernel radius squared (36–64 source samples per output pixel instead of 4 or 16). More pronounced ringing artifacts around hard edges compared to bicubic. Not ideal for upscaling for the same reason.
The sharp library (used in UtilityKit's image tools) defaults to Lanczos for downsampling:
import sharp from 'sharp';

await sharp('input.jpg')
  .resize(800, 600, {
    fit: 'inside',
    kernel: sharp.kernel.lanczos3 // explicit Lanczos (also sharp's default for reduction)
  })
  .toFile('output.jpg');
Supersampling: Shrink Before You Shrink
For very aggressive downscales (say, a 10× reduction), even Lanczos can struggle with aliasing. The solution is supersampling: you first apply a blur (low-pass filter) to the source image to remove detail that can't be represented at the target resolution, then downsample.
This is what the image-rendering: auto default in browsers is supposed to do, though browser implementations vary. It's also why sharp and ImageMagick let you apply a Gaussian blur before resize when reducing dramatically.
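A minimal two-pass sketch with sharp, so the blur is guaranteed to happen before the downsample (the sigma of 2 is an arbitrary starting value, not a recommendation from sharp's docs):

import sharp from 'sharp';

// Pass 1: low-pass filter the full-resolution source.
const blurred = await sharp('huge.jpg').blur(2).toBuffer();

// Pass 2: aggressively downsample the blurred copy.
await sharp(blurred).resize(400).toFile('thumb.jpg');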
Resampling Artifacts: What Goes Wrong
Aliasing appears as jagged diagonal lines or moiré patterns. It happens when high-frequency detail (sharp diagonals, fine grids) is present in the source but can't be represented at the lower target resolution. The fix is to pre-blur the source or switch to a higher-quality resampling kernel.
Ringing (Gibbs phenomenon) shows up as faint halos around sharp edges — a light halo outside a dark edge, or vice versa. It's the artifact of cubic and sinc-based kernels overshooting at discontinuities. Slightly softening the kernel (using fewer lobes, say Lanczos-2 instead of Lanczos-3, or a gentler windowing function) reduces it.
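With sharp, the practical lever is the kernel option: the 2-lobe lanczos2 kernel trades a little sharpness for less ringing than the default 3-lobe lanczos3. For example:

import sharp from 'sharp';

await sharp('input.jpg')
  .resize(800, 600, { kernel: sharp.kernel.lanczos2 }) // gentler 2-lobe kernel: less ringing, slightly softer
  .toFile('output.jpg');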
Blur is the bilinear artifact — the averaging of surrounding pixels washes out fine detail. Unavoidable in bilinear; reduced but not eliminated in bicubic; minimal in Lanczos.
Practical Advice for Web Images
For serving images on the web, a few rules cover most cases:
- Photos being downsized: Lanczos (sharp default) gives the best quality. Bilinear is fine if server performance is a bottleneck.
- Icons and pixel art being scaled up: nearest neighbor to preserve hard edges.
- Responsive images: let the browser handle selection via srcset; provide the right sizes rather than relying on CSS scaling.
- CSS scaling at runtime: for small scaling ratios (< 2×), bilinear (the browser default) is fine. For larger ratios, consider generating a properly sized asset instead.
For more on how image data is fundamentally structured and compressed, see How Image Compression Works. And for an overview of which format to choose for your use case, Image Formats Explained covers JPEG, PNG, WebP, and AVIF.
The MDN guide on `image-rendering` documents the CSS property that controls browser resampling if you need to override the default per-element.
Use Image Resize to resize photos server-side via sharp with Lanczos quality, and Image Compressor to reduce the output file size before deploying. Getting both steps right is the difference between fast pages and ones that feel slow despite being "optimized."