Retinex and Color Constancy
Takeaway
Retinex explains how we perceive stable surface colors under changing illumination: instead of reading absolute pixel values, the visual system compares lightness ratios across edges and spatial scales, which cancels the illuminant.
The problem (before → after)
- Before: Raw pixel values confound illumination with surface reflectance.
- After: Spatial comparisons and log-domain processing separate illumination (slowly varying) from reflectance (edges/ratios).
Mental model first
Like adjusting your camera’s white balance by looking for a known gray card; the visual system uses local comparisons to infer and remove the lighting tint.
Just-in-time concepts
- Image formation: I(x) = L(x) R(x) with illumination L and reflectance R.
- Log domain: log I = log L + log R; gradients suppress the slowly varying L.
- Multiscale processing and edge-preserving filters: smooth the illumination estimate without blurring reflectance edges.
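The first two concepts can be seen in a tiny 1-D sketch (hypothetical scanline values; NumPy assumed): a smooth illumination ramp multiplied by a piecewise-constant reflectance, separated by moving to the log domain and differentiating.

```python
import numpy as np

# Hypothetical 1-D "scanline": smooth illumination ramp times a
# piecewise-constant reflectance with one edge between two patches.
x = np.linspace(0, 1, 100)
L = 0.5 + 0.5 * x                   # slowly varying illumination
R = np.where(x < 0.5, 0.2, 0.8)     # reflectance step (a surface edge)
I = L * R                           # observed image: product of the two

# Log domain turns the product into a sum...
log_I = np.log(I)
# ...and differentiation suppresses the slowly varying illumination term,
# leaving a large gradient only at the reflectance edge.
grad = np.diff(log_I)
edge_at = int(np.argmax(np.abs(grad)))
print(edge_at)  # index of the reflectance step
```

The gradient at the edge is about log(0.8/0.2) ≈ 1.39, while each step of the illumination ramp contributes only ~0.007, which is why thresholding small log-gradients isolates reflectance.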
First-pass solution
Compute log-domain gradients or local ratios; discard small gradients as illumination; integrate the rest under constraints to estimate R; anchor the absolute illuminant with a gray-world or white-patch assumption.
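The two classic anchoring assumptions can be sketched in a few lines (a minimal sketch, assuming NumPy and an H×W×3 float image; the scene values below are made up for illustration):

```python
import numpy as np

def gray_world(img):
    """Gray-world: assume the average reflectance is achromatic, so the
    per-channel mean estimates the illuminant tint; divide it out."""
    illum = img.reshape(-1, 3).mean(axis=0)
    illum = illum / illum.mean()     # preserve overall brightness
    return img / illum

def white_patch(img):
    """White-patch: assume the brightest patch is white, so the
    per-channel maximum estimates the illuminant."""
    illum = img.reshape(-1, 3).max(axis=0)
    return img / illum

# Hypothetical scene: random reflectances under a reddish illuminant.
rng = np.random.default_rng(0)
R = rng.uniform(0.1, 1.0, size=(8, 8, 3))   # reflectances
L = np.array([1.2, 1.0, 0.8])               # illuminant tint (red-heavy)
I = R * L                                   # observed image

corrected = gray_world(I)
# After correction the channel means coincide: the tint is divided out.
print(corrected.reshape(-1, 3).mean(axis=0))
```

Gray-world equalizes the channel means; white-patch instead maps the brightest value in each channel to 1. Both recover the tint only up to a global scale, which is the "anchoring" the text refers to.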
Iterative refinement
- Intrinsic image decomposition; learning-based constancy.
- Scene priors and temporal cues.
- Color appearance models (e.g., CIECAM02) that go beyond constancy to predict perceived hue, chroma, and lightness.
Principles, not prescriptions
- Ratios and edges cancel multiplicative illumination; recovering absolute colors additionally requires an anchor (a reference white or scene statistics).
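The invariance half of this principle fits in three lines (hypothetical reflectance and illumination values): the ratio across an edge is the same under any locally shared illumination level.

```python
# Two adjacent patches with reflectances 0.2 and 0.6, viewed under two
# hypothetical illumination levels. Pixel values change with the light;
# the ratio across the edge does not.
R1, R2 = 0.2, 0.6
ratios = [(L * R2) / (L * R1) for L in (0.3, 1.0)]
print(ratios)  # both ratios ≈ 3.0, independent of L
```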
Common pitfalls
- Over-smoothing the illumination estimate erodes genuine reflectance edges and produces halo artifacts around them.
- Assumptions (gray-world) fail in biased scenes.
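The gray-world failure mode is easy to reproduce (a hypothetical demo, assuming NumPy): in a genuinely green-biased scene under neutral light, the estimator blames the bias on the illuminant and wrongly suppresses green.

```python
import numpy as np

# A scene whose *reflectances* are green-biased, lit by neutral light.
rng = np.random.default_rng(1)
R = rng.uniform(0.1, 1.0, size=(8, 8, 3))
R[..., 1] *= 2.0                          # the scene really is greenish
I = R * np.array([1.0, 1.0, 1.0])         # neutral illuminant: no cast to fix

# Gray-world illuminant estimate: per-channel mean, brightness-normalized.
illum_est = I.reshape(-1, 3).mean(axis=0)
illum_est = illum_est / illum_est.mean()
corrected = I / illum_est
# The green channel estimate exceeds 1 despite the neutral light, so the
# "correction" tints the scene toward magenta.
print(illum_est)
```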
Connections and contrasts
- See also: [/blog/perceptual-losses], intrinsic images literature.
Quick checks
- Why logs? — Turn products into sums; separate scales.
- Why gradients? — Remove slowly varying illumination.
- What if no gray? — Use statistics or learned anchors.
Further reading
- Land & McCann, “Lightness and Retinex Theory” (1971, source above); modern color-constancy surveys.