Takeaway

Retinex explains how we perceive stable surface colors under changing illumination by comparing reflectance ratios across spatial scales.

The problem (before → after)

  • Before: Raw pixel values confound illumination with surface reflectance.
  • After: Spatial comparisons and log-domain processing separate illumination (slowly varying) from reflectance (edges/ratios).

Mental model first

It is like adjusting your camera’s white balance against a known gray card: the visual system uses local comparisons to infer the lighting tint and remove it.

Just-in-time concepts

  • Image formation: I(x) = L(x) R(x) with illumination L and reflectance R.
  • Log domain: log I = log L + log R; gradients remove slow-varying L.
  • Multiscale processing and edge-preserving filters separate smooth illumination from sharp reflectance edges (see the sketch below).
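
A minimal sketch of the log-domain decomposition, assuming numpy/scipy and a grayscale float image; the function name and the Gaussian illumination estimate are illustrative choices, not the original Land–McCann path algorithm:

```python
# Single-scale Retinex sketch: estimate illumination as a Gaussian blur
# of the log image, then subtract it to approximate log reflectance.
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(image, sigma=30.0, eps=1e-6):
    """Log-reflectance estimate for a grayscale float image in [0, 1]."""
    log_i = np.log(image + eps)            # log I = log L + log R
    log_l = gaussian_filter(log_i, sigma)  # slowly varying estimate of log L
    return log_i - log_l                   # approximately log R
```

Averaging this estimate over several sigmas yields the common multiscale variant; swapping the Gaussian for an edge-preserving filter reduces halos at strong reflectance edges.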

First-pass solution

Compute gradients or ratios in the log domain; re-integrate under smoothness constraints to estimate R; anchor the absolute illumination with gray-world or white-patch assumptions (a gray-world sketch follows).
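
A hedged sketch of gray-world anchoring, assuming an RGB float image in [0, 1]; gray_world_correct is a hypothetical helper name:

```python
# Gray-world anchoring: assume the scene-average reflectance is achromatic,
# so any deviation of the channel means from gray is due to the illuminant.
import numpy as np

def gray_world_correct(image):
    """Rescale an RGB float image so its channel means are equal."""
    means = image.reshape(-1, 3).mean(axis=0)      # per-channel averages
    gain = means.mean() / np.maximum(means, 1e-6)  # gains that equalize the means
    return np.clip(image * gain, 0.0, 1.0)
```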

Iterative refinement

  1. Intrinsic image decomposition and learning-based color constancy.
  2. Scene priors and temporal cues.
  3. Color appearance models (e.g., CIECAM02) beyond constancy.

Principles, not prescriptions

  • Ratios and edges cancel illumination; absolute colors require an anchoring assumption (a toy check follows).
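
A toy numeric check of the ratio principle, with made-up reflectance values:

```python
# The ratio of adjacent patches cancels a shared illuminant;
# the absolute observed values do not.
import numpy as np

reflectance = np.array([0.2, 0.8])   # two adjacent surface patches
for illum in (0.5, 2.0):             # two global light levels
    observed = illum * reflectance
    print(observed, observed[1] / observed[0])  # ratio is 4.0 in both cases
```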

Common pitfalls

  • Over-smoothing removes reflectance edges.
  • Global assumptions like gray-world fail in color-biased scenes (e.g., an image dominated by foliage).

Connections and contrasts

  • See also: [/blog/perceptual-losses], intrinsic images literature.

Quick checks

  1. Why logs? — Turn products into sums, so the two factors can be separated by spatial scale.
  2. Why gradients? — Remove slowly varying illumination.
  3. What if no gray? — Use scene statistics (e.g., white-patch) or learned anchors; a sketch follows.
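
A white-patch sketch for the last check, assuming an RGB float image in [0, 1]; using a high percentile rather than the raw maximum is an illustrative robustness tweak:

```python
# White-patch anchoring: assume the brightest pixels reflect the illuminant,
# and normalize each channel so they map to white.
import numpy as np

def white_patch_correct(image, percentile=99.0):
    """Rescale an RGB float image so its near-maximum maps to white."""
    anchor = np.percentile(image.reshape(-1, 3), percentile, axis=0)
    return np.clip(image / np.maximum(anchor, 1e-6), 0.0, 1.0)
```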

Further reading

  • Land & McCann (source above); modern color constancy surveys.