Approve visual diffs: code review for UI changes

Most teams fail at visual testing not because they can't detect changes, but because they can't decide. Build an approval workflow that scales.

Why approval becomes a bottleneck

Visual testing tools are good at detecting changes. The hard part is deciding what to do about them—and that's a human workflow problem, not a tooling problem.

Too many diffs

Hundreds of changes per PR overwhelm reviewers. Everything gets approved or nothing does.

Unclear ownership

Nobody knows who should approve which changes. Diffs sit unreviewed or get rubber-stamped.

Noisy output

Environmental variance drowns real changes. Reviewers learn to ignore visual tests entirely.

Missing context

Reviewers see pixels changed but don't know why. Without intent, approval is guesswork.

When approval is painful, teams find workarounds: bulk approvals, skipped reviews, disabled tests. At that point, the testing provides no value.

What good approval looks like

An effective approval workflow is sustainable, informative, and fast enough to fit within normal PR review.

Small diff sets

Each PR surfaces a handful of intentional visual changes, not hundreds of incidental ones.

Clear ownership

For each visual test, someone specific knows they're responsible for reviewing changes.

Intent documentation

Visual changes come with context: why this changed, what it should look like, who approved the design.
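One way to make this concrete is to attach a small intent record to every visual change in a PR. The sketch below is illustrative only: the field names and the `is_reviewable` rule are assumptions, not part of any specific tool.

```python
from dataclasses import dataclass

@dataclass
class VisualChangeIntent:
    """Context a reviewer needs to judge a visual diff (fields illustrative)."""
    snapshot: str              # which baseline changed
    reason: str                # why it changed
    spec_url: str              # link to the design spec, Figma file, or issue
    design_approver: str = ""  # who signed off on the design, if anyone

    def is_reviewable(self) -> bool:
        # Without a reason and a spec link, approval is guesswork, not review.
        return bool(self.reason and self.spec_url)
```

A CI step could refuse to surface a diff for approval until its intent record passes `is_reviewable`, nudging authors to supply context up front.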

Fast turnaround

Visual review happens within the PR workflow, not as an afterthought that blocks merges.

Notice that "perfect pixel accuracy" isn't on this list. Good approval is about confidence and intent, not obsessive precision.

Split approvals by scope

Not all visual changes deserve the same level of scrutiny. Match approval rigor to impact:

Design system components

High bar: designer approval required. These are shared contracts that affect the entire product.

Product feature pages

Medium bar: engineering lead approval with designer awareness. Changes should match specs.

Experiments and MVPs

Lower bar: team lead approval, possibly reporting-only mode. Rapid iteration expected.

This tiered approach prevents approval overhead from slowing down experimental work while maintaining rigor where it matters.
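A minimal sketch of the tiering, assuming a path-prefix convention for scoping tests (the prefixes, team names, and modes below are all hypothetical):

```python
# Hypothetical policy map: route visual diffs to approvers by scope.
# Longest-priority match wins; entries are checked in order.
APPROVAL_POLICY = [
    ("src/design-system/", {"approvers": ["design-team"], "mode": "block"}),
    ("src/features/",      {"approvers": ["eng-leads"],   "mode": "block"}),
    ("src/experiments/",   {"approvers": ["team-lead"],   "mode": "report"}),
]

def route_diff(snapshot_path: str) -> dict:
    """Return the approval rule for the first matching scope."""
    for prefix, rule in APPROVAL_POLICY:
        if snapshot_path.startswith(prefix):
            return rule
    # Default: report-only, so unmapped areas never block merges.
    return {"approvers": [], "mode": "report"}
```

The key design choice is the default: anything without explicit ownership falls back to reporting mode rather than blocking, so adding new test areas never silently gates merges.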

Designer-in-the-loop options

For design system and brand-critical UI, engineering approval isn't enough. Engineers can verify that changes are intentional—designers verify they're correct.

See designer-approved visual testing for patterns that bring design context into the review loop without creating bureaucratic overhead.

Policy: when to block merges

Blocking merges on visual approval is powerful but requires confidence. Build up to it:

  • Start with reporting: Surface diffs without blocking. Build team familiarity.
  • Block critical paths first: Checkout, design system components, navigation.
  • Expand gradually: Add blocking as tests prove stable and valuable.
  • Always have escape hatches: Urgent fixes shouldn't be held hostage to visual review.

For CI integration patterns, see visual testing in CI.
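The blocking policy and its escape hatches can be sketched as a single decision function. This is an assumption-laden illustration: the "urgent-fix" label, the mode names, and the function signature are invented for the example, not taken from any CI system.

```python
def should_block_merge(diff_count: int, mode: str,
                       pr_labels: list[str], reviewed: bool) -> bool:
    """Decide whether a PR's visual check should block merge.

    mode: "report" (surface diffs only) or "block" (gate the merge).
    pr_labels: an "urgent-fix" label acts as the escape hatch --
               the change merges now and is reviewed asynchronously.
    """
    if "urgent-fix" in pr_labels:   # escape hatch: never hold urgent fixes
        return False
    if mode == "report":            # reporting mode surfaces, never blocks
        return False
    return diff_count > 0 and not reviewed
```

Starting every test area in "report" mode and flipping stable areas to "block" one at a time mirrors the gradual rollout described above.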

Quick checklist

  • Fix environmental noise before establishing approval workflow
  • Define ownership per test area (who approves what)
  • Require intent documentation with visual changes
  • Set approval SLAs (review within X hours of PR)
  • Build escape hatches for urgent fixes
  • Start with reporting mode, escalate to blocking
  • Review and refine workflow quarterly

Frequently Asked Questions

Who should approve visual diffs?
Someone who understands the visual intent—often a designer for UI components or a tech lead for feature pages. The key is that the approver has context to judge whether changes are correct, not just intentional.

Should designers approve every UI change?
For design system components, yes. For product pages, it depends on impact and velocity needs. Consider tiered approaches where designers approve foundational changes while engineers handle routine updates.

How do we avoid approval fatigue?
Reduce noise first—fix flakiness, limit test scope, improve environmental consistency. When diffs are meaningful and manageable, approval becomes sustainable. Batch approvals across too many diffs signal a process problem.

How do we document intent for UI changes?
Link visual changes to design specs, issues, or Figma files. Include before/after context in PRs. The goal is that any reviewer can understand why this changed and whether it should have.

How do teams handle experiment-driven UI changes?
Either exclude experiments from visual testing, use separate baseline sets per experiment, or run in reporting-only mode for experimental areas. Don't let temporary variations pollute your core baselines.

What's the best way to handle expected diffs?
If a diff is expected, it should come with intent documentation and be approved explicitly. 'Expected' doesn't mean 'auto-approve'—it means the reviewer can quickly confirm the change matches expectations.

How do we handle urgent fixes that need visual approval?
Build escape hatches: designated individuals who can approve urgently, or async approval where changes merge but get reviewed within a time window. Never let the approval process block critical fixes.

Should visual approval block merge?
For stable, meaningful tests: yes, for critical paths. For noisy or experimental areas: no, use reporting mode. Build up to blocking as you prove reliability and value.
