Vanity Metrics
Declaring experiment success based solely on metrics that do not reflect actual user value — page views, clicks, session duration, impressions — without verifying that user outcomes improved.
$ prime install @community/anti-pattern-vanity-metrics

Projection
Always in _index.xml · the agent never has to ask for this.
VanityMetrics [anti-pattern] v1.0.0
Declaring experiment success based solely on metrics that do not reflect actual user value — page views, clicks, session duration, impressions — without verifying that user outcomes improved.
Loaded when retrieval picks the atom as adjacent / supporting.
VanityMetrics [anti-pattern] v1.0.0
Declaring experiment success based solely on metrics that do not reflect actual user value — page views, clicks, session duration, impressions — without verifying that user outcomes improved.
What It Looks Like
An experiment is declared successful because page views, clicks, or impressions increased, without checking whether task completion, error rate, or retention improved.
Why People Do It
Vanity metrics are easy to instrument, trend upward with traffic, and make results look positive. They satisfy stakeholder requests for 'data' with minimal analytical cost.
Consequence
Teams ship changes that improve surface numbers while user success, retention, or satisfaction decline. Engineering effort is misdirected toward optimizing metrics that don't reflect user outcomes.
Use Instead
Require at least one user-value metric (task completion rate, error rate on key flows, 30-day retention, or activation metric) as a primary success criterion. Vanity metrics may be reported as context but never as the primary pass/fail.
Loaded when retrieval picks the atom as a focal / direct hit.
VanityMetrics [anti-pattern] v1.0.0
Declaring experiment success based solely on metrics that do not reflect actual user value — page views, clicks, session duration, impressions — without verifying that user outcomes improved.
What It Looks Like
An experiment is declared successful because page views, clicks, or impressions increased, without checking whether task completion, error rate, or retention improved.
Why People Do It
Vanity metrics are easy to instrument, trend upward with traffic, and make results look positive. They satisfy stakeholder requests for 'data' with minimal analytical cost.
Consequence
Teams ship changes that improve surface numbers while user success, retention, or satisfaction decline. Engineering effort is misdirected toward optimizing metrics that don't reflect user outcomes.
Use Instead
Require at least one user-value metric (task completion rate, error rate on key flows, 30-day retention, or activation metric) as a primary success criterion. Vanity metrics may be reported as context but never as the primary pass/fail.
Examples
- Adding a modal that forces a click — click count improves 40%, but time-to-task-completion worsens.
- Redesigning nav to add more items — page views per session increase because users are more lost.
- Session duration 'improvement' after removing the fast-path shortcut — users spend more time struggling.
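The "Use Instead" rule above can be sketched as a simple decision function: vanity metrics are carried along as context but can never decide pass/fail on their own. This is an illustrative sketch, not part of the prime itself; the metric names and sign conventions are assumptions.

```python
# Sketch: success criteria where only user-value metrics gate the outcome.
# Metric names are hypothetical; deltas are relative changes vs. control,
# stored so that a positive delta always means improvement (e.g. error_rate
# is sign-flipped before it goes into the dict).

USER_VALUE_METRICS = {"task_completion_rate", "error_rate", "retention_30d"}

def experiment_passes(deltas: dict[str, float]) -> bool:
    """Pass only if at least one user-value metric improved and none regressed.

    Vanity metrics (clicks, page views, ...) may appear in `deltas` but are
    ignored for the pass/fail decision; they are reported as context only.
    """
    user_value = {m: d for m, d in deltas.items() if m in USER_VALUE_METRICS}
    if not user_value:
        # No user-value metric was measured: the experiment is inconclusive,
        # not successful.
        raise ValueError("No user-value metric measured; result is inconclusive.")
    return any(d > 0 for d in user_value.values()) and all(
        d >= 0 for d in user_value.values()
    )

# A 40% click lift cannot rescue a task-completion regression:
print(experiment_passes({"clicks": 0.40, "task_completion_rate": -0.05}))  # False
```

Note that an experiment instrumented only with vanity metrics raises an error rather than passing, which mirrors the rule that such metrics "never" serve as the primary criterion.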
Source
prime-system/examples/frontend-design/primes/compiled/@community/anti-pattern-vanity-metrics/atom.yaml