AI Slop Test
The final quality bar for any AI-generated interface is whether a viewer would immediately assume 'AI made this.' A distinctive interface makes someone ask 'how was this made?' — not 'which AI made this?'
$ prime install @anthropic-impeccable/principle-ai-slop-test
Projection
Always in _index.xml · the agent never has to ask for this.
AiSlopTest [principle] v1.0.0
The final quality bar for any AI-generated interface is whether a viewer would immediately assume 'AI made this.' A distinctive interface makes someone ask 'how was this made?' — not 'which AI made this?'
Loaded when retrieval picks the atom as adjacent / supporting.
AiSlopTest [principle] v1.0.0
The final quality bar for any AI-generated interface is whether a viewer would immediately assume 'AI made this.' A distinctive interface makes someone ask 'how was this made?' — not 'which AI made this?'
Implications
- Run this mental test before shipping: if someone could mistake it for every other 2024-2025 AI output, iterate.
- The DON'T rules (no Inter, no cyan-on-dark, no gradient text, no glassmorphism) are literal fingerprints — each one is a tell.
- Diversity across generations is required: the same aesthetic commitment becomes a new fingerprint when repeated 1000 times.
- Bold and wrong is recoverable; safe and generic is already a failure.
Loaded when retrieval picks the atom as a focal / direct hit.
AiSlopTest [principle] v1.0.0
The final quality bar for any AI-generated interface is whether a viewer would immediately assume 'AI made this.' A distinctive interface makes someone ask 'how was this made?' — not 'which AI made this?'
Implications
- Run this mental test before shipping: if someone could mistake it for every other 2024-2025 AI output, iterate.
- The DON'T rules (no Inter, no cyan-on-dark, no gradient text, no glassmorphism) are literal fingerprints — each one is a tell.
- Diversity across generations is required: the same aesthetic commitment becomes a new fingerprint when repeated 1000 times.
- Bold and wrong is recoverable; safe and generic is already a failure.
Sources
Rationale
Generic visual choices — Inter, rounded cards, purple-on-dark, centered hero layouts — are the statistical average of AI training data. Models optimize for safe, inoffensive outputs that score well in RLHF but read as machine-generated at a glance. The AI Slop Test operationalizes this: show the output to someone and say 'AI made this' — if they immediately believe you, the design has failed to establish any distinctive voice. The test is binary: either the design surprises or it doesn't.
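The DON'T rules above are concrete enough to mechanize as a first-pass lint. A minimal sketch in Python: the tell patterns and the `slop_tells` helper are illustrative assumptions, not part of the compiled atom, and a real check would cover far more fingerprints than these four.

```python
import re

# Hypothetical fingerprint patterns drawn from the DON'T rules
# (no Inter, no cyan-on-dark, no gradient text, no glassmorphism).
# The names and regexes are illustrative, not part of the prime.
AI_SLOP_TELLS = {
    "inter_font": re.compile(r"font-family\s*:[^;]*\bInter\b", re.I),
    "gradient_text": re.compile(r"background-clip\s*:\s*text", re.I),
    "glassmorphism": re.compile(r"backdrop-filter\s*:[^;]*blur", re.I),
    "cyan_accent": re.compile(r"#0ff\b|#00ffff\b|\bcyan\b", re.I),
}

def slop_tells(css: str) -> list[str]:
    """Return the names of fingerprint patterns found in a CSS string."""
    return [name for name, pat in AI_SLOP_TELLS.items() if pat.search(css)]
```

A lint like this can only catch the literal fingerprints; the binary test in the rationale (does the design surprise or not) still requires a human look.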
Source
prime-system/examples/frontend-design/primes/compiled/@anthropic-impeccable/principle-ai-slop-test/atom.yaml