Card Sort Method
Card sorting is a participatory IA research method in which 15–30 participants group labelled content cards into categories — open card sorts (participants name the categories) reveal mental models for greenfield IA, while closed card sorts (categories pre-supplied) validate or refine an existing taxonomy.
$ prime install @community/fact-card-sort-method

Projection
Always in _index.xml · the agent never has to ask for this.
CardSortMethod [fact] v1.0.0
Card sorting is the standard IA validation method: participants group labelled cards (representing content / features) into categories that make sense to them, revealing the user's mental model rather than the team's internal taxonomy.
Card sorting is a participatory IA research method in which 15–30 participants group labelled content cards into categories — open card sorts (participants name the categories) reveal mental models for greenfield IA, while closed card sorts (categories pre-supplied) validate or refine an existing taxonomy.
Loaded when retrieval picks the atom as adjacent / supporting.
CardSortMethod [fact] v1.0.0
Card sorting is the standard IA validation method: participants group labelled cards (representing content / features) into categories that make sense to them, revealing the user's mental model rather than the team's internal taxonomy.
Card sorting is a participatory IA research method in which 15–30 participants group labelled content cards into categories — open card sorts (participants name the categories) reveal mental models for greenfield IA, while closed card sorts (categories pre-supplied) validate or refine an existing taxonomy.
Confidence
strong
Applies To
- navigation IA design (sidebar, top nav, mega menu)
- settings / preferences taxonomy
- content categorisation in CMS / docs
- feature naming and grouping
Quantitative
- Sample Size Rule Of Thumb: 15 participants yields ~0.90 correlation with results from much larger samples (Tullis & Wood 2004; Spencer)
- Open Vs Closed: open for discovery (greenfield); closed for validation (existing IA)
- Typical Card Count: 30–60 cards per session
Counter Conditions
- Card sorts reveal categorisation, not navigation flow — pair with tree testing or first-click testing.
- Domain-expert participants will produce different mental models from novices — sample to your real audience.
- Card sort findings degrade quickly when the underlying content set changes — re-run after major content additions.
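The raw output of an open sort is usually reduced to a pairwise similarity (co-occurrence) matrix before any clustering or dendrogram step. A minimal sketch of that reduction — the card names, groupings, and participant data below are entirely hypothetical:

```python
from itertools import combinations

def similarity_matrix(sorts, cards):
    """Pairwise co-occurrence: the fraction of participants who placed
    both cards of a pair into the same category."""
    n = len(sorts)
    sim = {pair: 0 for pair in combinations(sorted(cards), 2)}
    for groups in sorts:  # one participant's sort: a list of card groups
        for group in groups:
            for pair in combinations(sorted(group), 2):
                sim[pair] += 1
    return {pair: count / n for pair, count in sim.items()}

# Three hypothetical participants sorting four settings cards
sorts = [
    [{"Profile", "Password"}, {"Billing", "Invoices"}],
    [{"Profile", "Password", "Billing"}, {"Invoices"}],
    [{"Profile", "Password"}, {"Billing", "Invoices"}],
]
cards = {"Profile", "Password", "Billing", "Invoices"}
sim = similarity_matrix(sorts, cards)
print(sim[("Password", "Profile")])  # 1.0 — all three grouped them together
print(sim[("Billing", "Invoices")])  # 0.666… — two of three did
```

High-agreement pairs (here Password/Profile) are candidates for the same navigation branch; low-agreement pairs flag labels the team should rename or split.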
Loaded when retrieval picks the atom as a focal / direct hit.
CardSortMethod [fact] v1.0.0
Card sorting is the standard IA validation method: participants group labelled cards (representing content / features) into categories that make sense to them, revealing the user's mental model rather than the team's internal taxonomy.
Card sorting is a participatory IA research method in which 15–30 participants group labelled content cards into categories — open card sorts (participants name the categories) reveal mental models for greenfield IA, while closed card sorts (categories pre-supplied) validate or refine an existing taxonomy.
Confidence
strong
Applies To
- navigation IA design (sidebar, top nav, mega menu)
- settings / preferences taxonomy
- content categorisation in CMS / docs
- feature naming and grouping
Quantitative
- Sample Size Rule Of Thumb: 15 participants yields ~0.90 correlation with results from much larger samples (Tullis & Wood 2004; Spencer)
- Open Vs Closed: open for discovery (greenfield); closed for validation (existing IA)
- Typical Card Count: 30–60 cards per session
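For the closed-sort validation case, the usual signal is per-card placement agreement: the share of participants who put each card into the taxonomy's intended category. A minimal sketch, with hypothetical cards and category names:

```python
def placement_agreement(placements, expected):
    """Closed sort: for each card, the fraction of participants who
    placed it in the category the existing taxonomy intends."""
    agreement = {}
    for card, intended in expected.items():
        votes = [p[card] for p in placements if card in p]
        agreement[card] = sum(v == intended for v in votes) / len(votes)
    return agreement

# Four hypothetical participants placing two cards into a fixed taxonomy
placements = [
    {"Invoices": "Billing", "Password": "Security"},
    {"Invoices": "Billing", "Password": "Account"},
    {"Invoices": "Billing", "Password": "Security"},
    {"Invoices": "Account", "Password": "Security"},
]
expected = {"Invoices": "Billing", "Password": "Security"}
print(placement_agreement(placements, expected))
# {'Invoices': 0.75, 'Password': 0.75}
```

Cards that score low against the intended category are the ones whose label or location the existing IA gets wrong.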
Counter Conditions
- Card sorts reveal categorisation, not navigation flow — pair with tree testing or first-click testing.
- Domain-expert participants will produce different mental models from novices — sample to your real audience.
- Card sort findings degrade quickly when the underlying content set changes — re-run after major content additions.
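The sample-size rule of thumb above comes from comparing the similarity matrix of a participant subsample against the full pool and measuring how quickly the correlation stabilises (the Tullis & Wood approach). A minimal sketch of that check on synthetic data — all card names and sorts here are made up for illustration:

```python
import random
from itertools import combinations

def pearson(xs, ys):
    """Pearson correlation between two equal-length vectors."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    if vx == 0 or vy == 0:
        return 0.0  # degenerate (constant) vector; treat as no signal
    return cov / (vx * vy)

def sim_vector(sorts, cards):
    """Each card pair's co-occurrence rate, flattened in a fixed order."""
    vec = []
    for a, b in combinations(sorted(cards), 2):
        hits = sum(any(a in g and b in g for g in groups) for groups in sorts)
        vec.append(hits / len(sorts))
    return vec

def stability(sorts, cards, k, trials=200, seed=0):
    """Mean correlation between k-participant subsamples and the full sample."""
    rng = random.Random(seed)
    full = sim_vector(sorts, cards)
    cors = [pearson(sim_vector(rng.sample(sorts, k), cards), full)
            for _ in range(trials)]
    return sum(cors) / trials

# Eight hypothetical participants, mostly agreeing on a two-group split
cards = {"A", "B", "C", "D"}
sorts = [[{"A", "B"}, {"C", "D"}]] * 6 + [[{"A", "C"}, {"B", "D"}]] * 2
print(stability(sorts, cards, k=4))
```

In real studies the curve produced by sweeping `k` is what flattens out around 15 participants; this sketch only shows the mechanics on toy data.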
Sources
Confidence
strong
Source
- Donna Spencer, 'Card Sorting: Designing Usable Categories' (Rosenfeld Media, 2009)
- Nielsen Norman Group, 'Card Sorting: Uncover Users' Mental Models for Better Information Architecture' (NN/g, 2022)
- Optimal Workshop, 'Treejack' / 'OptimalSort' tooling docs
Source
prime-system/examples/frontend-design/primes/compiled/@community/fact-card-sort-method/atom.yaml