# Prior art
## Eager-injection skills (SKILL.md, Custom Instructions)
The currently dominant pattern: markdown files loaded in full into the system prompt on every turn. Cheap to write and fast to ship, but it breaks at scale. See the bulk-loading problem.
## Retrieval-augmented generation (RAG)
RAG addresses the bulk problem by retrieving content per-turn from a vector store. The cost is structural: vector retrieval ranks by embedded similarity, which is type-blind. A query "how do I make a pan sauce" can return a fact about pan-sauce history, an unrelated rule about deglazing temperatures, or a marketing paragraph — the embedding doesn't know which is the method the agent needs.
Skill Wiki keeps RAG's pull model but adds typed retrieval: the agent asks for kinds, the retriever ranks within kinds, contracts enforce composition.
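The contrast with type-blind retrieval can be sketched in a few lines. This is a minimal illustration, not Skill Wiki's actual retriever: the `Entry` shape, kind names, and `retrieve` signature are assumptions, and the embeddings are toy two-dimensional vectors.

```python
from dataclasses import dataclass

# Hypothetical entry shape; Skill Wiki's real schema may differ.
@dataclass
class Entry:
    id: str
    kind: str              # e.g. "method", "fact", "rule"
    summary: str
    embedding: list[float]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def retrieve(entries: list[Entry], query_vec: list[float],
             kind: str, k: int = 3) -> list[Entry]:
    """Typed retrieval: filter to the requested kind first,
    then rank by embedding similarity only within that kind."""
    candidates = [e for e in entries if e.kind == kind]
    candidates.sort(key=lambda e: cosine(e.embedding, query_vec),
                    reverse=True)
    return candidates[:k]
```

A plain vector store would rank the history fact and the method against each other; here the kind filter removes that failure mode before similarity is ever consulted.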
## Knowledge graphs (Neo4j, Wikidata)
Classic knowledge graphs already separate index from content and use typed edges. The gap is tooling for LLM agents: KG schemas are designed for SPARQL queries and human curation, not for an agent that needs a 30-token summary on demand or a 200-token core. Skill Wiki borrows the typed-edge idea and adds the projection layer.
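The projection idea can be sketched as follows. The field names, the `project` helper, and the exact token budgets are illustrative assumptions, not Skill Wiki's API:

```python
from dataclasses import dataclass

# Illustrative node shape; budgets (~30 / ~200 tokens) come from the
# prose above, but the real field names are assumptions.
@dataclass
class Node:
    id: str
    summary: str  # ~30-token view for the always-loaded index
    core: str     # ~200-token view served on demand
    body: str     # full content, fetched only when needed

def project(node: Node, level: str) -> str:
    """Return the view of a node at the requested detail level."""
    views = {"summary": node.summary, "core": node.core, "body": node.body}
    return views[level]
```

A SPARQL endpoint can answer the same question, but only after a query is written and its results post-processed into prompt-sized text; the projection layer makes the size-appropriate view the primitive operation.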
## Voyager-style skill retrieval
Voyager (NeurIPS 2023) used a skill library plus retrieval to grow agent capabilities incrementally. Its skill units were code snippets; retrieval was vector similarity. Skill Wiki generalises the principle to any kind of knowledge, not just executable skills, and adds static type checking to guard the agent.
## Comparison table
| Approach | Index always in context | Bodies on demand | Typed edges | Compile-time check |
|---|---|---|---|---|
| SKILL.md | — | No (loaded eagerly) | — | — |
| RAG (vector) | — | Yes | — | — |
| Knowledge graph | Partially | Via SPARQL | Yes | Schema-only |
| Voyager skills | — | Vector retrieval | — | Compile (code) |
| Skill Wiki | Yes (~3 KB) | Yes, by id | Yes (14 verbs) | L1 + L3 + opt L2 |
See the long-form comparison for the quantitative numbers.