# AI Setup Guide
This setup is designed to help AI agents produce better code with your libraries.
The main idea is simple:
- the AI will never have perfectly fresh knowledge about your libraries, frameworks, or setup
- if we do nothing, the AI will guess from training data
- guessing is exactly what causes wrong APIs, outdated patterns, bad setup, and code that does not match your project
So instead of relying only on the model's memory, this setup gives the AI a system:
- docs for the human-facing source of truth
- MCP for structured agent-facing package knowledge
- skills for activation, routing, and safe defaults
- `llms.txt` for an AI-readable map of the documentation site
This guide explains what each piece is, why it matters, and how to use it.
## Why This Setup Matters
Large language models do not automatically know the current version of your stack.
Even when the AI "knows" a library, it often knows:
- an older version
- a generic ecosystem pattern
- a similar library, but not yours
- a simplified mental model that misses project-specific setup
This is exactly why fresh documentation matters.
Next.js is a strong example. In the official Next.js guide for AI coding agents, the default generated AGENTS.md tells the agent:
"Your training data is outdated — the docs are the source of truth."
Next also explains that create-next-app can generate AGENTS.md automatically and that the agent should read the version-matched docs bundled inside the installed package.
Source:
That principle applies to any serious library, not only Next:
- the AI will not reliably know your latest package behavior
- the AI will not reliably know your exact setup requirements
- the AI will not reliably know your wrapper typing, conventions, or migration rules
So we need to actively help the AI.
## Skills

### What is a skill?
A skill is a local instruction package for the agent.
In this setup, a skill usually lives in a folder like:

```
skills/codeleap-query/
skills/codeleap-form/
skills/codeleap-store/
skills/codeleap-portals/
```
and contains:

- `SKILL.md` - optional
- `scripts/` - optional
- `references/` - optional
- `evals/`
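As a sketch, a thin `SKILL.md` for one of these folders might look like the following. The frontmatter fields and section names here are assumptions based on common agent-skill conventions, not a spec from this repository:

```markdown
---
name: codeleap-query
description: Use when the prompt involves data fetching, caching, or @codeleap/query.
---

# codeleap-query

## Activation
Activate for prompts about queries, mutations, or server state in this project.

## Workflow
1. Query the MCP first (overview, then search) before writing code.
2. Prefer the package's public entrypoints; never import internal paths.

## Anti-patterns
- Do not hand-roll fetch/cache logic that @codeleap/query already provides.
```

Note how the file stays at the routing level: activation, workflow order, and anti-patterns, not a copy of the docs.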
### Why skills matter
A skill does not try to be the whole package documentation.
Its job is to help the agent answer questions like:
- should this request use this package at all?
- what kind of user prompt should activate this package?
- which MCP query should I run first?
- what are the dangerous coding-time mistakes to avoid?
Without a skill, the AI may:
- fail to realize the package should be used
- choose the wrong package for the job
- skip the project's intended workflow
- import the wrong things or use internal paths
### How to use skills
A good skill should stay thin and practical.
In this setup, skills should usually contain:
- activation rules
- intent mapping
- routing defaults
- coding-time anti-patterns
- instructions to prefer MCP scripts first
A good skill should not become a giant duplicate of the docs.
The pattern we use is:
- docs = full explanation
- MCP = structured package knowledge
- skill = routing layer
## MCP

### What is MCP?
MCP stands for Model Context Protocol.
In this setup, the MCP server gives the AI structured access to package knowledge.
Instead of forcing the model to "remember" everything, MCP lets the agent ask focused questions such as:
- what is this package for?
- how should it be set up?
- when should I use one abstraction versus another?
- what does this symbol do?
- which example matches this task?
### Why MCP matters
MCP is where package knowledge becomes queryable.
That matters because AI coding work is usually not "read all the docs." It is:
- "I need the setup for this package"
- "I need the right abstraction for this prompt"
- "I need the type symbol that controls this behavior"
- "I need an example close to this use case"
MCP makes that practical.
Instead of loading an entire documentation tree into context, the agent can ask targeted questions and get:
- overview
- setup
- focused topics
- runtime symbol guidance
- type symbol guidance
- examples
- search results across all of the above
### How to use MCP
In this setup, MCP should be the agent's first stop for package-specific guidance.
Typical MCP usage looks like:

- `get_package_overview`
- `get_setup_guide`
- `get_docs`
- `get_api_symbol`
- `get_type_symbol`
- `search_package_knowledge`
- `search_examples`
And the rule of thumb is:
- if the prompt is vague, start with search
- if the prompt is about setup, use setup guide
- if the prompt is about one concept, use docs
- if the prompt is about one API, use symbol/type lookup
- if the prompt is "show me a pattern", use examples
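Concretely, an MCP tool invocation is a JSON-RPC `tools/call` request. A minimal sketch of what the agent sends for the "vague prompt, start with search" case (the tool name matches the list above, but the exact argument schema of this server is an assumption):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search_package_knowledge",
    "arguments": { "query": "optimistic updates with @codeleap/query" }
  }
}
```

In practice the agent runtime builds this request for you; the point is that each call is one focused question, not a full docs dump into context.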
MCP should be treated as a structured derivative of the docs site, not a second unrelated source of truth.
## llms.txt

### What is llms.txt?

`llms.txt` is an emerging convention for publishing an AI-readable summary of a website at the root path `/llms.txt`.
The proposal describes it as a way to provide information that helps LLMs use a website at inference time.
Source:
### Why llms.txt matters
A normal docs site is built for humans.
That means it often contains:
- sidebars
- navigation
- repeated layout content
- many pages that are too large in aggregate for a useful context window
llms.txt helps by giving AI systems a curated map of:
- what the site is
- how to understand it
- which pages are most important
That makes it easier for external tools and agents to navigate your docs without guessing.
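The llms.txt proposal uses plain Markdown: an H1 with the site name, a blockquote summary, and H2 sections of links. A minimal sketch (the URLs and descriptions below are placeholders, not real paths from this project):

```markdown
# CodeLeap Libraries

> Documentation for the @codeleap packages: query, form, store, portals, and more.

## Docs

- [Getting started](https://example.com/docs/getting-started): install and setup
- [@codeleap/query](https://example.com/docs/query): data fetching and caching

## Optional

- [Changelog](https://example.com/changelog): release history
```

The "Optional" section is part of the convention: tools with a tight context budget can skip it.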
### What llms.txt is good for
llms.txt is especially useful for:
- public documentation sites
- package docs
- internal docs portals exposed to AI-aware tooling
- reducing ambiguity about where the important documentation lives
It does not replace:
- docs site content
- MCP
- project skills
It complements them.
The relationship is:
- docs site = canonical human explanation
- `llms.txt` = AI-readable map of that site
## Setup
This setup is meant to be practical.
### 1. Install the skills

Use the `skills` npm package.
Current project skills:
| Skill | Package focus |
|---|---|
| codeleap-query | @codeleap/query |
| codeleap-form | @codeleap/form |
| codeleap-store | @codeleap/store |
| codeleap-portals | @codeleap/portals |
| codeleap-modals | @codeleap/modals |
| codeleap-styles | @codeleap/styles |
| codeleap-mobile | @codeleap/mobile |
| codeleap-web | @codeleap/web |
Basic pattern:

```shell
npx skills add <source> --skill <name>
```

If you want the same skill installed for both Claude Code and Codex, target both agents explicitly:

```shell
npx skills add <source> --skill <name> -a claude-code -a codex
```

Examples for this repository:

```shell
npx skills add https://github.com/codeleap-uk/internal-libs-monorepo --skill codeleap-query -a claude-code -a codex
npx skills add https://github.com/codeleap-uk/internal-libs-monorepo --skill codeleap-form -a claude-code -a codex
npx skills add https://github.com/codeleap-uk/internal-libs-monorepo --skill codeleap-store -a claude-code -a codex
npx skills add https://github.com/codeleap-uk/internal-libs-monorepo --skill codeleap-portals -a claude-code -a codex
```

If you want the install to be global instead of project-level, add `-g`:

```shell
npx skills add https://github.com/codeleap-uk/internal-libs-monorepo --skill codeleap-query -a claude-code -a codex -g -y
```

If you want all skills from the repository for both Claude Code and Codex:

```shell
npx skills add https://github.com/codeleap-uk/internal-libs-monorepo --skill '*' -a claude-code -a codex -g -y
```
That is the normal entrypoint for installing project skills such as:

- `codeleap-query`
- `codeleap-form`
- `codeleap-store`
- `codeleap-portals`

The important detail is that the CLI installs to each selected agent's own skill directory. So if you want both Claude Code and Codex to use the same skill, install it for both with `-a claude-code -a codex`.
Once installed, the agent can activate those skills when the prompt matches the package.
### 2. Make sure the skill can use the MCP
You do not need a second manual setup step for this in normal usage.
The intended flow is:
- the skill activates
- the skill scripts call the MCP
- the MCP returns structured package guidance
So in practice, the skill is already the entrypoint and the MCP is the knowledge layer behind it.
### 3. Optionally provide llms.txt

If you want external AI tools to understand your docs site better, expose an `llms.txt` file for the site.
This is especially useful when:
- the docs are public
- the docs have many sections
- you want AI tools to have a curated entrypoint into the documentation
llms.txt does not replace the docs or the MCP.
It simply gives AI systems a better map of the docs site.
## Even With This Setup, The AI Can Still Hallucinate
This setup improves results a lot, but it does not make the AI perfect.
Even with:
- current docs
- MCP
- skills
- `llms.txt`
the AI can still:
- choose the wrong abstraction
- invent an API that does not exist
- misunderstand setup requirements
- mix your package with a similar library pattern
- produce code that looks plausible but is still wrong
Because of that, it is important to keep a feedback loop.
When the AI gets something wrong, document it.
The most useful format is:
- the prompt
- the answer the AI gave
- the project/package context
- the correct answer
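One lightweight way to capture that record is a small YAML entry per failure. The file path and field names below are just a suggested shape, not an existing convention in this repository:

```yaml
# failure-log/query-cache-miss.yaml (hypothetical path)
prompt: "Add optimistic updates to the todo list"
ai_answer: "Hand-rolled fetch with a useState cache"
context: "@codeleap/query, web app"
correct_answer: "Use the package's mutation flow with cache invalidation"
follow_up: "Turn into an eval; tighten codeleap-query activation rules"
```

Keeping these entries in one folder makes it easy to batch them into evals or skill updates later.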
That kind of record helps you:
- turn repeated failures into evals
- improve the skill routing
- improve MCP topics, symbols, or examples
- improve the docs where the explanation is still not clear enough
In practice, this means AI quality does not come only from one good setup.
It also comes from:
- observing mistakes
- writing them down clearly
- using those mistakes to improve the system
## Practical Workaround When The AI Misses The Right Skill
If the AI is going in the wrong direction, one simple workaround is to tell it explicitly which skill to use.
For example:
"Use codeleap-query skill for this.""Use codeleap-form skill for this.""Use codeleap-store skill for this.""Use codeleap-portals skill for this."
This helps when the AI:
- did not activate the correct skill
- chose the wrong package
- started solving the task with generic ecosystem knowledge instead of project guidance
In practice, this kind of prompt is often enough to redirect the agent back to the correct package workflow.
So if the AI is hallucinating or drifting, a good first correction is:
- tell it which skill to use
- then ask again for the task