Using llms.txt With MCP: Turn Your Docs Into an AI Knowledge Base
Key Takeaways
- Model Context Protocol (MCP) lets AI assistants like Claude pull live knowledge from external sources — including your llms.txt file.
- Connecting an llms-full.txt file to an MCP server gives developers instant, accurate answers about your product without leaving their editor.
- Setup takes under 10 minutes using npx — no servers to manage.
- Any site with a public llms-full.txt can be connected as an MCP knowledge source today.
The Model Context Protocol (MCP), introduced by Anthropic in late 2024, has quietly become one of the most practical advances in developer tooling. According to the MCP GitHub repository, the protocol reached over 1,000 community-built servers within its first six months (Anthropic MCP, 2025). One of the most useful patterns emerging from this ecosystem: using your llms-full.txt file as a live knowledge source that AI assistants can query in real time.
What Is Model Context Protocol (MCP)?
MCP is an open protocol that lets AI assistants connect to external tools and data sources in a standardized way. Think of it like a USB-C standard for AI integrations — instead of every tool building custom connectors, MCP provides a single interface that any compliant AI can use.
With MCP, an AI assistant like Claude (inside Claude Desktop or Cursor) can:
- Read files from your filesystem
- Query databases
- Call external APIs
- Fetch and search documentation from a URL — which is where llms.txt comes in
Why llms-full.txt Is Perfect for MCP
Your llms-full.txt file is a single, Markdown-formatted document containing the full text of your most important pages. It’s designed to be machine-readable and self-contained — exactly what an MCP server needs to answer questions about your product.
When you connect an llms-full.txt to an MCP server, you’re essentially giving an AI assistant a curated, always-up-to-date knowledge base about your product. No embeddings pipeline, no vector database, no RAG infrastructure — just a URL and a few lines of config.
The key difference between using llms.txt for SEO and using it with MCP:
| Use Case | File | How it’s used |
|---|---|---|
| SEO / AI discovery | /llms.txt | Crawled passively by AI systems |
| MCP integration | /llms-full.txt | Fetched actively by AI assistants on demand |
Both files are worth having. The /llms.txt is the index; /llms-full.txt is the full content.
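To make the distinction concrete, here is an illustrative sketch of an index-style /llms.txt for a hypothetical site (the project name and URLs are made up, not from any real spec example):

```markdown
# Acme UI

> Component library for building accessible web apps.

## Docs

- [Getting Started](https://acme.dev/docs/start.md): Install and render a first component
- [Button](https://acme.dev/docs/button.md): Props and usage examples
```

The matching /llms-full.txt would carry the same headings but inline the complete text of each linked page, so an assistant can answer questions without following any links.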
Setting Up an MCP Server With Your llms-full.txt
Let’s look at a real example. The Taiga UI component library exposes its full documentation as an llms-full.txt and provides an MCP server that any developer can connect to:
```json
{
  "mcpServers": {
    "taiga-ui": {
      "command": "npx",
      "args": [
        "@taiga-ui/mcp@latest",
        "--source-url=https://taiga-ui.dev/llms-full.txt"
      ]
    }
  }
}
```
This config tells Claude (or any MCP-compatible AI) to spin up a local MCP server via npx that fetches and searches the Taiga UI docs. A developer can then ask questions like “how do I use the TuiInput component?” and get accurate, sourced answers directly from the library’s own documentation.
You can connect to a /next version of the docs the same way by swapping the URL:
```json
{
  "mcpServers": {
    "taiga-ui-next": {
      "command": "npx",
      "args": [
        "@taiga-ui/mcp@latest",
        "--source-url=https://taiga-ui.dev/next/llms-full.txt"
      ]
    }
  }
}
```
How to Add This Config to Your AI Tool
Claude Desktop
Open your Claude Desktop config file:
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%\Claude\claude_desktop_config.json
Add the mcpServers block. If the file already has other servers, just add your new entry to the existing object:
```json
{
  "mcpServers": {
    "my-product-docs": {
      "command": "npx",
      "args": [
        "@taiga-ui/mcp@latest",
        "--source-url=https://yourdomain.com/llms-full.txt"
      ]
    }
  }
}
```
Restart Claude Desktop. You’ll see a tools icon appear in the chat input — that confirms MCP is connected.
Cursor
In Cursor, go to Settings > MCP and add the server config. Cursor supports the same MCP JSON format as Claude Desktop.
VS Code (with Copilot or Claude extension)
Add the config to your workspace’s .vscode/mcp.json file:
```json
{
  "servers": {
    "my-product-docs": {
      "command": "npx",
      "args": [
        "@taiga-ui/mcp@latest",
        "--source-url=https://yourdomain.com/llms-full.txt"
      ]
    }
  }
}
```
Building Your Own MCP Server for Your llms-full.txt
Don’t want to rely on a third-party MCP package? You can build a minimal one. The pattern is simple: fetch the llms-full.txt URL, split it into sections by Markdown headings, and expose a search tool that filters sections by keyword.
Here’s the minimal structure using the official MCP TypeScript SDK:
```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "my-docs", version: "1.0.0" });

// Expose a single "search_docs" tool: fetch the llms-full.txt, split it
// into sections at Markdown headings, and return every section that
// contains the query string.
server.tool(
  "search_docs",
  { query: z.string() },
  async ({ query }) => {
    const res = await fetch("https://yourdomain.com/llms-full.txt");
    const text = await res.text();
    // Split on lines starting with "##", "###", etc.
    const sections = text.split(/^##+ /m);
    const matches = sections.filter((s) =>
      s.toLowerCase().includes(query.toLowerCase())
    );
    return { content: [{ type: "text", text: matches.join("\n\n---\n\n") }] };
  }
);

// Talk to the host (Claude Desktop, Cursor, etc.) over stdio.
const transport = new StdioServerTransport();
await server.connect(transport);
```
This is a working starting point, not a production-ready server. Real implementations add caching (the file rarely changes), pagination for large files, and fuzzy matching.
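If you want to sanity-check the heading-split-and-filter step on its own, here is a minimal standalone sketch; the sample document and the searchSections helper are illustrative, not part of the MCP SDK:

```typescript
// Split a Markdown document into sections at each "##"-level (or deeper)
// heading, then keep the sections whose text contains the query
// (case-insensitive). This mirrors the search logic in the server above.
function searchSections(markdown: string, query: string): string[] {
  const sections = markdown.split(/^##+ /m);
  const q = query.toLowerCase();
  return sections.filter((s) => s.toLowerCase().includes(q));
}

const doc = [
  "# Acme UI",
  "A hypothetical component library.",
  "## Button",
  "Accepts `size` and `variant` props.",
  "## Input",
  "A text field with validation support.",
].join("\n");

console.log(searchSections(doc, "variant").length); // → 1
```

Running a few queries like this against your real llms-full.txt is a quick way to check that your headings split cleanly before wiring the tool into an assistant.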
What This Enables for Your Users
When your documentation is connected via MCP, developers using AI coding assistants get:
- Accurate answers grounded in your actual docs — no hallucinated API methods
- Real-time updates — every time they ask, the MCP server fetches the latest version of your llms-full.txt
- Zero-friction discovery — they stay in their editor; no browser tab switching needed
- Context-aware code generation — the AI can generate code snippets using the correct syntax from your library
From what we’ve seen testing this pattern with multiple libraries, the biggest win is eliminating “hallucinated” API answers. When an AI is grounded in your actual llms-full.txt, it stops inventing method signatures that don’t exist.
Does Your Site Need an llms-full.txt for This to Work?
Yes. The MCP pattern requires a file with the full content of your docs — not just the index. If you only have /llms.txt, you’ll get navigation but not enough detail for an AI to answer specific questions.
To generate an llms-full.txt for your site automatically, use LLMGenerator. It crawls your site and produces both files in one step.
FAQ
Does my llms-full.txt need to be publicly accessible? Yes, if you’re using a URL-based MCP server. For private docs, you can pass a local file path instead of a URL in the MCP args.
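As an illustrative sketch only — whether a given MCP package accepts a local file path for its source argument depends on that package, so check its docs — the config might look like:

```json
{
  "mcpServers": {
    "my-private-docs": {
      "command": "npx",
      "args": [
        "@taiga-ui/mcp@latest",
        "--source-url=./internal-docs/llms-full.txt"
      ]
    }
  }
}
```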
How large can my llms-full.txt be? The practical limit depends on the AI’s context window. Files under 200KB work reliably with most models. For larger documentation sets, consider splitting by section and running multiple MCP servers.
Can I use this pattern with any MCP-compatible AI? Yes. The MCP protocol is open and model-agnostic. Any AI tool that supports MCP (Claude, Cursor, Zed, and a growing list of others) can use the same config.
Do I need to republish my llms-full.txt when docs change? Ideally, yes. Most MCP servers fetch the URL on each request, so your docs just need to be at a stable URL. If you automate regeneration on deploy, your MCP integration stays current automatically.
The combination of llms-full.txt and MCP is one of the most practical ways to make your documentation genuinely useful to developers in 2026. It’s low-maintenance, requires no AI infrastructure on your end, and delivers real value to anyone using an AI coding assistant with your product.
Start by generating your llms-full.txt with LLMGenerator, then add the MCP config to your Claude Desktop or Cursor setup. The whole thing takes under 15 minutes.