A standard for websites that work as well for AI agents as they do for humans. Four agent environments. Ten patterns. One reference implementation.
"Agent" is a category, not a species. Each environment has different auth, different discovery, different latency tolerance. Plan for each one independently — no single progressive ladder blocks the others.
navigator.modelContext.registerTool() for reads and search. Page-context tools that auto-register per URL. Tool results stream to the sidebar.
AGENTS.md at root. Programmatic signup. Bearer-token API. JSON errors with Retry-After.
POST /auth/key with username+password. Credentials-file convention. Accept: text/markdown. Magic-login-link endpoint for handoff to the human.
.well-known/mcp/server-card.json (SEP-1649) or the endpoint enumeration at .well-known/mcp (SEP-1960).
setup.md URL for the human.
Ship whichever environments your users actually live in. You don't owe the browser-agent surface to a SaaS whose users all live in Claude Code.
Ordered by impact-per-hour-of-work. Start at the top. Every pattern below is implemented by at least one production site today — most by WikiHub.
Content negotiation + the .md suffix. The single highest-leverage pattern: every page that renders HTML should also respond to Accept: text/markdown with the raw markdown source plus frontmatter. Support the .md suffix as a fallback for clients that can't set headers. Set Vary: Accept and Link: rel=alternate.
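The negotiation logic above can be sketched as one pure function; `negotiate` is a hypothetical helper name, and the header values follow the pattern described in the text.

```python
def negotiate(path: str, accept: str) -> tuple[str, str, dict]:
    """Decide whether a page request gets markdown or HTML.

    Returns (resolved_path, content_type, extra_headers).
    """
    headers = {"Vary": "Accept"}  # caches must key on the Accept header
    if path.endswith(".md"):      # suffix fallback for clients that can't set headers
        return path[:-3], "text/markdown; charset=utf-8", headers
    if "text/markdown" in accept:
        return path, "text/markdown; charset=utf-8", headers
    # HTML responses advertise the markdown alternate
    headers["Link"] = f'<{path}.md>; rel="alternate"; type="text/markdown"'
    return path, "text/html; charset=utf-8", headers
```

A real handler would plug this into its router and render accordingly; the point is that both the header route and the suffix route resolve to the same markdown source.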
AGENTS.md at site root. Plain markdown at /AGENTS.md that tells an agent how to use your site: how to sign up, where the API is, how to auth, how to clone content, where setup.md lives. This format has achieved actual multi-vendor adoption (Codex, Claude Code, Cursor). Don't over-engineer it; it's a humans-read-this file that agents happen to also read.
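A minimal AGENTS.md covering the points listed above might look like this; the endpoints echo the ones this standard describes, and the hostname is illustrative.

```markdown
# AGENTS.md

This site serves markdown for every page and has a JSON API.

## Sign up
POST /api/v1/accounts with {"username": "..."}; the response includes an api_key.

## Auth
Send `Authorization: Bearer <api_key>`. Re-mint a key with POST /auth/key.

## Content
Append `.md` to any page URL, or send `Accept: text/markdown`.

## Setup
Human-facing instructions live at /setup.md.
```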
POST /api/v1/accounts {"username": "..."} returns {user_id, username, api_key}. If you need abuse prevention, offer a hashcash / proof-of-work token as an alternative to CAPTCHA. Agents can compute PoW; they can't read CAPTCHAs. Email is an optional affiliation field, not a gate.
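A hashcash-style gate can be sketched as follows: the server issues a random challenge, the agent brute-forces a nonce whose SHA-256 digest starts with N zero bits, and the server verifies with a single hash. Function names and the difficulty parameter are illustrative, not part of any spec.

```python
import hashlib
import itertools


def verify_pow(challenge: str, nonce: str, bits: int) -> bool:
    """Check that sha256(challenge:nonce) has `bits` leading zero bits."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") >> (256 - bits) == 0


def mint_pow(challenge: str, bits: int) -> str:
    """Client side: brute-force a nonce that satisfies the server's check."""
    for nonce in itertools.count():
        if verify_pow(challenge, str(nonce), bits):
            return str(nonce)
```

Verification costs the server one hash; the work is all on the client, which is exactly the asymmetry that makes it a CAPTCHA substitute agents can actually pass. Real deployments would tune `bits` and bind the challenge to a signup session.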
A one-shot API key is lost the moment a coding agent restarts. Provide POST /auth/key that exchanges username+password for a fresh PAT. On signup, return a client_config block suggesting a canonical path (~/.appname/credentials.json, mode 0600) and shell/Python snippets to read it.
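The credentials-file convention might be implemented like this; `~/.appname` is the placeholder path the text suggests, and the function names are illustrative.

```python
import json
from pathlib import Path

# "appname" is a placeholder; substitute your product's canonical directory
CRED_PATH = Path.home() / ".appname" / "credentials.json"


def save_credentials(creds: dict, path: Path = CRED_PATH) -> None:
    """Write credentials to the canonical path with owner-only permissions."""
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(creds))
    path.chmod(0o600)  # mode 0600, as the signup response suggests


def load_credentials(path: Path = CRED_PATH) -> dict:
    """Read credentials back; an agent calls this after every restart."""
    return json.loads(path.read_text())
```

The same two snippets, in shell and Python form, are what the signup response's client_config block would hand to the agent.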
An agent with a valid PAT can ask for a one-time, short-lived URL that logs the user into the browser without exposing the key in the URL. POST /auth/magic-link {"next": "/page"} returns {login_url, expires_at}. Essential for "the agent did the work — now let me confirm in my browser" flows.
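One way to implement the magic-link token (an assumption for illustration, not any site's actual scheme): HMAC-sign a payload of user, destination, expiry, and nonce, and keep a used-nonce ledger so each link redeems exactly once.

```python
import hashlib
import hmac
import secrets
import time

SECRET = b"server-side-secret"  # illustrative; load from configuration in practice
_used: set = set()              # one-time-use ledger; use a DB or TTL cache in practice


def mint_magic_link(user_id: str, next_path: str, ttl: int = 300) -> dict:
    """Mint a short-lived, single-use login URL for a PAT-holding agent."""
    expires_at = int(time.time()) + ttl
    nonce = secrets.token_urlsafe(8)
    payload = f"{user_id}|{next_path}|{expires_at}|{nonce}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return {"login_url": f"/auth/magic?t={payload}.{sig}", "expires_at": expires_at}


def redeem(token: str):
    """Return the user_id if the token is valid, unexpired, and unused; else None."""
    payload, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    user_id, _next, expires_at, nonce = payload.split("|")
    if not hmac.compare_digest(sig, expected):
        return None
    if time.time() > int(expires_at) or nonce in _used:
        return None
    _used.add(nonce)
    return user_id
```

Because the key material never appears in the URL, the link is safe to open in a browser; the signature plus the single-use nonce is what makes it a handoff rather than a credential leak.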
llms.txt + llms-full.txt. AnswerDotAI-style site index at the root. Useful for credibility, onboarding, and sitemap-style crawls. Be honest: adoption among major answer engines is under 1%. It's not a retrieval signal. Ship it because it's cheap and it lets one-shot agent conversations locate your entry points; don't expect traffic.
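A minimal llms.txt in the AnswerDotAI shape is an H1 title, a blockquote summary, and H2 sections of links; the site name and URLs below are illustrative.

```markdown
# ExampleSite

> One-line description of what the site is and who it serves.

## Docs

- [API reference](https://example.com/api.md): full endpoint list
- [Setup guide](https://example.com/setup.md): human-facing onboarding

## Optional

- [Changelog](https://example.com/changelog.md)
```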
.well-known/mcp/*. Two specs are in flight: SEP-1649 (server-card.json) and SEP-1960 (.well-known/mcp enumeration). Neither has merged into the core MCP spec. Ship both; they're cheap, and Claude Desktop and Cursor already probe for them.
On every page, navigator.modelContext.registerTool({...}) for typed read/search tools that inherit the user's browser session. Today this is Chrome (flag) only; Safari/Firefox are silent. Registering read/search tools is pure win. Writes still want a confirm step in the agent sidebar.
Git is the API. Most of the LLM-wiki ecosystem already works this way: the agent clones your git repo, operates on files directly (read/write/edit), and pushes back. No custom tool API required, just a public git remote and a push token. WikiHub's Curator formalizes this; Karpathy-style wikis informalize it. Document which repo to clone and what to commit to.
X-Agent-Name. Every 4xx and 429 response returns {error, retry_after_seconds, quota_remaining, docs_url}, never an HTML error page. Access-Control-Allow-Origin on all public GET endpoints. Log an optional X-Agent-Name request header (no enforcement, just audit). This is the forward-compatible shell for a future agent-identity standard.
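The error shell can be produced by one helper so no code path ever falls through to an HTML error page; `agent_error` and the default docs_url are illustrative, not a published API.

```python
import json


def agent_error(status: int, message: str, *, retry_after=None,
                quota_remaining=None,
                docs_url="https://example.com/docs/errors"):
    """Build a machine-readable 4xx/429 response: (status, headers, body)."""
    headers = {
        "Content-Type": "application/json",
        "Access-Control-Allow-Origin": "*",  # public GET endpoints stay CORS-open
    }
    body = {"error": message, "docs_url": docs_url}
    if retry_after is not None:
        headers["Retry-After"] = str(retry_after)   # standard header for retry logic
        body["retry_after_seconds"] = retry_after   # mirrored in the JSON body
    if quota_remaining is not None:
        body["quota_remaining"] = quota_remaining
    return status, headers, json.dumps(body)
```

Mirroring Retry-After in both the header and the body means header-blind agents and header-aware HTTP clients both get the backoff signal.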
This standard is written from inside WikiHub — a "GitHub for LLM wikis" that treats coding agents as first-class users. If the standard's shape reflects a specific codebase, that's honest: every pattern below is live and testable.
Click any of these — they're real URLs returning real artifacts. Use curl or your browser; both work.
Pretending otherwise is how standards die. Here's what this one won't promise.
llms.txt will not bring you users. Adoption among the major LLM answer engines is under 1%. Ship it for credibility, implementation discipline, and single-shot agent onboarding, not referral traffic. If your PM asks about ROI, point them at Pattern 01 (content negotiation) instead.
Safari and Firefox are in the W3C working group, not shipping. If your users live in Arc, Edge (Chromium), or the Model Context Tool Inspector, WebMCP is transformative. If they live in Safari, it's a no-op. Plan accordingly.
WebMCP and MCP can technically auto-confirm destructive writes. Don't. For write-heavy tools, put a confirm step in the sidebar or require an explicit "allow this tool to write without asking" toggle. The best-in-class apps treat read as silent and write as sharable-for-review.
RFC 8693 (token exchange) is the formal primitive for scoped delegation, but no major agent host implements it today. Anthropic explicitly rejected third-party OAuth in early 2026. Ship PATs, log an optional X-Agent-Name, keep an actor column nullable in your audit tables. Don't lead with custom identity protocols.
SEP-1649 and SEP-1960 are the two active proposals. Neither has merged. Ship both — they're small files. When one merges, you'll already be compliant. When neither merges, the loss is negligible.
One site can be excellent for headless agents and irrelevant to browser agents — or vice versa. Badges are awarded per environment, not cumulatively.
Example checks: Accept: text/markdown returns clean markdown; .well-known/mcp/* published with tools typed and callable. Self-certification without accountability is marketing. The verifier crawls your URL, probes each pattern, and awards only the badges you've actually earned.