Free tool

Agent Readiness Checker

See how AI agents like ChatGPT, Comet, and Gemini perceive your site

Check your Agent-Ready score

Takes ~5 seconds

Checks 4 agent-UX signals: a <main> landmark, a <nav> landmark, labelled form inputs, and semantic buttons. Based on web.dev's agent UX guidelines.

AI agents like ChatGPT, Comet, and Gemini browse the web differently from search engines — they read the accessibility tree, not just rendered pixels. This tool checks the signals Google's web.dev agent-UX guide flags as critical: semantic landmarks, accessible form labels, and semantic buttons. Get a 0–100 Agent-Ready score with weighted, actionable fixes.

Score how well AI agents can navigate, fill forms, and act on your page.
Spot gaps in landmarks (<main>, <nav>), unlabelled inputs, and div-buttons that agents can't identify as actionable in the accessibility tree.
Get weighted, actionable fixes — see which checks matter most and exactly how to resolve them.

How it works

Get started in 3 simple steps

1

Enter any public URL you want to analyse.

2

The checker fetches the page and inspects 4 agent-UX signals.

3

Review your Agent-Ready score and per-check guidance for each gap.
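To make the four signals concrete, here is a hypothetical page skeleton (invented content, not output from the tool) that would pass every check: landmarks present, every input labelled, and a native button instead of a clickable div.

```html
<body>
  <nav aria-label="Primary">
    <a href="/pricing">Pricing</a>
  </nav>
  <main>
    <h1>Request a demo</h1>
    <form action="/demo" method="post">
      <!-- Every input gets an accessible name via <label for> -->
      <label for="email">Work email</label>
      <input id="email" name="email" type="email">
      <!-- Native <button>, so it is exposed as actionable
           in the accessibility tree -->
      <button type="submit">Request demo</button>
    </form>
  </main>
</body>
```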

Best use cases

Built for teams that take AI visibility seriously

1

Marketing teams future-proofing landing pages for ChatGPT and Comet agent traffic.

2

AEO and SEO teams adding agent-UX to their pre-launch QA checklist.

3

Engineering leads adding agent-readiness to their accessibility QA.

Want continuous monitoring instead of one-off checks?

Start free trial

FAQ

Frequently asked questions

What does Agent-Readiness measure?

It measures how well your page exposes the signals AI agents need to act — semantic HTML landmarks (<main>, <nav>), accessible names on form inputs, and semantic <button>/<a> instead of <div onclick>. These are the factors Google's web.dev agent UX guide flags as the agent equivalent of mobile-friendliness.
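For example (a minimal sketch with an invented field), the difference between an input an agent can act on and one it can't is a single associated label:

```html
<!-- No accessible name: agents see an anonymous textbox
     and can't tell what the field is for -->
<input type="text">

<!-- Accessible name via <label for>: agents see "Email address" -->
<label for="email">Email address</label>
<input id="email" type="email">
```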

How is this different from an AEO score?

AEO measures content discoverability — whether your page is indexable, structured, and citable by answer engines. Agent-Readiness measures whether agents can actually navigate and take action on your page once they arrive. Both matter; this tool focuses on the action side.

How is this different from a regular accessibility audit?

Most accessibility audits flag dozens of WCAG issues at equal weight. Agent-Readiness focuses on the small set of signals AI agents actually depend on (landmarks, labels, semantic buttons) and weights them by agent impact — so the fixes you make move the needle for both AI agents and assistive tech, not just compliance scores.

Why are landmarks like <main> important?

AI agents and screen readers use landmarks to identify primary content, navigation, and other regions. Without <main>, an agent has to guess where the page's content begins, increasing the chance it acts on the header, sidebar, or unrelated boilerplate. Adding <main> takes a single HTML tag and markedly improves agent reliability.
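As an illustrative sketch (placeholder content), that one tag looks like this in a typical page layout:

```html
<body>
  <header><!-- site chrome: logo, account menu --></header>
  <nav><!-- site navigation --></nav>
  <main>
    <!-- Agents and screen readers jump straight here -->
    <h1>Primary page content</h1>
  </main>
  <footer><!-- boilerplate links --></footer>
</body>
```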

What does "non-semantic buttons" mean?

Many sites build buttons with <div onclick="…"> or <span role="button"> for styling reasons. A bare <div onclick> is exposed only as a generic container in the accessibility tree, so agents and screen readers don't see it as actionable; role="button" does announce a button, but it still lacks native keyboard focus and activation unless you wire those up by hand. Replacing these with native <button> or <a href> elements (styled however you like) makes them reliably discoverable and operable for agents.
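A minimal before/after (class and handler names invented for illustration); the native element keeps the same styling hook:

```html
<!-- Before: exposed as a generic container, not a button -->
<div class="btn" onclick="submitForm()">Submit</div>

<!-- After: native button, same styling hook, discoverable
     and keyboard-operable with no extra wiring -->
<button class="btn" type="button" onclick="submitForm()">Submit</button>
```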

Start for free

Turn AI visibility insights into growth

Create your workspace to monitor AI visibility and activate optimization workflows.