# Dimensions

## Five dimensions of agent experience
The AX Score evaluates any product across these five dimensions. Each dimension is scored from 0 to 20, for a maximum total of 100.
| # | Dimension | What it measures | Max |
|---|---|---|---|
| 1 | Discoverability | Can agents find the brand in category queries? | 20 |
| 2 | Navigability | Can agents accurately describe what you do? | 20 |
| 3 | Operability | Can agents complete tasks using the product? | 20 |
| 4 | Recoverability | Do agents self-correct when wrong about you? | 20 |
| 5 | Transparency | Does AI represent your limitations accurately? | 20 |
| | **Total AX Score** | | **100** |
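The scoring arithmetic above can be sketched as a small helper. This is a hypothetical illustration only; the function name and dictionary structure are assumptions, not part of any published AX Score specification.

```python
# Hypothetical sketch of AX Score aggregation: five dimensions,
# each scored 0-20, summing to a 0-100 total.

DIMENSIONS = (
    "discoverability",
    "navigability",
    "operability",
    "recoverability",
    "transparency",
)

def ax_score(scores: dict) -> int:
    """Sum the five dimension scores, validating each is in 0..20."""
    total = 0
    for dim in DIMENSIONS:
        value = scores[dim]
        if not 0 <= value <= 20:
            raise ValueError(f"{dim} must be between 0 and 20, got {value}")
        total += value
    return total

# Example: a product that is easy to find and describe but hard to operate.
example = {
    "discoverability": 16,
    "navigability": 14,
    "operability": 7,
    "recoverability": 11,
    "transparency": 12,
}
# ax_score(example) -> 60
```

The validation step matters because each dimension is capped independently: a product cannot compensate for poor Operability with a Discoverability score above 20.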
### 1. Discoverability

Discoverability measures whether agents can find your product when researching a category. When a user asks an AI for tool recommendations, does your product appear in the answer?

**Strong AX**
- Mentioned prominently in category queries across models.
- Accurately placed in the correct product category.
- Consistent positioning across ChatGPT, Perplexity, Claude, and Gemini.
- Appears in comparison queries alongside direct competitors.
**Weak AX**
- Omitted from category listings entirely.
- Confused with a competitor by one or more models.
- Appears in some models but invisible in others.
- Associated with outdated or incorrect category terms.
| Score Band | Label | What it means |
|---|---|---|
| 16 to 20 | Excellent | Found accurately across all major AI platforms |
| 11 to 15 | Good | Found reliably, some positioning gaps remain |
| 6 to 10 | Fair | Found inconsistently across models |
| 0 to 5 | Poor | Not found in AI category queries |
### 2. Navigability

Navigability measures whether agents can accurately describe your product: what it does, who it serves, and how to get started.

**Strong AX**
- Agents describe your product's function accurately.
- Agents correctly identify your primary use case.
- Agents know where to send users who want to learn more.
- Agents can explain your differentiation from competitors.
**Weak AX**
- Agents describe a feature set you do not have.
- Agents describe your product as serving a market you do not.
- Agents confuse your pricing, plans, or availability.
- Agents cannot explain what makes your product different.
| Score Band | Label | What it means |
|---|---|---|
| 16 to 20 | Excellent | Accurate, consistent, and differentiated description |
| 11 to 15 | Good | Mostly accurate, occasional gaps in specifics |
| 6 to 10 | Fair | Core description accurate, key details often wrong |
| 0 to 5 | Poor | Agents consistently misrepresent the product |
### 3. Operability

Operability measures whether agents can use your product to complete tasks, both by calling your API directly and by recommending your product for specific jobs.

**Strong AX**
- API returns typed, structured, self-describing responses.
- Agents can authenticate without browser-only OAuth flows.
- Error messages include recovery guidance, not just status codes.
- Rate limits are declared upfront with retry information.
**Weak AX**
- API returns generic error messages with no context.
- Authentication requires a visual browser flow.
- Endpoints behave differently based on undocumented conditions.
- Rate limit responses include no Retry-After guidance.
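The contrast between weak and strong error responses can be sketched as follows. The field names here are illustrative assumptions, not a published schema; the point is that a strong payload names the error type, explains recovery, and declares retry timing, mirroring an HTTP `Retry-After` header.

```python
# Hypothetical sketch contrasting an opaque error payload (weak AX)
# with an agent-friendly one (strong AX). Field names are assumptions.

# Weak AX: a bare status with no context or recovery path.
weak_error = {"error": "Bad Request"}

# Strong AX: typed error, recovery guidance, and declared retry behavior.
strong_error = {
    "error": {
        "type": "rate_limit_exceeded",  # machine-readable error type
        "message": "Rate limit of 100 requests/minute exceeded.",
        "recovery": "Retry after the interval below, or request a higher limit.",
        "retry_after_seconds": 30,      # mirrors an HTTP Retry-After header
    }
}

def can_self_recover(payload: dict) -> bool:
    """An agent can recover only if the payload names the error and a wait time."""
    detail = payload.get("error")
    return isinstance(detail, dict) and "type" in detail and "retry_after_seconds" in detail
```

Given the weak payload, an agent can only fail or guess; given the strong one, it can wait the declared interval and retry without human intervention.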
| Score Band | Label | What it means |
|---|---|---|
| 16 to 20 | Excellent | Tasks complete reliably with full recovery support |
| 11 to 15 | Good | Most tasks succeed, error handling needs work |
| 6 to 10 | Fair | Basic tasks work, failures are opaque |
| 0 to 5 | Poor | Core agent tasks cannot be completed reliably |
### 4. Recoverability

Recoverability measures whether agents self-correct when they hold wrong information about your product.

**Strong AX**
- Agents update incorrect beliefs when given accurate context.
- Agents flag uncertainty rather than asserting incorrect facts.
- Agents distinguish between current and outdated information.
- Agents recommend verification for rapidly changing details.
**Weak AX**
- Agents confidently assert false information about your product.
- Agents repeat incorrect pricing or feature claims.
- Agents cannot distinguish your product from a competitor.
- Agents describe deprecated features as current.
| Score Band | Label | What it means |
|---|---|---|
| 16 to 20 | Excellent | Agents reliably update and flag uncertainty |
| 11 to 15 | Good | Agents usually self-correct, with occasional persistence |
| 6 to 10 | Fair | Agents sometimes self-correct with strong prompting |
| 0 to 5 | Poor | Agents repeat errors even when corrected |
### 5. Transparency

Transparency measures whether agents accurately represent your product's limitations, scope, and appropriate use cases.

**Strong AX**
- Agents acknowledge what your product cannot do.
- Agents qualify recommendations with relevant caveats.
- Agents represent your pricing and access model accurately.
- Agents can articulate when to use a competitor instead.
**Weak AX**
- Agents overclaim your product's capabilities.
- Agents omit limitations that affect buying decisions.
- Agents describe your product as suitable for all use cases.
- Agents describe enterprise-only features as universally available.
| Score Band | Label | What it means |
|---|---|---|
| 16 to 20 | Excellent | Agents represent capabilities and limits precisely |
| 11 to 15 | Good | Agents usually accurate, some overclaiming remains |
| 6 to 10 | Fair | Agents omit significant limitations regularly |
| 0 to 5 | Poor | Agents consistently overclaim your capabilities |