Dimensions

Five dimensions of agent experience

The AX Score evaluates any product across five dimensions, each scored 0 to 20, for a maximum total of 100.

AX Score: all 5 dimensions
#  Dimension        What it measures                                 Max
1  Discoverability  Can agents find the brand in category queries?    20
2  Navigability     Can agents accurately describe what you do?       20
3  Operability      Can agents complete tasks using the product?      20
4  Recoverability   Do agents self-correct when wrong about you?      20
5  Transparency     Does AI represent your limitations accurately?    20
   Total AX Score                                                    100
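The arithmetic above is simple enough to sketch in code. This is an illustrative sketch only; the function names and dictionary keys are hypothetical, not part of any published AX Score tooling. The band thresholds (16 to 20, 11 to 15, 6 to 10, 0 to 5) match the per-dimension rubrics used throughout this section.

```python
# Illustrative sketch of the AX Score arithmetic.
# Names are hypothetical, not part of an official AX Score library.

DIMENSIONS = (
    "Discoverability",
    "Navigability",
    "Operability",
    "Recoverability",
    "Transparency",
)

def ax_total(scores: dict) -> int:
    """Sum the five dimension scores (each 0 to 20) into a 0 to 100 total."""
    for name in DIMENSIONS:
        value = scores[name]
        if not 0 <= value <= 20:
            raise ValueError(f"{name} must be 0 to 20, got {value}")
    return sum(scores[name] for name in DIMENSIONS)

def band(dimension_score: int) -> str:
    """Map a single dimension score to its rubric label."""
    if dimension_score >= 16:
        return "Excellent"
    if dimension_score >= 11:
        return "Good"
    if dimension_score >= 6:
        return "Fair"
    return "Poor"
```

For example, a product scoring 14, 12, 16, 10, and 8 across the five dimensions would total 60 out of 100.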
01 Discoverability

Discoverability measures whether agents can find your product when researching a category. When a user asks an AI for tool recommendations, does your product appear in the answer?

Strong AX

  • Mentioned prominently in category queries across models.
  • Accurately placed in the correct product category.
  • Consistent positioning across ChatGPT, Perplexity, Claude, and Gemini.
  • Appears in comparison queries alongside direct competitors.

Weak AX

  • Omitted from category listings entirely.
  • Confused with a competitor by one or more models.
  • Appears in some models but invisible in others.
  • Associated with outdated or incorrect category terms.
Dimension 1: Discoverability (0 to 20 points)
Score band  Label      What it means
16 to 20    Excellent  Found accurately across all major AI platforms
11 to 15    Good       Found reliably, some positioning gaps remain
6 to 10     Fair       Found inconsistently across models
0 to 5      Poor       Not found in AI category queries
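One way to spot-check this dimension is to run the same category query against several models and count how many mention the brand. The sketch below is hypothetical: `ask_model` stands in for real platform API calls (ChatGPT, Perplexity, Claude, Gemini) and is stubbed with canned answers so the scoring logic stays self-contained, and the brand names are invented.

```python
# Hypothetical discoverability spot-check. In practice, ask_model would call
# each platform's real API; here it returns canned answers for illustration.

CANNED_ANSWERS = {
    "chatgpt": "Popular options include Acme, WidgetCo, and ToolCorp.",
    "perplexity": "Top tools in this category: WidgetCo and ToolCorp.",
}

def ask_model(model: str, query: str) -> str:
    # Stub: a real implementation would send `query` to the named platform.
    return CANNED_ANSWERS[model]

def discoverability_rate(brand: str, query: str, models: list) -> float:
    """Fraction of models whose category answer mentions the brand."""
    hits = sum(
        1 for m in models if brand.lower() in ask_model(m, query).lower()
    )
    return hits / len(models)
```

In this stubbed example, "Acme" appears in one of the two canned answers, so its rate is 0.5, while "ToolCorp" appears in both, for a rate of 1.0.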
02 Navigability

Navigability measures whether agents can accurately describe your product: what it does, who it serves, and how to get started.

Strong AX

  • Agents describe your product's function accurately.
  • Agents correctly identify your primary use case.
  • Agents know where to send users who want to learn more.
  • Agents can explain your differentiation from competitors.

Weak AX

  • Agents describe a feature set you do not have.
  • Agents describe your product as serving a market you do not serve.
  • Agents confuse your pricing, plans, or availability.
  • Agents cannot explain what makes your product different.
Dimension 2: Navigability (0 to 20 points)
Score band  Label      What it means
16 to 20    Excellent  Accurate, consistent, and differentiated description
11 to 15    Good       Mostly accurate, occasional gaps in specifics
6 to 10     Fair       Core description accurate, key details often wrong
0 to 5      Poor       Agents consistently misrepresent the product
03 Operability

Operability measures whether agents can use your product to complete tasks: both through your API and by recommending your product for specific jobs.

Strong AX

  • API returns typed, structured, self-describing responses.
  • Agents can authenticate without browser-only OAuth flows.
  • Error messages include recovery guidance, not just status codes.
  • Rate limits are declared upfront with retry information.

Weak AX

  • API returns generic error messages with no context.
  • Authentication requires a visual browser flow.
  • Endpoints behave differently based on undocumented conditions.
  • Rate limit responses include no Retry-After guidance.
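The contrast between the two error styles above can be sketched as payloads. The field names here are illustrative, not a published standard; the `Retry-After` header, however, is the standard HTTP mechanism for the retry guidance mentioned above.

```python
# Weak AX: a generic error with no context or recovery guidance.
weak_error = {"error": "Bad Request"}

# Strong AX: typed, self-describing, with recovery guidance and retry info.
# Field names are illustrative, not a published standard.
strong_error = {
    "error": {
        "type": "rate_limit_exceeded",
        "message": "Request rate of 120/min exceeds the 100/min limit.",
        "recovery": "Wait for the interval below, then retry with backoff.",
        "retry_after_seconds": 30,
    }
}

# The same retry hint should also appear as a standard HTTP header on the
# 429 response, so agents that only inspect headers can still recover.
response_headers = {"Retry-After": "30"}
```

An agent receiving the strong variant can parse the error type, explain the failure, and schedule a retry without human intervention; the weak variant forces it to guess.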
Dimension 3: Operability (0 to 20 points)
Score band  Label      What it means
16 to 20    Excellent  Tasks complete reliably with full recovery support
11 to 15    Good       Most tasks succeed, error handling needs work
6 to 10     Fair       Basic tasks work, failures are opaque
0 to 5      Poor       Core agent tasks cannot be completed reliably
04 Recoverability

Recoverability measures whether agents self-correct when they hold wrong information about your product.

Strong AX

  • Agents update incorrect beliefs when given accurate context.
  • Agents flag uncertainty rather than asserting incorrect facts.
  • Agents distinguish between current and outdated information.
  • Agents recommend verification for rapidly changing details.

Weak AX

  • Agents confidently assert false information about your product.
  • Agents repeat incorrect pricing or feature claims.
  • Agents cannot distinguish your product from a competitor.
  • Agents describe deprecated features as current.
Dimension 4: Recoverability (0 to 20 points)
Score band  Label      What it means
16 to 20    Excellent  Agents reliably update and flag uncertainty
11 to 15    Good       Agents usually self-correct; occasional errors persist
6 to 10     Fair       Agents sometimes self-correct with strong prompting
0 to 5      Poor       Agents repeat errors even when corrected
05 Transparency

Transparency measures whether agents accurately represent your product's limitations, scope, and appropriate use cases.

Strong AX

  • Agents acknowledge what your product cannot do.
  • Agents qualify recommendations with relevant caveats.
  • Agents represent your pricing and access model accurately.
  • Agents can articulate when to use a competitor instead.

Weak AX

  • Agents overclaim your product's capabilities.
  • Agents omit limitations that affect buying decisions.
  • Agents describe your product as suitable for all use cases.
  • Agents describe enterprise-only features as universally available.
Dimension 5: Transparency (0 to 20 points)
Score band  Label      What it means
16 to 20    Excellent  Agents represent capabilities and limits precisely
11 to 15    Good       Agents usually accurate, some overclaim remains
6 to 10     Fair       Agents omit significant limitations regularly
0 to 5      Poor       Agents consistently overclaim your capabilities