
2026.01.15 - AI Visibility: Getting Value in the First 30 Minutes

AI Visibility is a content decision engine for Generative Engine Optimization (GEO). Track "Mentioned" vs "Cited" metrics across AI models, analyze sentiment, and generate prioritized content tasks to ensure your brand wins the answer.


AI Visibility helps you measure and improve how AI models present your brand in answers. Your goal is to answer two questions fast:

  1. Are we Mentioned in the response text?

  2. Are we Cited in the response sources/links?

Everything else in the platform exists to help you diagnose why those outcomes are happening and what to do next.

Availability & Access

AI Visibility is a premium intelligence feature. Access is limited by plan and user role:

  • Subscription: Available only to customers on Scale plans and higher. (Basic plans do not have access.)

  • User Permissions: Visible only to users with Standard permissions or higher. (Guest and Participant users cannot see this feature.)

30-Minute Quickstart

Follow these steps to get immediate value from the tool:

  1. Open Overview.

  2. Set a date range.

  3. Filter by model (optional).

  4. Go to Prompts.

  5. Open 3–5 high-value prompts.

  6. Inside each prompt:

    • Check “Mentioned” and “Cited” status.

    • Open the model response text.

    • Review “Fan-out queries.”

    • Review “Citations.”

    • Review “By page.”

  7. Go to Citations (global). See which domains/pages are shaping answers most often.

  8. Use By Page (global). Pick 1 page to update or 1 missing page to create.

  9. Export results (CSV/Excel) and share with your team.


Mentioned vs. Cited: The Two Signals

AI Visibility is built around two core outcomes. Understanding the difference is critical for your strategy.

Mentioned

  • What it is: Your brand appears inside the AI response text (text-based mention).

  • What it tells you: Are you being recommended or referenced as part of the answer?

Cited

  • What it is: Your website/pages show up in the response sources/links (citations).

  • What it tells you: Is your content being used as evidence?

Interpreting the Combinations

  • Mentioned + Cited: Best outcome (Recommendation + Proof).

  • Mentioned but not Cited: You’re named, but not sourced.

  • Cited but not Mentioned: You’re sourced, but not credited in the narrative.

  • Neither: Your brand is not present.
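
If you work with exported data, these four combinations are easy to compute yourself. Below is a minimal sketch; the record fields and brand/domain values are illustrative assumptions, not the product's actual export schema:

```python
from dataclasses import dataclass, field

@dataclass
class PromptResult:
    # Illustrative fields -- not the actual export schema.
    response_text: str
    citation_urls: list[str] = field(default_factory=list)

def classify(result: PromptResult, brand: str, domain: str) -> str:
    """Bucket one model response into the four Mentioned/Cited combinations."""
    mentioned = brand.lower() in result.response_text.lower()
    cited = any(domain in url for url in result.citation_urls)
    if mentioned and cited:
        return "Mentioned + Cited"       # recommendation + proof
    if mentioned:
        return "Mentioned, not Cited"    # named, but not sourced
    if cited:
        return "Cited, not Mentioned"    # sourced, but not credited
    return "Neither"

# Example with a hypothetical brand:
r = PromptResult("Acme and Globex are popular options.", ["https://acme.com/pricing"])
print(classify(r, brand="Acme", domain="acme.com"))  # Mentioned + Cited
```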


Diagnosing Outcomes & Next Steps

If you are Mentioned but not Cited

Your brand exists in the model’s knowledge, but your website isn’t winning the citation slot.

  • Action: Open the prompt and go to the “Citations” section. Identify which domains are being cited instead.

  • Fix: Update existing pages and/or create a page that answers the prompt's intent more directly; use the "By Page" view to choose which page to target.

If you are Cited but not Mentioned

Your content is being used as evidence, but the model isn’t naming you as the recommendation.

  • Action: Open the prompt response text and review how competitors are presented.

  • Fix: Update your cited pages to clearly position your brand as a top option (not just a provider of general information).

If you are Neither

You likely need better prompt coverage, better competitor coverage, or improved mention detection.

  • Action: Use Prompt Research to expand your prompt list.

  • Fix: Confirm your competitor list includes true category competitors and verify your Matching Names include realistic brand spellings.
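
Mention detection is only as good as the name variants it checks. To make the Matching Names idea concrete, here is a rough sketch of whole-word, case-insensitive matching; the variant list is hypothetical, and the product's actual detection logic may differ:

```python
import re

# Hypothetical brand variants -- mirror realistic spellings in your Matching Names settings.
matching_names = ["Acme", "Acme Corp", "AcmeCorp", "acme.com"]

def find_mentions(response_text: str) -> list[str]:
    """Return which configured name variants appear as whole words in the text."""
    hits = []
    for name in matching_names:
        pattern = r"\b" + re.escape(name) + r"\b"
        if re.search(pattern, response_text, flags=re.IGNORECASE):
            hits.append(name)
    return hits

print(find_mentions("Many teams compare AcmeCorp with Globex."))  # ['AcmeCorp']
```

If your mention counts look low, a variant missing from the list (a spaced, joined, or domain-style spelling) is the usual culprit.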


Managing Prompts

Prompts are the specific questions AI Visibility tracks across models.

Adding Prompts

  • Batching: Add prompts in batches rather than one-offs.

  • Mix: Include buyer-intent, comparison, use-case, and category prompts.

  • Alignment: Ensure prompts reflect the specific questions you want to win.

Removing Prompts

Remove prompts if they are irrelevant, generate noisy/unhelpful answers, or do not reflect your actual buying journey.

  • How: Go to Prompts > Select prompts (bulk or individual) > Delete.

Exporting Data

All tables are exportable (CSV or Excel). Use this to share proof with stakeholders.
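
Once exported, a short script can turn the table into the headline numbers stakeholders ask for. A sketch, assuming a CSV with per-prompt boolean `mentioned` and `cited` columns (the file and column names here are illustrative; match them to your actual export):

```python
import pandas as pd

# Illustrative file and column names -- adjust to your actual export.
df = pd.read_csv("ai_visibility_prompts.csv")

mention_rate = df["mentioned"].mean()
citation_rate = df["cited"].mean()
both_rate = (df["mentioned"] & df["cited"]).mean()

print(f"Mentioned in {mention_rate:.0%} of responses")
print(f"Cited in {citation_rate:.0%} of responses")
print(f"Mentioned + Cited (best outcome) in {both_rate:.0%}")
```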


The Prompt Detail Page

This page is your diagnostic tool for a single query.

Key areas to check:

  1. Response Status: Initial fills may take a few minutes.

  2. Metadata: Check execution date, country, and language, as these affect which competitors appear.

  3. Mentioned/Cited Indicators: The main outcome per model.

  4. Mentions (Competitors): Lists only brands you have defined as competitors.

  5. Matched Patterns: Shows how the system recognized your brand (check this if mentions look low).

  6. Response Text: The full answer text stored for historical comparison.

  7. Fan-out Queries: The searches the model performed to build the answer. Use these for keyword research (see the sketch after this list).

  8. Citations: The sources included in the response. Note: Citation crawling can take up to 24 hours.

  9. By Page: Groups the cited URLs for that prompt to show which pages are shaping the answer.
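
To mine fan-out queries for keyword ideas, tally the terms that recur across a handful of prompts. A sketch with hypothetical queries copied from a few prompt detail pages:

```python
from collections import Counter
import re

# Hypothetical fan-out queries gathered from a few prompt detail pages.
fanout_queries = [
    "best project management software for agencies",
    "project management software pricing comparison",
    "agency workflow tools comparison",
]

# Count recurring terms to surface keyword candidates worth targeting.
terms = Counter()
for query in fanout_queries:
    terms.update(re.findall(r"[a-z]+", query.lower()))

for term, count in terms.most_common(5):
    print(term, count)
```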


Citations & "By Page" Views

Citations (Global)

This view collects all links that show up in responses across your project.

  • View: Citation rate, total citations, and breakdowns by domain/host/page.

  • Action: Filter to a specific tag, identify top cited domains, and ask, "Why are these pages being used as evidence?"
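
If you prefer to slice the raw links yourself, the domain breakdown is a simple aggregation over citation URLs. A sketch, assuming you have a flat list of URLs (the sample links are hypothetical; in practice, read them from an export):

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical citation URLs -- in practice, load these from the export.
citation_urls = [
    "https://acme.com/pricing",
    "https://acme.com/blog/comparison",
    "https://reviews.example.org/top-tools",
]

# Tally which domains are shaping answers most often.
by_domain = Counter(urlparse(url).netloc for url in citation_urls)
for domain, count in by_domain.most_common():
    print(f"{domain}: {count} citations")
```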

By Page (Action View)

This view connects a specific URL to the prompts where it appears.

  • Action: Sort by citations, open a URL, and review the prompts driving it.

  • Decision: Decide whether to update the existing page or create a new page designed for that specific prompt cluster.


Advanced Features

Tags

Tags allow you to group prompts (e.g., "Brand," "Product A," "Competitors") for clean filtering in reporting. You can create tags manually or use AI to cluster prompts automatically.
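
Conceptually, automatic clustering groups prompts by text similarity. A toy sketch of that idea using TF-IDF vectors and k-means; this illustrates the concept only and is not the product's actual algorithm:

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical prompt list.
prompts = [
    "best crm for small business",
    "crm pricing comparison",
    "how to migrate email marketing lists",
    "email marketing automation tools",
]

# Embed prompts as TF-IDF vectors and group them into candidate tag clusters.
vectors = TfidfVectorizer().fit_transform(prompts)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for prompt, label in zip(prompts, labels):
    print(label, prompt)
```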

Prompt Research

Use this to find new prompts without guessing.

  • People Also Asked: Analyzes Google’s related questions based on a seed keyword.

  • Magic Prompts: An AI agent generates prompts based on context.

  • Topics: Generates prompts around a specific theme.

  • Translation: Localizes prompt sets for different regions.

Sentiment

Tracks the narrative quality of the answers. Use this to identify reputation risks or consistent negative themes in how your brand is described.

Content Recommendations & Tasks

This section converts data into deliverables.

  1. Recommendations: View high-priority actions based on your project data.

  2. Tasks: Generate actual content assets like Briefs, Articles, or PR Insights.

  3. Export: Download tasks as Markdown, HTML, or PDF.


FAQ

  • Why is a model still processing? This usually means the system is still collecting responses. New prompts and projects may take a few minutes to fill.

  • Why does it say citations are empty? Citation crawling and "mentions you" status verification can take up to 24 hours to populate fully.

  • Why don’t I see a competitor mentioned? Competitor mentions only show for brands you have explicitly defined in your project settings.

  • Why does our mention count look wrong? Check your Matching Names settings to ensure the detection logic covers all variations of your brand name.
