From fine-tuning competitor tracking to analyzing sentiment across different AI models, this guide breaks down every sub-page and setting in the dashboard. Use this resource to navigate granular data, configure advanced project preferences, and interpret the specific metrics that drive your Generative Engine Optimization (GEO) success.
Table of Contents
Overview
Prompts
Responses
Citations
Sentiment
AI Model Comparisons
Query Fan Out
Content
Recommendations
Prompt Research
Preferences
Competitors
Tags
Overview
The Overview page is your "Radar," aggregating metrics across all tracked AI models (ChatGPT, Gemini, Perplexity, etc.).
Filters: Apply filters for date range, specific models, or tags to narrow your view.
Brand Mentions: The total count of text-based mentions in AI responses.
Visibility: The percentage of all tracked responses in which your brand appears.
AI Visibility Score: A position-weighted metric: a #1 ranking earns 100 points, #2 earns 50 points, and so on down the list (see the sketch below).
Share of Voice: Visualizes your dominance against competitors. Viewable as a pie chart, trend line, or percentage breakdown.
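As a rough illustration, here is a minimal sketch of how these Overview metrics could be computed from raw response data. The 100-point and 50-point weights for ranks #1 and #2 come from the description above; extending the decay as 100 divided by the rank for lower positions is an assumption, as is the shape of the sample data.

```python
# Illustrative sketch of Overview metrics derived from raw response data.
# Assumption: position weights continue as 100 / rank beyond the documented
# values of 100 for #1 and 50 for #2.

responses = [
    {"brand_rank": 1},    # brand listed first in this AI answer
    {"brand_rank": 3},    # brand listed third
    {"brand_rank": None}, # brand not mentioned at all
]

brand_mentions = sum(1 for r in responses if r["brand_rank"] is not None)
visibility = brand_mentions / len(responses) * 100   # % of responses with a mention

def position_weight(rank):
    return 100 / rank if rank else 0                  # 100 for #1, 50 for #2, ...

ai_visibility_score = sum(position_weight(r["brand_rank"]) for r in responses) / len(responses)

print(f"Mentions: {brand_mentions}, Visibility: {visibility:.0f}%, Score: {ai_visibility_score:.1f}")
```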
Prompts
This section manages the specific queries you track.
Prompt List: View and manage all active queries. You can delete irrelevant prompts individually or in bulk.
Prompt Detail Page: Clicking a prompt reveals granular diagnostics:
Mentioned vs. Seeded: "Mentioned" means your brand appears in the response text; "Seeded" means your URL appears as a source link (both are illustrated in the sketch below).
Matched Patterns: Shows exactly which "Matching Name" triggered the detection.
Fan-out Queries: Displays the background searches the AI performed on search engines (e.g., Google/Bing) to generate its answer.
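As a hedged illustration of the Mentioned vs. Seeded distinction and of matched patterns, the sketch below checks a single response against a brand's matching names and domain. The names, domain, and field layout are placeholder assumptions, not values or logic taken from the product.

```python
# Illustrative check of "Mentioned" (a matching name appears in the answer text)
# versus "Seeded" (your URL appears among the cited sources). The matching names
# and domain below are hypothetical.

matching_names = ["Acme", "Acme Analytics"]   # stand-ins for configured "Matching Name" patterns
brand_domain = "acme.example"                 # stand-in for the brand's domain

response = {
    "text": "For GEO tracking, Acme Analytics and two rivals stand out.",
    "sources": ["https://reviews.example/top-tools", "https://acme.example/blog/geo"],
}

matched = [name for name in matching_names if name.lower() in response["text"].lower()]
mentioned = bool(matched)                                            # brand named in the text
seeded = any(brand_domain in url for url in response["sources"])     # brand URL cited as a source

print(f"Mentioned: {mentioned} (patterns: {matched}), Seeded: {seeded}")
```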
Responses
The Responses tab is your historical archive.
History: Because AI answers change over time, this page archives past responses for the retention period included in your plan.
Comparison: Navigate through previous dates to see how an answer looked in the past versus today.
Cadence: Data is tracked weekly by default but can be scheduled for daily or monthly updates.
Citations
This view analyzes the sources driving your visibility.
Citation Rate: Measures how frequently a specific link appears across all responses (see the sketch below).
Source Breakdown: View citations grouped by Domain, Host, or Page.
Mentions You: A status indicator that confirms whether the cited page actually contains a text mention of your brand (can take up to 24 hours to populate).
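For context, the sketch below shows one plausible way a citation rate could be computed: the share of tracked responses in which a given domain is cited. Grouping by domain mirrors the Domain/Host/Page breakdown above; the data and field names are invented for illustration.

```python
# Rough sketch of a citation-rate calculation: for each cited domain, the share
# of tracked responses in which it appears at least once.

from collections import Counter
from urllib.parse import urlparse

responses = [
    {"citations": ["https://acme.example/blog/geo", "https://reviews.example/top-tools"]},
    {"citations": ["https://reviews.example/top-tools"]},
    {"citations": []},
]

domain_hits = Counter()
for r in responses:
    domains = {urlparse(url).netloc for url in r["citations"]}   # dedupe within a single response
    domain_hits.update(domains)

for domain, hits in domain_hits.most_common():
    print(f"{domain}: cited in {hits / len(responses):.0%} of responses")
```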
Sentiment
AI analyzes the qualitative tone of your mentions.
Sentiment Score: Responses are tagged from "Very Positive" to "Negative".
Topics Overview: A heatmap showing themes associated with your brand, such as "Features," "Design," "Performance," or "Ease of Use".
Pro Tip: Use the filters to sort by "Negative" to immediately identify reputation risks. Make these your top priorities for new content.
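A minimal sketch of that triage workflow on exported data, assuming hypothetical field names and sentiment labels, might look like this:

```python
# Sketch of triaging exported sentiment data: pull out negative mentions and
# group them by topic so reputation risks surface first. Field names and the
# label set are assumptions for illustration.

from collections import Counter

mentions = [
    {"sentiment": "Very Positive", "topic": "Features"},
    {"sentiment": "Negative",      "topic": "Performance"},
    {"sentiment": "Negative",      "topic": "Ease of Use"},
    {"sentiment": "Positive",      "topic": "Design"},
]

negatives = [m for m in mentions if m["sentiment"] == "Negative"]
print(Counter(m["topic"] for m in negatives))   # topics to prioritize for new content
```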
AI Model Comparisons
A matrix view for cross-platform analysis.
Horizontal Analysis: Compare your brand's performance across specific models (e.g., strong in Gemini, weak in Perplexity).
Metric Toggles: Switch the view to compare Citations, Citation Rate, or Net Sentiment across the different models.
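To make the matrix idea concrete, the sketch below builds a model-by-metric table with pandas. The figures are invented, and defining net sentiment as the positive share minus the negative share of mentions is an assumption rather than the product's exact formula.

```python
# Illustrative model-by-metric matrix of the kind this view presents.
# All numbers are invented sample data.

import pandas as pd

rows = [
    {"model": "ChatGPT",    "visibility": 62, "citation_rate": 0.18, "net_sentiment": 0.41},
    {"model": "Gemini",     "visibility": 71, "citation_rate": 0.22, "net_sentiment": 0.35},
    {"model": "Perplexity", "visibility": 38, "citation_rate": 0.09, "net_sentiment": 0.12},
]

matrix = pd.DataFrame(rows).set_index("model")
print(matrix)           # one row per model, one column per metric
print(matrix.idxmin())  # weakest model for each metric, i.e. where to focus effort
```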
Query Fan Out
This section aggregates the search data used by AI models.
Keyword Strategy: View the actual queries LLMs typed into search engines to find answers about you.
Filtering: Filter query data by prompt, tag, country, or language to refine your SEO keyword research.
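As a sketch of how exported fan-out data could feed keyword research, the example below filters hypothetical query records by tag and language and ranks them by frequency. The field names and values are assumptions for illustration only.

```python
# Sketch of turning exported fan-out queries into a keyword shortlist.
# The record fields (query, tag, language) are assumed names.

from collections import Counter

fan_out = [
    {"query": "best geo tracking tools", "tag": "Brand", "language": "en"},
    {"query": "acme analytics pricing",  "tag": "Brand", "language": "en"},
    {"query": "best geo tracking tools", "tag": "Local", "language": "en"},
    {"query": "outils de suivi geo",     "tag": "Brand", "language": "fr"},
]

english_brand = [q["query"] for q in fan_out if q["tag"] == "Brand" and q["language"] == "en"]
for query, count in Counter(english_brand).most_common():
    print(count, query)   # most frequent fan-out queries first
```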
Content
Your execution engine. Content bridges the gap between data and creation by using your specific account metrics (prompts, citations, and competitor gaps) to generate high-impact content strategies. Use this suite to prioritize your next moves and instantly create the assets needed to win.
Please Note: Content features are currently in public beta.
Recommendations
The "Quest Board" for your content strategy.
Priority: Recommendations are sorted into High, Medium, and Low priority based on potential impact.
Context: Each card explains the "Why" (e.g., a content gap vs. a competitor) and provides specific action items.
Data-Driven: Recommendations are generated using your specific account data (prompts, mentions, and citations), not generic advice.
Open a recommendation's details to drill down into its priority, action items, linked references, and an estimate of impact based on your planned activities.
Prompt Research
Several methods for expanding your tracking coverage.
Manual: Add prompts one by one.
Bulk: Upload a list (one per line) and assign tags.
People Also Asked: Indexes Google’s "People Also Asked" questions based on a seed keyword.
Topics: Generates prompts based on a broad topic category.
Translate: Translates and localizes existing prompts for new international markets.
Preferences
Use these preferences to customize your AI Visibility dashboard and reporting to suit your unique needs.
Competitors
Manage who you are benchmarking against.
Add Competitors: Use the "Suggested" list or manually add brands by name/ID.
Visualization: Customize chart colors for specific competitors to make reports easier to read.
Tags
Organize your data for better filtering.
AI Grouping: The system can automatically cluster your prompts into relevant groups (e.g., "Brand," "Local," "Social").
Manual Tags: Create custom tags and assign prompts manually or via CSV upload.
Pro Tip: Don't skip tags. Think of tags as the "connective tissue" of your strategy. While they may appear as a preference, they are a critical tool for performance analysis. Without them, your data is just a massive list of prompts; with them, you can instantly "slice and dice" your metrics to see how you're winning in specific product lines, industries, or high-intent buyer stages. Take five minutes now to cluster your prompts into tags so you can evaluate broad performance patterns rather than getting lost in individual responses.