L1–L4 visibility scoring
Four scores. One for whether AI knows you exist (L1), one for the depth of that knowledge (L2), one for whether you get recommended (L3), and one for which of your URLs get cited (L4). Each scored daily, against every model you track.
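To make the layered model concrete, here is a minimal sketch of what one day's output could look like. The record shape and the scoreLayer parameter are illustrative assumptions, not BotScope's actual schema.

```typescript
// Hypothetical per-model, per-day score record; field names are
// illustrative, not BotScope's actual schema.
type VisibilityScore = {
  brand: string;
  model: string; // e.g. "gpt-4o"
  date: string;  // ISO date of the daily scan
  l1: number;    // L1: does the model know the brand exists?
  l2: number;    // L2: how deep is that knowledge?
  l3: number;    // L3: does the model recommend the brand?
  l4: number;    // L4: how well do your URLs get cited?
};

// One record per (brand, model, day); scoreLayer stands in for the
// actual scoring logic, which isn't documented here.
function scoreAll(
  brand: string,
  models: string[],
  scoreLayer: (brand: string, model: string, layer: 1 | 2 | 3 | 4) => number,
): VisibilityScore[] {
  const date = new Date().toISOString().slice(0, 10);
  return models.map((model) => ({
    brand,
    model,
    date,
    l1: scoreLayer(brand, model, 1),
    l2: scoreLayer(brand, model, 2),
    l3: scoreLayer(brand, model, 3),
    l4: scoreLayer(brand, model, 4),
  }));
}
```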
Multi-model coverage
ChatGPT (GPT-4o, GPT-4 Turbo, GPT-5), Claude (Sonnet 4.5, Opus 4.1), Gemini (1.5 Pro, 2.0 Flash), Perplexity (Sonar, Sonar Pro), and Grok. Add models as they launch; a single watchlist runs against all of them.
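A watchlist that fans out to every model might look like the sketch below. The model IDs, query list, and runScan helper are all assumptions for illustration, not BotScope's actual API.

```typescript
// Illustrative watchlist; model IDs and the brand are hypothetical.
const watchlist = {
  brand: "Acme Analytics",
  queries: ["best product analytics tools", "Acme Analytics pricing"],
  models: [
    "gpt-4o", "gpt-4-turbo", "gpt-5",
    "claude-sonnet-4-5", "claude-opus-4-1",
    "gemini-1.5-pro", "gemini-2.0-flash",
    "sonar", "sonar-pro", "grok",
  ],
};

// Stub: a real scan would prompt the model and score the response.
async function runScan(model: string, query: string, brand: string): Promise<void> {
  console.log(`scanning ${model} for "${query}" (brand: ${brand})`);
}

// The same watchlist runs against every model; adding one is a single line.
async function scanWatchlist(): Promise<void> {
  for (const model of watchlist.models) {
    for (const query of watchlist.queries) {
      await runScan(model, query, watchlist.brand);
    }
  }
}
```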
Citation tracing
When a model cites a URL, we capture it, dedupe across runs, and graph the citation web around your brand. See which of your pages do the heavy lifting and which competitor pages are out-citing yours.
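Deduping across runs comes down to URL normalization. The sketch below assumes a simple normalization scheme (drop fragment, query string, and trailing slash) and shows how repeated citations collapse into weighted nodes of the citation graph.

```typescript
// Normalize URLs so the same page cited across runs dedupes to one node.
// These normalization rules are assumptions; the real scheme may differ.
function normalizeUrl(raw: string): string {
  const u = new URL(raw);
  u.hash = "";
  u.search = "";
  return u.toString().replace(/\/$/, "");
}

type Citation = { query: string; model: string; url: string };

// Weight each page by how often it is cited across runs; the
// highest-weight pages are the ones doing the heavy lifting.
function buildCitationGraph(citations: Citation[]): Map<string, number> {
  const weights = new Map<string, number>();
  for (const c of citations) {
    const page = normalizeUrl(c.url);
    weights.set(page, (weights.get(page) ?? 0) + 1);
  }
  return weights;
}
```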
Gap reports
BotScope tells you what AI is getting wrong: missing facts, wrong figures, outdated positioning, competitors named where your brand should be. Each gap links to the exact response that surfaced it.
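One plausible shape for a single gap record, mirroring the categories above; the fields are illustrative assumptions, not BotScope's export format.

```typescript
// Hypothetical gap record; field names are illustrative.
type Gap = {
  kind: "missing_fact" | "wrong_figure" | "outdated_positioning" | "competitor_named";
  model: string;       // which model produced the response
  query: string;       // the prompt that surfaced the gap
  observed: string;    // what the model actually said
  expected: string;    // what it should have said
  responseUrl: string; // link to the exact response that surfaced it
  seenAt: string;      // ISO timestamp of the scan
};
```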
Competitor delta tracking
Add competitors to your watchlist. See where they outperform you on a per-query, per-layer, per-model basis. Watch the gap close — or widen — week over week.
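Under the hood, a per-query, per-layer, per-model delta is just a subtraction over two score records (see the first sketch); here is a minimal, assumed version.

```typescript
// Layer scores for one (brand, query, model) cell; mirrors the
// VisibilityScore sketch above.
type LayerScores = { l1: number; l2: number; l3: number; l4: number };

// Positive values mean you lead on that layer; negative, the competitor.
function delta(yours: LayerScores, theirs: LayerScores): LayerScores {
  return {
    l1: yours.l1 - theirs.l1,
    l2: yours.l2 - theirs.l2,
    l3: yours.l3 - theirs.l3,
    l4: yours.l4 - theirs.l4,
  };
}

// Week over week: if this week's delta exceeds last week's, the gap is
// widening in your favor; if it shrinks, the competitor is closing it.
```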
Sentiment + claim drift
When the way AI talks about you changes — tone, claims, proof points — BotScope flags the drift. You see the new claim and the old claim side by side, with timestamps.
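A minimal way to flag drift is to fingerprint the normalized claim text on each scan and surface both versions when the fingerprint changes. This sketch assumes exact-match detection; real drift detection likely uses a fuzzier comparison of tone and claims.

```typescript
import { createHash } from "node:crypto";

type Claim = { text: string; seenAt: string };

// Fingerprint the claim after collapsing whitespace and case, so trivial
// reformatting doesn't register as drift. Exact-match is an assumption.
function fingerprint(text: string): string {
  return createHash("sha256")
    .update(text.toLowerCase().replace(/\s+/g, " ").trim())
    .digest("hex");
}

// Returns both versions, with timestamps, for the side-by-side view;
// null means no drift between the two scans.
function detectDrift(previous: Claim, current: Claim): { old: Claim; new: Claim } | null {
  if (fingerprint(previous.text) === fingerprint(current.text)) return null;
  return { old: previous, new: current };
}
```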
Daily scan cadence
Most tools run weekly snapshots. BotScope scans every 24 hours so the data behind your decisions reflects what AI is saying right now, not last Tuesday.
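Mechanically, the cadence is just a scheduled job. A sketch under the assumption of a plain timer loop; a production deployment would more likely use cron or a job queue.

```typescript
// Run the full scan every 24 hours, accounting for how long the scan took.
const DAY_MS = 24 * 60 * 60 * 1000;

async function scanLoop(runAllScans: () => Promise<void>): Promise<void> {
  for (;;) {
    const started = Date.now();
    await runAllScans(); // score every watchlist x model pair
    const elapsed = Date.now() - started;
    await new Promise((resolve) => setTimeout(resolve, Math.max(0, DAY_MS - elapsed)));
  }
}
```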
Public reports + embeds
Export visibility reports as PDFs, share read-only links with stakeholders, or embed live charts on your own site. Everything is snapshot-stable, so the numbers you sent are the numbers they see.
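An embed could be as simple as an iframe pinned to a snapshot. The embed URL format, the snapshot parameter, and the botscope.example domain are all hypothetical.

```typescript
// Drop a read-only, snapshot-pinned chart into your own page.
// URL format and the snapshot parameter are illustrative assumptions.
function embedChart(container: HTMLElement, reportId: string, snapshotId: string): void {
  const frame = document.createElement("iframe");
  // Pinning to a snapshot keeps the numbers stable after you share them.
  frame.src = `https://botscope.example/embed/${reportId}?snapshot=${snapshotId}`;
  frame.width = "100%";
  frame.height = "420";
  frame.style.border = "0";
  container.appendChild(frame);
}
```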