✦ Not astrology, but astronomy of AI SEO ✦

Query Explorer for AI SEO

Get live fan-out queries and the URLs that ChatGPT, Gemini & Perplexity cite to answer a query, plus a Citation Signal (CS) to identify citable content topics.

Single Query | Bulk Queries ⤴

*Free. No sign-up.

How to Explore Queries in LLMs for AI SEO?

1. Input

Enter a query like your audience/customer would in an LLM. Click ‘Analyze Query’.

2. Output

Get the fan-out queries and citations that top LLMs generate when they search the web to answer that query. Live!

3. Apply

Use the insights into how LLMs answered that query to inform your AI SEO strategy.

FAQs on LLM Query Exploration for AI SEO

What is AI SEO?

AI SEO is the practice of understanding how AI systems (like ChatGPT, Gemini, or Perplexity) find, interpret, and cite information from the web when answering user queries.

Interestingly, many SEO leaders believe AI SEO is mostly just SEO best practices on steroids. We believe that too.

How is AI SEO different from traditional SEO?

Traditional SEO focuses on search engines and result pages. AI SEO focuses on LLM answers and citations.

Search engines rank links, while LLMs synthesize answers (sometimes with citations, sometimes without). So AI SEO is less about “getting ranked” and more about “getting cited”.

How do LLMs like ChatGPT and Gemini change how search works?

Instead of showing 10 blue links, LLMs:
• break a query into sub-queries
• search the web selectively
• pull from a handful of sources
• summarize the result

This means fewer sources get exposure, but those that do matter more.

What is the difference between SEO, GEO, AEO and AIO?

Well, SEO stands for Search Engine Optimization. It's been around since... search engines (even before Google). Any effort to optimize for search engines is the art & science of SEO.

However, since the inception of LLMs and GenAI like ChatGPT, Gemini, etc., a practice has emerged of optimizing to show up there. Efforts in that regard are termed:
• GEO (Generative Engine Optimization): Optimize for generative engines like ChatGPT and Gemini
• AEO (Answer Engine Optimization): Optimize for answer engines like Perplexity

AIO is different. It stands for AI Overviews, a Google feature that shows an AI-generated snippet for a query on its result page.

AEO & GEO overlap to the point that they are used interchangeably. Typically, though, GEO cares more deeply about how LLMs source information.

Is AI SEO about rankings, visibility, or citations?

Primarily citations and mentions, because of the way LLMs (where you want to do AI SEO) work. If an LLM cites you, you’re visible by default.

That’s why tools like QueryCat.app focus on what LLMs actually cite.

How do LLMs decide which sources to cite?

There’s no public rulebook. But patterns exist:
• authority
• clarity
• topical relevance
• diversity of sources
• freshness (sometimes)

QueryCat.app surfaces these patterns by showing fan-out queries and cited URLs — in both Single Query and Bulk Queries modes.

Do keywords still matter in AI SEO?

Yes — but differently. Keywords help LLMs:
• interpret intent
• expand into fan-out queries
• choose what to search for

AI SEO is less about exact matches and more about semantic coverage.

What is AI keyword research, and how is it different?

Traditional keyword research asks: “What do people search for?”
AI keyword research asks: “How does an LLM expand and explore this question?”

QueryCat.app is built exactly for this — comparing how multiple queries trigger web search and citations.

Why are citations becoming important in AI-driven search?

Because citations are the only transparent signal both users and brands can see. No SERP position. No impression count. Just: who got referenced.

That’s why AI SEO is drifting toward citation analysis.

What are AI visibility tools?

AI visibility tools attempt to predict or score how visible a brand or page might be in AI answers, and recommend actionables for AI SEO.

They usually rely on:
• crawled content
• inferred prompts
• proprietary scoring models

What are typical features of AI visibility tools?

Common features of AI visibility tools include:
• visibility scores
• brand mention tracking
• prompt simulations
• dashboards and trend lines

Most focus on outcomes. But, frankly, their methodologies are opaque, and their claims are based on in-house research.

Why is AI SEO often expensive and opaque?

Because LLM behavior isn’t deterministic.

So, plausibly, AI SEO tools and vendors rely on black-box models making predictions from huge data sets, with large, talented teams, tonnes of features, and branding on top. It all adds up to ~$150 – $200/month as the typical starting price for AI SEO tools.

Is AI SEO reliable, or is it still experimental?

It’s early and experimental. Some tools may claim reliability to varying degrees. It’s better to be skeptical (not cynical, though) and dig deeper into their methodologies.

By the way, that’s why GEO instrument-style tools (like QueryCat.app) have a place alongside prediction tools right now. Understand first. Optimize later.

Can QueryCat.app help with AI SEO?

Yes — but in a very specific way. QueryCat.app doesn’t tell you what to do to get cited, or track your website, your pages, your brand keywords/phrases, and the like. Frankly, with the way things are changing in the SEO/GEO space, any such actionables could be obsolete before their results could be observed.

It takes a more ‘show, don’t tell’ approach based on real data provided by LLMs themselves:
• when they search
• what they search for
• what they cite

We do it in two modes: Single Query and Bulk Queries.

What is an LLM query?

An LLM query is the question a user asks an AI system like ChatGPT, Gemini, or Perplexity, also called the prompt. Internally, though, that single question is often broken down into multiple smaller queries, especially if a web search is required to answer it.

That internal process is what QueryCat.app helps you see.

How is an LLM query different from a Google search query?

Firstly, people tend to type short (sometimes single-word) queries on Google, while on LLMs they tend to ask long queries in proper question format. Also, a Google query usually stays as-is when being answered; an LLM query rarely does.

Instead, LLMs reinterpret, expand, and reformulate your question before answering it. That’s why two users asking “the same thing” may trigger different searches.

What are fanout queries, exactly? Why do LLMs generate fan-out queries?

Fan-out queries are the sub-queries that an LLM generates internally to answer a question properly, because one long, multi-part question is often too vague, incomplete, or ambiguous for an LLM to answer directly.

Think of them as: “If I had to answer this well, what else would I need to know?”

They’re not guesses. They’re different angles on the same question.

When do LLMs decide to use web search instead of memory?

LLMs do not always search the web or cite sources. By default, they try to answer a query from their training data (memory).

However, they tend to search the web when the model isn’t confident in its training data and:
• the topic is recent
• accuracy matters
• multiple viewpoints exist

How does web search work inside LLMs?

Different LLMs implement web search differently. However, broadly they follow these steps:
1. The LLM interprets the query to decide whether it requires a web search. If not, it simply answers from memory; otherwise, it moves to step 2.
2. It generates fan-out queries
3. It performs targeted web searches
4. It selects and cites sources
5. It synthesizes an answer

Then this synthesized answer is presented to the user along with the links to cited sources.
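The five steps above can be sketched as a single pipeline. This is a hedged illustration, not any vendor’s actual implementation; every function passed in (`needsSearch`, `expand`, `search`, `synthesize`) is a hypothetical stand-in for internals that LLM providers don’t expose:

```javascript
// Illustrative sketch of the broad LLM web-search flow described above.
// All injected functions are hypothetical stand-ins, not real vendor APIs.
function answerQuery(query, { needsSearch, expand, search, synthesize }) {
  // Step 1: if no web search is needed, answer from memory with no citations
  if (!needsSearch(query)) {
    return { answer: synthesize(query, []), citations: [] };
  }
  // Step 2: generate fan-out queries from the user's question
  const fanoutQueries = expand(query);
  // Step 3: run a targeted web search per fan-out query
  const results = fanoutQueries.flatMap((q) => search(q));
  // Step 4: select sources to cite (here: simply de-duplicate by URL)
  const citations = [...new Map(results.map((r) => [r.url, r])).values()];
  // Step 5: synthesize the final answer from the selected sources
  return { answer: synthesize(query, citations), citations };
}
```

The useful takeaway: citations only exist at all if step 1 decides a search is needed, which is exactly the branch QueryCat.app observes.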

Why do different LLMs answer the same query differently?

Because they are tuned differently, which is entirely a decision of each LLM provider.

Broadly we can assume that they answer the same query differently because they:
• interpret intent differently
• generate different fan-out queries
• search different sources
• apply different citation thresholds

In fact, if you ask one LLM the same query twice, it will not only phrase the answer differently but may also cite different sources.

That’s why QueryCat.app compares ChatGPT, Gemini, and Perplexity side by side, painting a bigger picture so you can explore an LLM query in detail.

Do LLMs always cite sources when answering queries?

No. Not always. And for AI SEO, knowing when and which queries do NOT trigger LLMs to cite sources helps just as much as knowing when they do.

Because if an LLM isn’t citing, it’s answering from memory, and there’s little to nothing you can do to earn a brand mention. It’s only when LLMs cite that there’s an opportunity to be mentioned.

So, know the difference and focus on the queries that can yield citations. Spotting that difference is a feature of QueryCat.app.

How stable is LLM query behavior over time?

Not very. All LLM providers are constantly updating their models and the data behind them, both old and new.

So, you may see LLM query behavior vary over time.

Can understanding LLM queries help with AI SEO or GEO?

Yes — but indirectly, and that distinction matters.

Understanding LLM queries helps you see how a model interprets intent, how it breaks a question down, and when it decides to search the web instead of relying on memory.

That insight can guide how you think about topics, coverage, and supporting content — without pretending there’s a guaranteed outcome.

What parts of LLM query behavior are observable?

Honestly, not all of it. It differs between LLMs, and even between what’s observable in the chat UI versus via an LLM’s API.

As of Dec ‘25, with the ChatGPT and Perplexity APIs, you can observe:
• whether the LLM triggered a web search
• which URLs it cited, and their page titles

In the case of Gemini API, you get fanout_queries upfront.

You CANNOT observe:
• internal model weights
• ranking formulas
• or how the same query might behave tomorrow

QueryCat.app is explicit about this boundary. It shows what’s observable, infers fanout_queries (reliably with SEO best practices), Source Diversity (with a simple heuristic), and calculates its own Citation Signal (which is based on the cited sources' count, diversity and distribution).

Most importantly, it chooses to stay quiet about what isn’t observable.

How does QueryCat.app analyze LLM queries?

QueryCat.app runs real queries through real LLMs (their respective APIs, not chat interfaces) and captures what they show.

That includes whether a web search was triggered and which title + URL pairs were referenced to answer the query.

From that, it infers fanout_queries (for ChatGPT and Perplexity; Gemini provides it up front), Source Diversity (with a simple heuristic), and calculates its own Citation Signal (formula in QueryCat.app tab of FAQ).

In Single Query, you explore one query in detail.
In Bulk Queries, you compare multiple queries side by side to spot patterns.

No simulations. No synthetic prompts. Just live observable behavior.

What is QueryCat.app, exactly?

QueryCat.app is an LLM Query Explorer.

It helps you see how large language models like ChatGPT, Gemini, and Perplexity actually answer a query — including when they search the web, what they search for, and which sources they cite.

Think of it as an AI SEO tool or instrument to look under the hood of LLMs.

“Not astrology, but the astronomy of AI SEO” —what’s up with this tagline?

Our tagline is both a statement to our users and a reminder to ourselves: understand AI SEO first, optimize later.

A lot of AI SEO today feels like astrology — bold predictions, vague scores, and confident claims without showing the underlying evidence. Astronomy, on the other hand, is about observation. You don’t predict stars by vibes; you study their movement with instruments.

QueryCat.app follows the same philosophy. Instead of guessing outcomes, it shows how LLMs actually search, expand queries, and cite sources. No prophecy — just observation.

What does QueryCat.app show?

Most AI SEO tools try to guess visibility or predict performance. QueryCat.app takes a simpler route: show what’s happening instead of forecasting what might happen.

If you’re trying to understand:
• why some sources get cited
• when LLMs trigger web search
• how query interpretation changes across models

…QueryCat.app gives you direct evidence.

What does QueryCat.app NOT show?

By design, QueryCat.app does not:
• predict brand rankings on LLMs
• track prompts
• list prompt volume
• estimate LLM traffic
• guarantee citations
• recommend optimizations

Those things require assumptions. QueryCat sticks to evidence, like an astronomer would.

How does QueryCat.app come up with Fanout Queries and Source Diversity?

QueryCat.app runs real queries through real LLM APIs — not the chat interface — and captures what the models actually expose during the process.

Specifically, it looks at:
• whether a web search was triggered
• which title + URL pairs were referenced to answer the query

From there:
• For Gemini, fan-out queries are often provided directly by the model.
• For ChatGPT and Perplexity, fan-out queries are inferred based on the titles and URLs the model cited while answering.

Source Diversity is then calculated by grouping those cited URLs into broad categories (for example: blogs, news sites, brand pages, reference sources) using a simple heuristic.
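As an illustration, a “simple heuristic” of that kind could bucket each cited URL by hostname patterns. QueryCat.app doesn’t publish its exact rules, so the function names (`categorizeUrl`, `sourceDiversity`), the category labels, and the patterns below are all assumptions:

```javascript
// Hypothetical sketch of a hostname-based categorization heuristic.
// The real QueryCat.app rules are not published; these patterns are assumed.
function categorizeUrl(url) {
  const host = new URL(url).hostname;
  if (/wikipedia|wiktionary|britannica/.test(host)) return "reference";
  if (/news|reuters|bbc|nytimes/.test(host)) return "news";
  if (/blog|medium|substack/.test(host)) return "blog";
  return "brand"; // fallback: treat everything else as a brand page
}

// Count cited URLs per category, e.g. { reference: 1, blog: 2, brand: 1 }
function sourceDiversity(citedUrls) {
  const counts = {};
  for (const url of citedUrls) {
    const category = categorizeUrl(url);
    counts[category] = (counts[category] || 0) + 1;
  }
  return counts;
}
```

The resulting counts feed directly into the diversity bonus and dominance penalty of the Citation Signal formula described below.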

What is Citation Signal? How do we calculate it?

Citation Signal is a directional metric, not a prediction. It’s derived from observable LLM behavior, such as:
• how many unique sources were cited
• how diverse those sources were
• whether citations were dominated by a single source type

Here’s the exact logic used in the current version, shown as a self-contained function (`uniqueUrls` is the de-duplicated list of cited URLs; `categoryCounts` maps each source category to its citation count):

// Directional Citation Signal score (0–100)
function citationSignal(uniqueUrls, categoryCounts) {
  // Base score from source count (15 points per unique URL, capped at 60)
  let score = Math.min(uniqueUrls.length * 15, 60);

  // Bonus for multiple source categories
  const categoryTotal = Object.keys(categoryCounts).length;
  if (categoryTotal >= 3) score += 20;
  else if (categoryTotal >= 2) score += 10;

  // Penalty for single-category dominance (> 70% of unique URLs)
  const maxCategoryCount = Math.max(...Object.values(categoryCounts));
  if (maxCategoryCount / uniqueUrls.length > 0.7) score -= 10;

  // Cap at 100
  return Math.min(score, 100);
}


The idea is simple: of two queries, the one that triggered more citations from more varied sources is more likely to trigger citations from multiple (and plausibly different) sources next time. For example, a query that yields 4 unique URLs across 3 categories scores min(4 × 15, 60) + 20 = 80, with no dominance penalty as long as no single category exceeds 70% of the citations.

Is Citation Signal a prediction or a guarantee of citation?

No. And it’s important to say that clearly: Citation Signal is exactly what the name says—a signal, not a promise.

A few important clarifications:
• Citation Signal (CS) does not measure probability of being cited.
• A 100 CS does not guarantee that content on this topic will be cited.
• It’s meant for comparison and pattern spotting, not forecasting.

In short: it’s a signal you can observe and reason with — not a promise to rely on.

Which LLMs does QueryCat.app currently support?

Right now, QueryCat.app supports three models:
• ChatGPT - gpt-4.1-mini
• Gemini - 2.5 Flash
• Perplexity - Search tool_call that powers all their models

Each model is analyzed separately via its API (not chat interface), so you can see where they behave similarly — and where they don’t.

What’s the difference between Single Query and Bulk Query modes?

Single Query is for deep dives into one query.
Bulk Queries does the same for 2–5 queries in one go.

The logic and the models behind both modes are the same; they just serve different use cases.

Who should use QueryCat.app?

Anybody interested in building AI SEO (or GEO or AEO) strategy based on real, irrefutable data should use QueryCat.app. That includes SEO freelancers, startup marketers, content strategists and even agencies exploring AI SEO or GEO.

Do I need SEO or AI expertise to use QueryCat.app?

Not really. It’s designed as an intuitive, simple webapp.

If you understand how people ask questions and why sources matter, you are good to go.

Is QueryCat.app free, and are there usage limits?

Free. No limits — as of Dec ‘25, in its beta version.

But every hit costs us across the three LLMs. Still, we plan to keep it free for a month or so. During this period we’ll gather feedback and refine the product, both backend and frontend. So kindly share and spread the word 🙏

Once we understand its demand and usage patterns, we plan to cap daily limits and introduce a paid plan.

But rest assured, it won’t be a ~$200/month/seat deal. Expect something around a tenth of that, because we aim to empower startups and freelancers looking for an affordable and reliable tool for AI SEO strategy.

The goal is to keep it honest, useful, and cheap — not locked behind enterprise pricing.

Does QueryCat.app store my queries or results?

No. Queries are processed to generate results, then discarded. QueryCat.app does not build a database of user searches.

Later, when we introduce a paid option, we shall let users store data on their end.

However, we do encourage users to share non-sensitive query screenshots, CSVs, and experiences to help improve our product.

What features or LLMs are planned next?

More models, better comparisons, and deeper query-level insights are on the roadmap.

However, the underlying pillars will remain the same: honest, useful, and cheap.

Where can I request a feature or suggest a collaboration?

Firstly, you came all this way, so thanks for your interest. If you have feedback, a feature idea, or want to collaborate, you’re welcome to reach out.

Email: mayankbishwas@gmail.com
LinkedIn: @mayankbishwas