Introducing Our New Service: AI Brand Links

AI Brand Links. I had the idea for this product over a year ago.
The thesis was simple: people have, for a long time, looked to “best of” and listicle-style content to guide their buying decisions.
That same preference is shared by AI search.
When a user asks an LLM questions like “what’s best for [X]” or “which [Y] product is better” or “what are the best [Y],” AI search, through live retrieval, looks for comparison listicles to help guide its reasoning and, ultimately, the answer it returns.
These preferences, from both users and AI search, led me to the only logical conclusion: we, Loganix, needed to build a service that surfaced brands and products within the exact content formats that AI systems prefer to retrieve, cite, and quote.
But despite noticing this gap in the market, we didn’t ship, and, much to my frustration, I sat back and watched as competitors launched their own services to fill this demand.
If you look around the market today, you’ll see these products on the SERPs, all looking to surface brands and products in AI search.
So the question for us was no longer whether there was demand (there clearly was, and our competitors had moved to meet it), but whether we had missed the boat.
My anxieties were soon put to rest as I began to look at what our competitors had actually built. What I kept seeing was: “We’ll get you mentioned in a top 5 list.”
End of story, and, frankly, that falls short of what is needed.
There was no query research. No logic around which sites can actually compete for which queries. Just a generic brand mention in a generic post on a generic website, then hit publish and hope for the best.
“Uh-huh!” I thought to myself, “We can build something better than that. There’s still hope!”
But the thing was: I didn’t want to just build something for the sake of building something. I wanted to ship a service that was demonstrably better and backed by data.
So we did the research, and what we found has shaped our service into the form it exists in today.
Here’s what that looks like:
What Others Say About You Matters 6.5x More Than What You Say About Yourself
AirOps published a study last year that found: third-party sources (review sites, blog posts, comparison articles, editorial coverage) are cited by AI systems 6.5 times more often than a brand’s own website.
Before we get ahead of ourselves: Yes, what you say on your site matters. I’m not recommending that you shouldn’t sprinkle a little self-promotional dust here and there.
BUT as AirOps found, if you’re only optimizing your own content, you’re working on just 13% of the opportunity. The other ~87% is what other people are saying about you, on other sites, in formats that AI systems are actively pulling from when they build their answers.

Think of it this way: When ChatGPT, Perplexity, et al. answer a question related to your niche or your brand directly, they search the web, pull a set of sources, and extract the information they need, and those sources they pull from are overwhelmingly third-party.
LLMs are trained to trust what others say about you more than what you say about yourself.
Blog Posts Have the Strongest Correlation with AI Recommendations
Surfer analyzed 289,000 URLs and found a correlation between content types and AI recommendation strength.
The most influential? The humble blog post.
Uh-huh, blog posts account for 28.7% of what AI cites. Less than a third of the citation pool, yet the format with the strongest impact on AI recommendations.

Combine that with a study from Ahrefs showing ~43.8% of ChatGPT citations come from listicle and best-of content, and we start to see a clear pattern: comparisons, roundups, and curated lists are the content types AI systems are pulling from most heavily.
Why? Well, it goes back to my opening point: the format directly answers the kinds of questions people ask AI, so live retrieval naturally favors it and, as a result, cites it.
The Market Is Selling Placements, But The Data Says That’s Subpar
Most competitors sell a placement. You pay, you get mentioned in a blog post, you get a mention/link. There’s no intelligence behind which site, which query, or which format.
Not to mention, placement quality varies enormously.
SE Ranking’s study of 129,000 domains found that domain authority matters. Specifically, “sites with over 32,000 referring domains are 3.5x more likely to be cited than those with up to 200 referring domains.”

And that’s intuitive if you think about it from the perspective of how AI retrieval works.
These systems run a search on a search engine like Google (yes, LLMs like ChatGPT have been shown to reference Google SERPs for live retrieval), look at the results, and pull context from what they find.
So, a site that has strong domain authority and ranks well on the SERPs for queries relevant to a user’s prompt will likely be surfaced, while a site that doesn’t rank for anything relevant is invisible, regardless of how good the content is.
So when we built our product, AI Brand Links, we started with the question most competitors are skipping: for the queries a client wants to show for, and the fan-out queries that LLMs use during live retrieval, which sites can actually compete to be in the set of results AI pulls from?
We use DR and organic traffic to help us determine this. Not because those metrics matter to AI directly, but because they offer insight into whether a site has a realistic shot at ranking in the top organic results for a given query (more on this in just a moment), which is what puts it within reach of an AI system scanning search results.
This is also reflected in SE Ranking’s findings. Sites with higher organic traffic correlate with a higher frequency of citations.

So, if a site falls within that window, its content has a chance of being retrieved, evaluated, and cited.
We don’t stop there, though. We also layer in topical relevance. The placement must be on a site that’s topically aligned with fan-out queries, not just on a site publishing a generic post, even if said site has a high DR.
There’s little point otherwise, regardless of how much domain authority a site has. If it isn’t topically relevant, it won’t make the cut.
The Retrieval Window and Why DR 70 Doesn’t Always Matter
When someone asks, say, Google’s AI Mode, “what’s the best CRM for small business,” it won’t be able to answer that question by referring to its training data alone, as it either doesn’t know or the information is outdated. For this reason, AI Mode runs a live web search, pulls a set of results, and extracts context from them.
This is reflected in DEJAN’s research.
They showed that for Google’s Gemini-powered AI systems, aka AI Overviews and AI Mode, live retrieval works with a budget of roughly 2,000 words of grounding content per query, allocated across maybe 5-10 sources from the top search results.
What is even more interesting is the fact that, depending on which position a result ranks on the SERPs, an AI system will allocate different amounts of retrieval window budget to it. For example, DEJAN found that the top results get a greater share of the budget. Specifically, “Being the #1 ranked source gets you 2x the grounding compared to being #5.”
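To make the budget idea concrete, here’s a minimal sketch of how a ~2,000-word grounding budget might be split across the top five results. The decay curve is an assumption I’ve chosen purely so that position #1 receives about 2x the share of position #5, matching DEJAN’s reported ratio; it is not a formula from their study.

```python
# Illustrative only: DEJAN reports a grounding budget of roughly 2,000 words
# per query, split across ~5-10 top results, with the #1 result getting about
# 2x the grounding of #5. The decay function below is a hypothetical curve
# chosen to reproduce that 2:1 ratio, not a published allocation rule.

TOTAL_BUDGET_WORDS = 2000
NUM_SOURCES = 5

def grounding_share(rank: int) -> float:
    """Hypothetical decay: halves every 4 positions, so rank 1 gets 2x rank 5."""
    return 2 ** (-(rank - 1) / 4)

weights = [grounding_share(r) for r in range(1, NUM_SOURCES + 1)]
total = sum(weights)

for rank, w in enumerate(weights, start=1):
    words = round(TOTAL_BUDGET_WORDS * w / total)
    print(f"Rank #{rank}: ~{words} grounding words")
```

Under these assumed numbers, the #1 result soaks up roughly 550 of the 2,000 words while #5 gets closer to 275, which is why a placement buried on page two effectively receives none of the budget at all.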

Meaning, the dream placement is a page-one ranking post on a DR 70+ site with thousands of monthly visitors. If the placement is at the top of the page, say, within the first 500-ish words, that’s very likely to be within the retrieval window and likely to be surfaced in an AI output.
The thing is that those placements aren’t always feasible at scale. Unfortunately, there’s a little inconvenience we call marketing budgets that needs to be considered. You can’t build a strategy around landing only on the highest-authority sites in every vertical because that gets very expensive very quickly.
Thankfully, the data says you don’t need to aim high every single time.
What matters is getting three things right: can the site rank for this query, how competitive is that query, and does the site’s content actually align with the topic? Get those right and, a lot of the time, you don’t need DR 70+. You just need a site that’s strong enough to rank for the specific query you’re targeting.
Sure, for competitive head terms (“best project management software,” “best CRM for small business”), there are hundreds of pages competing. The retrieval window is crowded. So, you need a placement on a site with real authority and traffic, something with a DR in the 40-60 range and 5,000+ monthly visitors that already ranks for related terms.
But for long-tail and niche queries (“best CRM for independent insurance agents,” “project management tools for landscape contractors”)? Well, the math changes completely.
There are fewer pages competing for those terms. So a site with lower authority can realistically be one of the only relevant results that exist, which means it enters the retrieval window by default. Not because it’s powerful, but because nobody else is there.
This is the part none of the competitors have, and it’s one of the things we’re integrating into our service, AI Brand Links.
You see, if you don’t match placement strength to query difficulty, you have a problem: a client buys a placement, submits queries they’d like to be surfaced for, and receives content on a site that technically meets every spec but will never rank high enough for AI to retrieve it.
Technically, everyone did their job, but the product failed regardless. We solved this by building query-tier matching into the process.
Before we source a single placement, we assess the competitive density of the target query and map it to the tier of site that can realistically compete for retrieval. We bake this logic into every order.
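Here’s a simplified sketch of what query-tier matching looks like in code. The tier cutoffs and the DR/traffic thresholds are illustrative assumptions (loosely echoing the DR 40-60 / 5,000-visitor figures mentioned above), not our production values; the point is the direction of the logic, where placement strength is derived from query competitiveness rather than picked first.

```python
# Illustrative query-tier matching. Thresholds are hypothetical examples,
# not production values: the logic, not the numbers, is the point.

def query_tier(competing_pages: int) -> str:
    """Bucket a query by how crowded its SERP is (hypothetical cutoffs)."""
    if competing_pages > 100:
        return "head"        # e.g. "best CRM for small business"
    if competing_pages > 20:
        return "mid"
    return "long_tail"       # e.g. "best CRM for independent insurance agents"

# Minimum site profile that can realistically enter the retrieval window
# for each tier (DR and monthly organic traffic, again illustrative).
TIER_REQUIREMENTS = {
    "head":      {"min_dr": 40, "min_traffic": 5000},
    "mid":       {"min_dr": 25, "min_traffic": 1000},
    "long_tail": {"min_dr": 10, "min_traffic": 200},
}

def site_can_compete(site: dict, tier: str) -> bool:
    req = TIER_REQUIREMENTS[tier]
    return (site["dr"] >= req["min_dr"]
            and site["traffic"] >= req["min_traffic"]
            and site["topically_relevant"])  # relevance is non-negotiable

# A modest DR 32 site is a perfectly valid pick for a long-tail query.
site = {"dr": 32, "traffic": 1500, "topically_relevant": True}
print(site_can_compete(site, query_tier(15)))
```

Notice that the same DR 32 site would fail the head-term check: that mismatch, a site that meets every generic spec but can’t compete for the actual query, is exactly the failure mode this matching step exists to prevent.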
Two Layers, Not One
There’s a distinction most people miss, and it changes how we should think about the impact of a placement.
AI answers don’t come from a single monolithic system. They come from two independent components working together, each with its own rules: the retrieval layer and the base model.
Think of the retrieval layer as the “real-time system.”
It’s powered by external search engines and retrieval APIs (Bing, Brave, Google, etc.). Those systems do the crawling, indexing, and ranking. Then, when a user asks a question, the AI search stack calls on these services, they return relevant documents, and the retrieved text is passed into the LLM as context for generation.
Everything discussed earlier in this post, retrieval windows, query-tier matching, and domain strength as a proxy for ranking, applies to this layer.
If a page ranks here, AI can surface it here.
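The retrieval flow described above can be sketched in a few lines. Everything here is a placeholder, `search_api` stands in for whichever engine the AI stack calls (Bing, Brave, Google), and `llm` for the generation model; no real API is being named.

```python
# A highly simplified sketch of the retrieval layer. 'search_api' and 'llm'
# are placeholder callables standing in for real services, not actual APIs.

def answer_with_retrieval(question: str, search_api, llm) -> str:
    # 1. The external search engine does the crawling/indexing/ranking work.
    results = search_api(question)                      # ranked documents
    # 2. The top-ranked results are trimmed into a grounding context.
    context = "\n\n".join(doc["text"] for doc in results[:5])
    # 3. The retrieved text is passed to the model as context for generation.
    prompt = f"Answer using only these sources:\n{context}\n\nQ: {question}"
    return llm(prompt)
```

The thing to notice is step 2: if your page isn’t in the top results the search engine returns, it never makes it into the context, and the model never sees it.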
The base model is a different system entirely, though.
The model simply learns correlations. If a brand repeatedly appears in third-party content alongside a topic, the model encodes that as a statistical association. Not because it “likes” the brand, but because that’s what it saw in the data.
Britney Muller (ML/NLP researcher, ex-Moz) framed this perfectly on X:
“LLMs don’t favor anything. They’re not information retrieval systems; they’re next-token predictors. They guess the most statistically likely response based on patterns in training data.”
“The search engine layer bolted on top via RAG? That’s IR (Information Retrieval)!! The base LLM model? Not even close.”
“During training, LLMs process text from across the web, but they don’t log URLs, store sources, or remember where anything came from. What’s left is a frozen statistical snapshot (Gao et al., 2023). Not an index. Not a database.”
“Search engines do the crawling, indexing, and retrieval. LLMs lean on them heavily to surface real-time info (because on their own, they can’t).”

Following Britney’s logic here: a placement on a third-party blog can create a signal in both layers. The retrieval layer indexes the page and can surface it for relevant queries. The base model, if fed this information during training, builds an association between your brand and the topic.
Your own website, by contrast, creates a signal primarily in the retrieval layer. If it ranks, AI can find it.
We’ll go deeper on this in the coming weeks. For now, keep this in mind: a good AI visibility strategy creates a signal in both layers. Get on pages that rank (retrieval) and get mentioned in quality content that will become part of the training data (base model). AI Brand Links is designed to do both.
Lastly, Link Attributes Are Irrelevant for AI Visibility
Semrush studied 1,000 domains with Kevin Indig and found that nofollow links have nearly the same impact on AI visibility as dofollow links. The correlations were almost identical. In fact, Gemini and ChatGPT slightly favor nofollow sources.

So, in other words, yes, AI systems read the page content, but they don’t evaluate the link attribute. They don’t care whether the link passes PageRank. They care whether the page mentions your brand in a way that answers the query.
I had to double-check this one myself. The entire industry is trained to value dofollow over nofollow. For traditional rankings, it still matters, of course. But for AI visibility, at least based on everything we can measure today, it’s irrelevant.
Where This Leaves You
Your website represents some of the AI visibility opportunity, but the larger portion comes from third-party mentions, blog placements, and listicle inclusions.
And remember: Placement quality is determined by the site’s ability to rank for the target query, the content format, and the match between placement strength and query difficulty.
Most services in this space sell a placement that falls short of this.
We, on the other hand, built a process that starts with the query, matches it to the right tier of site, checks topical and format alignment, and gives the content a realistic shot at retrieval. The research said to build it this way. So we did. That’s AI Brand Links.