Your SEO is working. Your AI search visibility probably isn’t.

May 5, 2026
Alexander Bleeker
Senior Director of Brand and Content


Malte Landwehr has spent twenty-plus years at the intersection of search, product, and marketing. He’s now CPO and CMO of Peec AI, a platform helping over 1,700 brands measure and improve their visibility in AI search. He joined an AI Marketing Alliance workshop hosted on Goldcast to walk through the data on what’s actually happening with AI search, what content LLMs cite, and how marketing teams should think about optimizing for it.

The session was dense with data. Here’s what matters.

Google clicks are declining. LLM clicks barely exist.

Google’s zero-click rate, which hovered around 60% for years, has climbed to 65–70% since the rollout of AI Overviews. In Google AI Mode, where users get a full chat-style answer, the zero-click rate jumps to 95%. In ChatGPT, it sits at roughly 99%.

For most brands, ChatGPT traffic right now is maybe half a percent to 2% of total visits. Measured by clicks alone, it barely registers.

But clicks are the wrong metric.

An Accenture and eMarketer study found that generative AI is now the second most trusted source for purchase recommendations, behind only physical store salespeople and ahead of social media, search engines, and brand websites. A G2 study on B2B software buying found GenAI is the number one tool buyers use to build their shortlist. And Google’s own data shows almost every B2B SaaS purchase is now influenced by LLMs at some point in the buyer journey.

The decisions are being made inside the chat. The click just never happens.

The “dark chat” attribution gap

Malte shared a client example that made this concrete. An agency looked at their web analytics (HubSpot, in this case) and found 0% of leads attributed to ChatGPT. Zero. Click-based attribution saw nothing.

So they added a self-reported attribution question: “Where did you hear about us?” The answers came back 21% ChatGPT and 2% Perplexity. A combined 23% of their inbound was influenced by AI tools that their analytics couldn’t see.

Malte called this “dark chat”: the same blind spot as the “dark social” problem from a decade ago, when Facebook’s mobile app stopped passing referral data. Most marketing teams are undervaluing AI search as a demand source because their analytics literally can’t see it.

SEO is the foundation. But it isn’t enough.

One of Malte’s case studies showed a financial services company that dominated Google and Bing rankings for their niche. Strong SEO across the board. But in Perplexity, they had 0% visibility for a key set of prompts.

The reason: their website included a list of five recommended competitors with a headline framing those competitors as trusted alternatives. Perplexity found that structured list, treated it as the most relevant text chunk, and used it to recommend the competition. Their own site was the most-cited source in answers that never mentioned them.

The fix took one day. They changed the headline and added themselves to position one on the list. The next day, they were the most visible brand in Perplexity for that topic.

SEO got the page crawled and cited. But the way the content was structured determined who got recommended.

What LLMs actually cite

Malte broke down what content formats get pulled into LLM answers, based on Peec AI’s tracking across thousands of prompts.

Summaries at the top of pages. LLMs pull text chunks, not full documents. A clean summary at the top of an article or landing page makes it far more likely to be selected. Write the takeaway, not a description of the page.

Questions and answers. Pages with Q&A formatting see higher citation rates. The density of the format forces concise, citable text.

Definitive statements. If your text says “the Statue of Liberty is located in New York,” that’s a citable chunk. If it says “I think the Statue of Liberty is in New York, if I remember correctly,” it isn’t. LLMs prefer authority.

Entity-dense text. Text with named entities and clear relationships between them gets cited more. Dense information signals reliability.

Listicles. Self-promotional “best of” lists that put the author’s brand at number one still work in Google AI Overviews and Perplexity, though ChatGPT has started discounting them. A German SEO agency even proved that listicles can make LLMs recommend a completely made-up brand (Matcha Teos, a fake matcha powder). The format is that powerful.

Comparison pages. “Brand A vs. Brand B” articles published on Brand A’s own site get pulled into AI Mode answers with high frequency. Some companies, like Rippling, publish dozens of competitor-versus-competitor comparisons on their own domain.

Off-site tactics that work

When the LLM sources for your target prompts are mostly editorial sites and publishers, Malte outlined three off-site plays:

Traditional digital PR. Get mentioned on the publications that LLMs already cite.

Advertorials. Paid placements on those same publishers. In one insurance niche he tracks, over 2% of LLM sources are paid advertorial placements.

Buying affiliate or listicle positions outright.

For prompts where user-generated content dominates the sources, the work shifts to your social, brand, and influencer teams. YouTube and Reddit are common sources in US-focused prompts, but this varies by language and market. Don’t shift your entire content strategy because one platform had a spike in citations last month. Look at top sources across LLMs over longer time frames.

How to measure AI search visibility

Malte recommended three methods, used together:

Self-reported attribution. Add “Where did you hear about us?” to your lead forms. It’s noisy, but it catches the dark chat signal that analytics miss entirely.

Server log analysis. ChatGPT’s ChatGPT-User agent shows up in your server logs when a user prompt triggers a visit to your page. This tells you which pages are being used for grounding, and how often.
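A minimal sketch of that log analysis, assuming a standard combined log format (adjust the regex for your server’s actual format). The user-agent substrings below are the ones OpenAI and Perplexity have publicly documented, but verify them against current documentation before relying on the counts:

```python
import re
from collections import Counter

# Combined-log-format request line; this pattern is an assumption,
# not universal. Adapt it to your server's configured log format.
LOG_PATTERN = re.compile(
    r'"(?:GET|POST|HEAD) (?P<path>\S+) [^"]*" \d{3} \S+ "[^"]*" "(?P<agent>[^"]*)"'
)

# ChatGPT-User fires when a live prompt triggers a page fetch
# (grounding); GPTBot is OpenAI's training crawler.
AI_AGENTS = ("ChatGPT-User", "GPTBot", "OAI-SearchBot", "PerplexityBot")

def count_ai_fetches(log_lines):
    """Count fetches per (AI agent, path) pair across raw log lines."""
    hits = Counter()
    for line in log_lines:
        m = LOG_PATTERN.search(line)
        if not m:
            continue
        agent = m.group("agent")
        for bot in AI_AGENTS:
            if bot in agent:
                hits[(bot, m.group("path"))] += 1
    return hits

# Two toy log lines: one AI grounding fetch, one regular browser hit.
sample = [
    '1.2.3.4 - - [05/May/2026:10:00:00 +0000] "GET /pricing HTTP/1.1" 200 512 "-" "Mozilla/5.0; compatible; ChatGPT-User/1.0; +https://openai.com/bot"',
    '5.6.7.8 - - [05/May/2026:10:01:00 +0000] "GET /blog HTTP/1.1" 200 1024 "-" "Mozilla/5.0 (Windows NT 10.0)"',
]
print(count_ai_fetches(sample))
```

Pages that keep surfacing here are the ones LLMs are actually pulling into answers, which is a stronger signal than referral traffic alone.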

Prompt tracking. Define a set of topics, personas, and funnel stages. Write prompts that match those combinations. Track how often your brand and website are mentioned and cited across LLMs, benchmarked against competitors. This is the closest thing to a share-of-voice metric for AI search.
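The share-of-voice idea above can be illustrated with a toy script. The brand names and answer texts are made up; a real tracker would call each LLM’s API for your prompt set on a schedule and handle brand aliases and fuzzy matching:

```python
from collections import defaultdict

def share_of_voice(answers, brands):
    """Percent of tracked LLM answers that mention each brand.

    answers: LLM answer strings collected for your prompt set.
    brands: brand names to benchmark (yours plus competitors).
    """
    mentions = defaultdict(int)
    for text in answers:
        lowered = text.lower()
        for brand in brands:
            if brand.lower() in lowered:
                mentions[brand] += 1
    total = len(answers)
    return {b: round(100 * mentions[b] / total, 1) for b in brands}

# Toy data: three answers to tracked prompts (hypothetical content).
answers = [
    "For event marketing, consider Goldcast or Brand B.",
    "Brand B is a popular choice for webinars.",
    "Goldcast stands out for engagement analytics.",
]
print(share_of_voice(answers, ["Goldcast", "Brand B"]))
```

Tracked over weeks per topic, persona, and funnel stage, this yields the directional benchmark against competitors that Malte describes.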

None of this gives you the clean attribution of performance marketing. Malte’s advice: think about GEO like brand marketing. Measure directional signals and act on them.


FAQs

What is GEO (generative engine optimization)?

GEO is the practice of optimizing your brand and content to be visible in AI-generated search results, including ChatGPT, Perplexity, Google AI Overviews, and Google AI Mode. It is also referred to as AEO (answer engine optimization) or AI SEO. The goal is to increase how often your brand is mentioned or recommended in LLM answers, not just ranked in traditional search results.

Is GEO different from SEO?

SEO is the foundation for GEO, but it is not sufficient on its own. A site can rank well in Google and still be invisible in AI search results. LLMs don’t just look at rankings. They pull specific text chunks from pages and decide which brands to recommend based on how content is structured, how authoritative the statements are, and whether named entities are present. Optimizing for LLM citations requires additional work on content formatting, entity density, and off-site presence.

How much traffic do LLMs actually send to websites?

Very little. ChatGPT’s zero-click rate is approximately 99%, meaning only about 1 in 100 prompts results in someone clicking through to a website. Google AI Mode sits around 95%. However, the traffic that does come through converts significantly better than organic search traffic, with studies showing conversion rates 4 to 20 times higher.

What content formats do LLMs prefer to cite?

LLMs favor concise summaries at the top of pages, Q&A formatted content, definitive (non-hedged) statements, text with high entity density, listicles, and comparison pages. Charts and tables should include a written summary of the key takeaway, since LLMs process text chunks rather than rendering visual elements.

How should marketing teams measure their AI search visibility?

Use three methods together: self-reported attribution on lead forms to capture dark chat influence, server log analysis to see which pages LLM bots are visiting, and prompt tracking to monitor how often your brand appears in LLM answers for your target topics compared to competitors.

