Retrieval is the first gate, and most content never passes it
In retrieval-augmented answer flows, the architecture behind ChatGPT with browsing, Perplexity, and Google AI Overviews, content influence starts with source eligibility. If your pages are not retrieved as context for a specific prompt, they do not participate in synthesis at all. You could have the best comparison page in your category; if the retrieval system never surfaces it, it might as well not exist.
Why do models prefer certain sources? It comes down to a handful of signals: domain authority and established trust, publication recency and freshness signals, corroboration across independent sources (multiple sites saying the same thing), content clarity and structure (can the model quickly extract a relevant answer?), and topical relevance to the specific query context. These are not mysterious. They are engineering constraints, and you can optimize for them.
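To make the "engineering constraints" framing concrete, here is a minimal sketch of how those five signals might combine into a single retrieval score. The field names, weights, and linear combination are all illustrative assumptions, not any real ranker: production systems use learned ranking models, but the point stands that each input is a measurable quantity you can improve.

```python
from dataclasses import dataclass

@dataclass
class SourceSignals:
    """Hypothetical per-page signals, each normalized to 0..1."""
    domain_authority: float  # established trust of the domain
    freshness: float         # decays with time since last update
    corroboration: float     # fraction of independent sources agreeing
    clarity: float           # how easily a relevant answer can be extracted
    relevance: float         # topical match to the specific query

def retrieval_score(s: SourceSignals) -> float:
    """Toy linear combination (weights are assumptions and sum to 1.0)."""
    return (0.25 * s.domain_authority
            + 0.15 * s.freshness
            + 0.20 * s.corroboration
            + 0.15 * s.clarity
            + 0.25 * s.relevance)
```

A page strong on every signal scores near 1.0; a page that is authoritative but stale or uncorroborated loses ground it cannot recover through authority alone, which is why optimizing one signal in isolation rarely moves retrieval.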
Treat retrieval as a strategic stage with its own optimization logic. Before you spend a sprint rewriting your comparison page copy, verify that the page is actually being retrieved for comparison prompts. If it is not, copy improvements are irrelevant.
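One hedged sketch of that verification step: run your comparison prompts through the AI tools, collect the URLs each answer cites (manually or via whatever export your monitoring tool provides; no specific API is assumed here), and check whether your domain appears among them.

```python
from urllib.parse import urlparse

def cited_domains(citation_urls: list[str]) -> set[str]:
    """Normalize cited URLs down to bare hostnames (naive: strips only 'www.')."""
    return {urlparse(u).netloc.removeprefix("www.") for u in citation_urls}

def is_retrieved(your_domain: str, citation_urls: list[str]) -> bool:
    """True if your domain appeared among the sources an AI answer cited."""
    return your_domain in cited_domains(citation_urls)
```

If `is_retrieved` comes back false across your target prompts, the work is eligibility (authority, freshness, corroboration), not copy. Only once the page shows up in citations does rewriting the copy start to pay off.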