We’re trying to build trust in systems that are trained on data rewarded by engagement, not epistemic integrity. In a world where “25,000 subscribers” is treated as proof of intellectual weight, AI models are built inside echo chambers of popularity, not scrutiny. Popularity doesn’t mean accuracy — it means repetition with flair. The training corpus becomes a closed loop: what’s surfaced most is ingested more, then resurfaced again as “fact.” It’s not a knowledge base. It’s a reissue label with good metadata.
How Are AI Overviews and LLM Citations Served? Who Defines the Experience and Expertise in EEAT?
And here comes Nina, an “expert” delivering an opinion piece dressed in structured markup, rich in citations, confident in tone. And yes, her beat is sharp, her rhythm on point. But what if she’s sampling ideas from the 1980s and remixing them for today’s feed? Does that make the content valid, or just algorithmically familiar?
The deeper truth is that trust can be styled and performed: theatrical delivery, an influencer megaphone perched on a pile of Reddit upvotes, forum comments, and engagement. Opinion pieces and false narratives can be statistically simulated, much as office gossip erodes a coworker’s perceived trustworthiness while no one verifies the elephant in the room, because doing so cuts against office politics and socially pressured norms. Team management is built on diplomacy; humans are far better suited to reading the personality nuances of each of their colleagues.
The cadence of truth can be mimicked so well that most people will never question the melody; they’ll adopt the beat and remix it in their own words.
Sometimes it’s not an idea. It’s an echo. The opinion piece below is the result of watching ITVX’s Halt and Catch Fire series in May 2025. I’d been bouncing around a new blog post, trying to figure out a solution that, in reality, will never exist. The Doherty dopamine rush of an instant LLM answer is beguiling: a URL can look perfect when, in reality, it isn’t.
The Doherty Threshold is Less Than Half a Second: 400 milliseconds
It all happens in less than one second. With Large Language Models (I currently use ChatGPT Pro), speed has replaced scrutiny: UX fluency can impersonate fact so well on the front end that you believe your eyes. The Doherty Threshold is the sweet spot for keeping user interfaces snappy and responsive. In UX design terms, anything under 400ms feels instantaneous: your brain registers the feedback before it starts to feel like it’s waiting.
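As a minimal sketch of the threshold itself (the stubbed model call and its timing are illustrative assumptions, not measurements of any real interface):

```python
import time

DOHERTY_THRESHOLD_MS = 400  # responses under this feel instantaneous

def fake_llm_call(prompt: str) -> str:
    """Stand-in for a real model call; the sleep is an invented latency."""
    time.sleep(0.25)  # pretend the first token arrives in 250ms
    return f"Confident-sounding answer to: {prompt}"

start = time.perf_counter()
answer = fake_llm_call("Who defines the Experience in EEAT?")
elapsed_ms = (time.perf_counter() - start) * 1000

# Below ~400ms the reply registers before the brain notices a wait,
# which is precisely when scrutiny is least likely to kick in.
verdict = "instant (scrutiny bypassed)" if elapsed_ms < DOHERTY_THRESHOLD_MS else "a wait (evaluation kicks in)"
print(f"{elapsed_ms:.0f}ms -> feels {verdict}")
```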
Within that 400-millisecond blink, truth can become a UX artefact, a perception, and engagement becomes the price we pay for being deceived so obediently and elegantly.
The people who trained the machine to understand content are left wrestling with a paradox we helped create: when the answer sounds right and arrives fast, who stops to question where it came from?
This is the world beneath the Doherty Threshold: the moment where interaction is so seamless that the illusion of intelligence becomes indistinguishable from understanding. I’ve previously pointed the sameAs property in VideoObject schema at the Wikipedia page of an identically named person, and I failed to fact-check it. Correcting a dozen structured datasets took an extra 90 minutes out of my day, all my own daft fault. The positive out of the negative is that I learn best when I make a mistake, as I rarely repeat the error a second time.
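For context, here is a minimal sketch of the kind of pre-publish nudge that would have slowed me down; the VideoObject, names, and URL are placeholders, and a namesake with an identical name will still pass a mechanical check like this, which is exactly why the final step has to be a human click-through:

```python
import json

# Trimmed VideoObject with a sameAs of the kind described above; all values
# here are invented placeholders.
video_schema = {
    "@context": "https://schema.org",
    "@type": "VideoObject",
    "name": "Interview with Alex Example",
    "creator": {
        "@type": "Person",
        "name": "Alex Example",
        "sameAs": "https://en.wikipedia.org/wiki/Alex_Example_(footballer)",
    },
}

def same_as_slug_matches(person: dict) -> bool:
    """Crude name-vs-slug comparison: a prompt to fact-check, not proof of identity."""
    slug = person["sameAs"].rsplit("/", 1)[-1].lower()
    return all(token in slug for token in person["name"].lower().split())

person = video_schema["creator"]
if same_as_slug_matches(person):
    print("Slug matches the name - now verify it is the SAME person, not a namesake.")
else:
    print("sameAs slug does not even match the name - stop and check.")
print(json.dumps(video_schema, indent=2))
```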
AI systems do not hallucinate because they are broken. They hallucinate because their knowledge is built on engagement-weighted input. This means articles surface in LLM answers and AI citations because they’re optimised, and absolutely not because they’re accurate. Confidence is synthetically applied and authority is algorithmic.
How Do We Raise Awareness of AI/LLM False Citations? A Personal Use Case
This isn’t about fighting AI. It’s about designing friction where it matters. The web doesn’t need faster answers; it needs accountable ones.
We demand lineage. Not just sources. Intent. Context. Editorial motive. Temporal anchors. Volatility signals.
We ask: Where did this come from? Who gains from this being believed? And when was this last proven true?
We stop asking if it sounds right, and start interrogating who benefits from it sounding right.
If AI is remixing records, we need sleeve notes: run-out groove information, ISBN barcodes, and catalogue numbers.
If SEOs are feeding the machine, we need to tag what’s evergreen, what’s promotional, what’s provisional.
And most of all, we need a model of trust that cannot be faked by likes, or by the modern version of Negative SEO: NGO (Negative Generative Optimisation).
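To make those sleeve notes concrete, here is a minimal sketch of what a lineage record for a single claim might look like as data; every field name is an assumption of mine, mapping the questions above onto structure, not an existing standard:

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical lineage record: the "sleeve notes" a cited claim could carry.
@dataclass
class ClaimLineage:
    claim: str
    source_url: str
    editorial_motive: str                 # "evergreen", "promotional", "provisional"
    beneficiary: str                      # who gains from this being believed?
    last_verified: date                   # temporal anchor
    volatility: str                       # "stable", "seasonal", "breaking"
    upstream_sources: list[str] = field(default_factory=list)

    def needs_reverification(self, today: date, max_age_days: int = 365) -> bool:
        """Flag claims whose temporal anchor has gone stale."""
        return (today - self.last_verified).days > max_age_days

record = ClaimLineage(
    claim="The Doherty Threshold is 400ms",
    source_url="https://example.com/ux-research",  # placeholder URL
    editorial_motive="evergreen",
    beneficiary="UX practitioners",
    last_verified=date(2025, 5, 1),
    volatility="stable",
)
print(record.needs_reverification(date(2026, 6, 1)))  # True: over a year stale
```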
Within UX science fundamentals, the Doherty Threshold is a critical concept that gains new significance in an AI-driven landscape. It originally posited that user productivity and satisfaction peak when system response times fall below 400 milliseconds, and the principle has since evolved beyond mere usability metrics. Applied to modern Large Language Models and AI interfaces, it becomes something far more consequential: a cognitive conditioning mechanism that reshapes how we process information.
Addictive Speed Conditioning and Neural Response Patterns
The sub-400ms response window triggers a neurological cascade unlike anything in traditional search interactions.
Users experience a subconscious re-engagement loop before cognitive evaluation begins. This creates a dopaminergic feedback cycle that reinforces query behaviour at a near-automatic level. The interface becomes more than a tool; it becomes an extension of thought processes, cultivating prompt compulsion and instantaneous brand recall.
Consider the distinction: traditional search requires evaluation of multiple results, comparison of sources, and conscious navigation decisions. In contrast, LLM interactions deliver singular, authoritative responses within this critical threshold, bypassing traditional evaluation frameworks entirely. This represents a structural shift in information consumption patterns.
Content strategists know that user expectations have permanently recalibrated around this speed threshold.
Sites loading outside this window suffer not just bounce rates but cognitive abandonment before content evaluation even occurs.
Flow State and Task Compression in LLM Sessions
The relationship between cognitive load and perceived productivity fundamentally transforms once users operate within consistent sub-400ms feedback cycles. Task completion time objectively decreases while subjective productivity perception increases dramatically. The parallel to VTOL acceleration patterns proves apt: hyper-responsive, adaptive, and precision-targeted.
This creates an entirely new optimization paradigm. We witness zero-click Google AI Overviews inside neural interfaces where users simultaneously feel they have completed complex cognitive work while the tool handles the computational heavy lifting. The mental model shifts from “searching for information” to “extending cognitive capacity” through technological augmentation.
Structured data implementation becomes increasingly critical in this environment. For content to penetrate LLM response patterns, it requires multiple reinforcement signals: strong entity recognition, canonical representation across domains, and semantic consistency that aligns with meta-classification frameworks used by underlying models.
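As an illustration of those reinforcement signals (a sketch only: the organisation, URLs, and identifiers below are placeholders), the same canonical identifiers can be repeated in JSON-LD on every surface the entity controls:

```python
import json

# Hypothetical entity markup: identical canonical identifiers on every
# domain the entity controls, so models can disambiguate it consistently.
entity = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Audio Archive",             # placeholder entity
    "url": "https://www.example-archive.com",    # canonical home
    "sameAs": [                                  # reinforcement across surfaces
        "https://en.wikipedia.org/wiki/Example_Audio_Archive",
        "https://www.wikidata.org/wiki/Q00000000",
        "https://www.linkedin.com/company/example-audio-archive",
    ],
    "identifier": "Q00000000",                   # stable machine-readable ID
}

# Emit as JSON-LD; the same block should appear wherever the entity is
# described, preserving semantic consistency between pages and domains.
print(json.dumps(entity, indent=2))
```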
Humans At the Spring: Fighting Guardrails – The ‘Halt and Catch Fire’ Data Parallel
Halt and Catch Fire, a drama that opens in 1983, followed early BIOS engineering and pre-WWW digital ambition, when data systems operated under controlled, deterministic frameworks. Today’s knowledge infrastructure operates on fundamentally different principles: historical systems were fed data through controlled, clinical processes, with deterministic outputs reflecting direct programming.
Contemporary systems ingest data through scraping, summaries, and regeneration processes, producing generative, confidence-weighted outputs rather than factual certainties.
Despite the automated nature of generative AI responses, human operators maintain control through several structural frameworks:
- Personal prompt engineering styles that develop through repeated interaction
- System prompt constraints established by platform developers (a payload sketch follows this list)
- Behavioral reinforcement patterns that shape future queries
- Editorial vigilance, ever more necessary as global information infrastructure creeps into disarray under misinformation and propaganda
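On the second point, here is a hypothetical payload sketch of a system-prompt constraint; the structure loosely mirrors common chat APIs but is not tied to any specific vendor, and the model name is a placeholder:

```python
# Hypothetical chat request showing a citation-forcing system prompt.
request = {
    "model": "placeholder-model",
    "messages": [
        {
            "role": "system",
            "content": (
                "Cite a verifiable URL for every factual claim. "
                "If no citation exists, say so instead of answering."
            ),
        },
        {"role": "user", "content": "When was the Doherty Threshold proposed?"},
    ],
}

print(request["messages"][0]["content"])
```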
When ChatGPT-3-like response speeds occurred during deep research sessions, I observed a fascinating epistemic shift in my own behaviour. I would increasingly defer to GPT-4o for faster replies and outputs. I was subject to the Doherty effect soothing my compulsion and, as mentioned previously, I overlooked citation checking while at the dopamine height of a rapid response.
It’s easy to feel we should prioritise speed over verified, authoritative knowledge resources, treating the model as a destination rather than a search tool. The critical distinction: we begin accepting outputs as truth unless specific cognitive guardrails counteract the tendency.
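A minimal sketch of such a guardrail, with the verification logic deliberately left as a placeholder predicate (there is no real checking API here): the point is the forced pause before acceptance, not the mechanism.

```python
from dataclasses import dataclass

@dataclass
class LLMAnswer:
    text: str
    cited_urls: list[str]

def url_resolves_and_supports(url: str, claim: str) -> bool:
    """Placeholder predicate: in practice this means opening the URL and reading
    it, the exact step the sub-400ms dopamine hit tempts us to skip."""
    return False  # assume unverified until a human has actually looked

def accept(answer: LLMAnswer) -> bool:
    # Cognitive guardrail: an answer with no checked citation is a draft,
    # not a fact, however confident it sounds.
    return any(url_resolves_and_supports(u, answer.text) for u in answer.cited_urls)

reply = LLMAnswer(
    text="The Doherty Threshold is 400ms.",
    cited_urls=["https://example.com/citation"],  # placeholder citation
)
print("accepted" if accept(reply) else "held for manual citation check")
```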
This represents a profound shift in truth lineage assessment. Content authority now depends less on traditional signals like domain authority or backlink profiles and more on semantic confidence signaling and entity disambiguation clarity.
This transformation demands a corresponding evolution in content strategy. The structured data perimeter becomes the frontline defense against semantic drift and entity confusion. Organisations must establish robust canonical representation across domains to maintain entity integrity as information passes through multiple AI processing layers.
SEO Strategy in the Post-Entity Era
This isn’t an exercise in digital retrospection but a call for strategic recalibration. Effective search visibility now requires:
- Entity identity definition that maintains consistency across domains and platforms (see the audit sketch after this list)
- Semantic clarity implementation through advanced structured data frameworks
- Knowledge panel alignment ensuring canonical representation
- Topical authority establishment within AI training datasets
- Structured data perimeter hardening against semantic drift
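As a sketch of what that cross-domain consistency check can look like in practice (every URL and identifier below is an invented placeholder): gather the sameAs identifiers each property publishes for the entity and flag any page that has drifted from the canon.

```python
# Hypothetical audit: the identifiers each property currently publishes.
published = {
    "https://www.example-archive.com/about": {
        "https://www.wikidata.org/wiki/Q00000000",
        "https://en.wikipedia.org/wiki/Example_Audio_Archive",
    },
    "https://blog.example-archive.com/author": {
        "https://www.wikidata.org/wiki/Q00000000",
    },
    "https://shop.example-archive.com": set(),  # markup missing entirely
}

# Treat the union of everything published as the canonical identifier set.
canonical = set().union(*published.values())

for page, ids in published.items():
    missing = canonical - ids
    if missing:
        print(f"{page} is missing {len(missing)} canonical identifier(s):")
        for url in sorted(missing):
            print(f"  {url}")
```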
But sometimes none of it holds. This is a single-page website I was served as an LLM citation this week:
We face a profound reality shift: content strategists are now the primary accuracy layer in AI knowledge systems. We function as human guardrails against model hallucination, establishing the epistemic infrastructure upon which AI systems build their understanding.
Truth Custody in Digital Authority and Modern Negative SEO: NGO (Negative Generative Optimisation)
Google’s $60 million licensing deal with Reddit to train its AI models, struck while Reddit’s visibility in 2024 search results simultaneously fell by 85%, illustrates the fundamental contradiction in modern knowledge systems. Similarly, Quora saw a 99% reduction in visibility despite providing substantial training data for language models. In 2025, SEMrush tracked another dip in April, yet a more positive outlook for Reddit, demonstrating the continued volatility of Google’s signals.
This has given rise to Negative Generative Optimisation: the deliberate injection of controversial, high-engagement content into AI training systems by coordinated entities. These systems cannot differentiate between strategic manipulation and sincere information sharing, creating a vulnerability in the knowledge ecosystem.
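To see why engagement weighting invites this, here is a toy scoring sketch; the documents, scores, and weights are all invented for illustration. Swap the weights and the coordinated post outranks the verified one.

```python
# Two scoring schemes over the same documents; all values are invented.
docs = [
    {"title": "Coordinated hot take", "engagement": 0.95, "provenance": 0.10},
    {"title": "Verified explainer",   "engagement": 0.30, "provenance": 0.90},
]

def score(doc: dict, w_engagement: float, w_provenance: float) -> float:
    return w_engagement * doc["engagement"] + w_provenance * doc["provenance"]

schemes = {"engagement-weighted": (0.9, 0.1), "provenance-weighted": (0.1, 0.9)}
for name, (we, wp) in schemes.items():
    top = max(docs, key=lambda d: score(d, we, wp))
    print(f"{name}: surfaces '{top['title']}'")
```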
The custody of truth has become the central challenge in digital authority. Google’s I/O summit in May 2025 will demonstrate how their MUM model doesn’t really have an algorithm update as such; it’s a constantly evolving machine, learning with every keyboard tap and voice note.
When operating within the Doherty Threshold in AI interfaces, we’re not just building content strategies but constructing the foundational knowledge architecture upon which automated systems will build their understanding of reality. This responsibility transcends traditional SEO frameworks, requiring a fundamental reconsideration of how we approach digital authority in an era where the line between human and machine cognition grows increasingly permeable.