I lost an argument at Thanksgiving last year. It wasn’t a debate I was unprepared for; I had my facts ready. The topic was a politician’s recent gaffe, and my uncle was insisting it never happened, a fiction invented by the media. I pulled out my phone and played the video from the Associated Press. The footage was clear. The source was impeccable. The words were undeniable. I looked up, expecting, if not an apology, at least a grudging concession.
He shook his head. “You can’t trust that,” he said, his voice layered with a kind of weary wisdom. “That’s probably one of those deepfakes. They can make anyone say anything now.”
In that moment, the argument was over. Not because I had lost, but because the foundation for a shared reality had crumbled beneath us. My evidence, my proof, was irrelevant. The mere possibility of a fake had become more powerful than the authenticated truth in my hands.
This quiet moment of conversational collapse is not unique to my family. It is a scene playing out in miniature across the country, in courtrooms, on campaign trails, and in newsrooms. The technologies of synthetic media have handed a devastatingly effective tool to those who wish to evade accountability, but the true danger is not the technology itself. It is the corrosive public skepticism the technology creates, something scholars have termed the “liar’s dividend.”
This is the profit reaped when truth becomes too difficult to verify and reality itself is cast as a matter of opinion. The proliferation of AI is merely the latest accelerant in a crisis of trust that began long before, with the decentralization of our media and the weaponization of “fake news.” To defend our democracy’s epistemic foundations, we must understand the behavioral mechanics of this dividend and build a robust, multi-layered defense in our companies, in our institutions, and in ourselves.
From Broadcast to Noise
The scene at the Thanksgiving table would have been unthinkable fifty years ago, not because deepfakes didn’t exist, but because the concept of a shared, verifiable reality was largely taken for granted. In 1976, a Gallup poll found that an astonishing 72% of Americans had a “great deal” or “fair amount” of trust in the mass media. In an era dominated by a few television networks and major newspapers, figures like Walter Cronkite of CBS News, often cited as “the most trusted man in America,” served as powerful institutional gatekeepers. They delivered the news, from the Vietnam War to the moon landing, to a mass audience that consumed the same set of core facts. While Americans certainly disagreed on politics and solutions, they were, for the most part, arguing from a common playbook of reality.
The launches of CNN in 1980 and, more pointedly, Fox News in 1996 began the great fragmentation of the American audience. The business model of news shifted. Instead of broadcasting to the widest possible center, cable channels discovered it was more profitable to “narrowcast” to dedicated, partisan niches. The news became a constant, flowing stream, increasingly supplemented with opinion-as-news to keep audiences engaged and loyal. We began sorting ourselves into different information silos, and for the first time, large segments of the population were no longer operating from the same playbook.
The rise of the blogosphere in the early 2000s was a revolution in disintermediation; suddenly, anyone with a keyboard could be a publisher, reaching a potential audience of millions without the filter of an editor or the need for a printing press. This digital democratization of voice challenged institutional authority and broke open vital stories, but it also flooded the ecosystem with conjecture, conspiracy, and unvetted claims. The professional journalist, once a clear gatekeeper, was now just one voice shouting in a crowded digital marketplace. Discerning signal from noise became a full-time job for the average citizen, a job few had the time or training to do.
The final, decisive blow came when social media became the primary arena for our information lives. Platforms like Facebook and Twitter (now X) did not just accelerate the spread of information; their core algorithms actively shaped what we saw. These systems are designed for one purpose: engagement. And nothing is more engaging than content that triggers strong emotions: outrage, validation, fear, and tribal identity. In this environment, the term “fake news,” which once described literal hoaxes, was brilliantly and cynically weaponized. Around 2016, it was transformed into a political cudgel, used to dismiss any reporting, no matter how credible, that was critical or inconvenient. It gave millions of people a simple, powerful phrase to delegitimize any fact they didn’t like.
And so the ground was perfectly prepared. By 2024, that 72% trust in the media had collapsed to a historic low of 32%. Decades of fragmentation, decentralization, and deliberate weaponization had cultivated a deep, pervasive skepticism in the public. This is the depleted soil in which the liar’s dividend now grows so easily. The dismissal of a real video as a “deepfake” is not a sudden madness; it is the logical, tragic endpoint of this long decline.
Deconstructing the Devaluation of Truth
The depleted soil of public trust provides the perfect strategic opportunity for what legal scholars Bobby Chesney and Danielle Citron termed the “liar’s dividend,” a dynamic that Josh A. Goldstein and Andrew Lohn have since analyzed for the Brennan Center for Justice. The concept is as brilliant as it is corrosive. The dividend is not the benefit a liar gets from a successful deepfake fooling the public; it is the benefit they get from the public’s awareness that deepfakes exist. It is the power to dismiss any real, inconvenient piece of evidence, whether an audio recording, a video, or a photograph, as a sophisticated fake, and to be believed, or at least to inject enough doubt to muddy the waters into inaction. It transforms the very technologies meant to capture reality into tools for denying it.
To understand how this dividend is collected, we have to analyze the strategic toolkit it offers to a bad actor. The first variable is the messenger: who delivers the lie? This choice exists on a spectrum of risk and reward. At one end, a political candidate can make a direct, high-impact denial themselves. This garners maximum attention but also carries the maximum risk of backlash if the lie is definitively proven. To mitigate this, the lie can be delegated to an official proxy, like a campaign manager, who offers a degree of separation. For even greater plausible deniability, the claim can be laundered through an unaffiliated proxy: a friendly pundit, a sympathetic media outlet, or an anonymous online account. This route sacrifices the impact of a personal denial in exchange for near-total insulation from accountability.
The second variable is the message itself: how direct is the lie? The most straightforward tactic is a direct claim: “That video of me is a deepfake.” It is a clear, falsifiable assertion. But a far more insidious and often more effective strategy is the indirect claim, which aims not to debunk a specific piece of evidence but to foster a general “informational uncertainty.” This is the world of vague dismissals (“You just can’t trust what you see these days”), oppositional rallying (“The media will do anything to make us look bad”), and whataboutism. This indirect approach poisons the entire well of information. It persuades citizens that discerning truth from fiction is a hopeless task, encouraging them to retreat into the safety of their pre-existing beliefs and partisan loyalties.
This two-axis framework of messenger and message provides a flexible and powerful toolkit for any individual or group seeking to escape accountability. They can tailor their strategy based on the severity of the incriminating evidence and their risk appetite. By understanding these mechanics, we can see the liar’s dividend for what it is: not just a simple lie, but a calculated, multifaceted assault on the very concept of verifiable evidence. The question then becomes: why are our own minds so susceptible to this assault?
Why We Are So Susceptible
The power of the liar’s dividend is not rooted in the sophistication of AI. It is rooted in the architecture of the human brain, which is not optimized for discerning objective truth, but for survival, social cohesion, and the conservation of mental energy. These ancient priorities make us profoundly vulnerable to modern informational warfare. The liar’s dividend is effective because it offers us an easy, comfortable, and psychologically satisfying escape from difficult realities.
The primary vulnerability it exploits is our intense aversion to cognitive dissonance, the mental stress we feel when holding two conflicting beliefs simultaneously. Imagine you believe your preferred candidate is a fundamentally decent person. Then, a video emerges showing them saying something cruel. This creates a painful dissonance. To resolve it, you can either engage in the difficult process of updating your entire view of the candidate or you can discard the offending piece of evidence. The liar’s dividend provides a perfect tool for the latter. The “deepfake” explanation allows you to resolve the dissonance instantly, not by changing your mind, but by invalidating the evidence. This isn’t just intellectual dishonesty; it’s a form of psychological self-preservation.
This is amplified by the powerful force of motivated reasoning. We do not process information like impartial judges; we process it like lawyers defending a client we are already committed to. Our client is our own set of pre-existing beliefs and tribal loyalties. When confronted with inconvenient evidence, we don’t ask, “Is this true?” We ask, “Must I believe this?” The deepfake defense allows the answer to be a resounding “no.” It feeds our confirmation bias, our natural tendency to embrace information that supports our team and reject information that challenges it. In an age where our social media feeds are algorithmically tuned to create personalized echo chambers, this effect is supercharged. Inconvenient truths feel like hostile intrusions into a reality that has been custom-built to comfort us.
Finally, the liar’s dividend preys on our brain’s fundamental laziness. Our minds operate on a principle of cognitive ease, constantly seeking the simplest possible path to a conclusion to conserve energy. It is metabolically expensive to question our own beliefs, fact-check a dubious claim, or live with uncertainty. It is cheap and easy to accept a simple, all-purpose dismissal. The claim that “you can’t trust anything” is appealing not just because it’s cynical, but because it’s simple. It relieves us of the burdensome responsibility of critical thought. In an era of crushing information overload, offering people a simple way out is the most powerful persuasion tactic of all.
An Arsenal of Imperfect Weapons
Diagnosing an illness is not the same as curing it. To combat the liar’s dividend requires a conscious and sustained counter-offensive, fought not with a single silver bullet, but with an arsenal of tools, habits, and responsibilities. The work belongs to everyone—the individual citizen, the corporations that shape our digital world, and the public institutions that form the bedrock of society.
What Individuals Must Do: The Practice of Discernment
The first line of defense is the individual mind. This requires moving beyond passive “media literacy” to a more active posture of intellectual self-defense. The most crucial habit is to practice emotional skepticism: when a piece of content makes you feel a strong surge of outrage, validation, or fear, pause. That emotional spike is a biological alarm bell, signaling that you are being targeted for manipulation. Before you share, practice lateral reading: open a new browser tab and spend two minutes searching for the claim or the source. See what other, independent outlets are saying. This simple act of “informational hygiene” is the single most powerful thing a citizen can do to stop the spread of lies. Resisting the urge to instantly share unvetted information is no longer just a matter of personal etiquette; it is a fundamental civic duty.
What Companies Must Do: The Responsibility of the Platform
The corporations that host our digital public square have a profound responsibility to architect for trust. For social media platforms, this means deliberately engineering “friction.” Instead of optimizing for seamless, instantaneous sharing, they should introduce pop-ups that ask, “Are you sure you want to share this article you haven’t opened?” or flag content from unverified sources. They must also move beyond half-measures and universally adopt and enforce clear, consistent labels for synthetic media and known disinformation outlets. For the companies developing AI, the work must begin at creation. They must bake robust, open-source watermarking and content provenance standards, such as C2PA, into their models from the ground up. Making these tools proprietary or paywalled is an abdication of responsibility.
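To make the idea of engineered friction concrete, here is a minimal sketch of what a share-time check could look like. It is illustrative only: the names (ShareContext, friction_prompts) and the signals are hypothetical, and a real provenance check would cryptographically validate a signed C2PA manifest rather than read a boolean flag.

```python
from dataclasses import dataclass

@dataclass
class ShareContext:
    """Hypothetical signals a platform might have when a user taps 'share'."""
    user_opened_link: bool      # did the user actually open the article?
    source_is_verified: bool    # does the outlet pass the platform's source checks?
    has_valid_provenance: bool  # e.g., media carries a valid, signed C2PA manifest
    labeled_synthetic: bool     # flagged as AI-generated (self-label or classifier)

def friction_prompts(ctx: ShareContext) -> list[str]:
    """Return the confirmation prompts to show before a share completes.

    An empty list means the share proceeds uninterrupted; each prompt
    is a deliberate speed bump, not a block.
    """
    prompts: list[str] = []
    if not ctx.user_opened_link:
        prompts.append("You haven't opened this article. Share anyway?")
    if not ctx.source_is_verified:
        prompts.append("This source is unverified. Share anyway?")
    if ctx.labeled_synthetic or not ctx.has_valid_provenance:
        prompts.append("This media may be synthetic or lacks provenance data. Share anyway?")
    return prompts

# Example: sharing an unread article from an unverified source triggers two prompts.
ctx = ShareContext(user_opened_link=False, source_is_verified=False,
                   has_valid_provenance=True, labeled_synthetic=False)
for prompt in friction_prompts(ctx):
    print(prompt)
```

The design choice worth noting is that friction informs rather than censors: the user can always click through, but the default path no longer rewards reflexive sharing.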
What Institutions Must Do: Rebuilding the Foundation
Our foundational institutions must undertake the slow, generational work of rebuilding our collective defenses. In education, digital citizenship and critical source analysis cannot be a single lesson; they must be a core competency woven into every subject from middle school onward. Our government and civil society leaders must establish and enforce clear, cross-party norms that create real political costs for candidates who knowingly profit from the liar’s dividend. This could take the form of public pledges, withdrawal of funding, or formal censure. Finally, we must reinvest in the institutions designed to create shared knowledge: public media, libraries, and independent, local journalism. These entities provide a crucial, non-partisan baseline of reality that can serve as an anchor in a sea of digital noise. The liar’s dividend thrives in a vacuum of trusted authorities; we must work to refill that vacuum.
Paying the Dividend
We have traveled a long and troubling road: from the high-water mark of shared facts to the fragmented noise of today, from the cynical mechanics of the liar’s dividend to the deep-seated cognitive biases that make us such willing participants. We have seen that this crisis is not the fault of any single technology, but the result of a decades-long erosion of institutional trust, supercharged by platforms that reward emotion over evidence. And while we have laid out an arsenal of potential weapons for this fight, in our habits, our corporate architectures, and our civic institutions, the choice to wield them remains ours.
I think back to that Thanksgiving table. The stalemate was not about a specific fact, but about the very possibility of facts. My uncle’s casual dismissal of a verifiable video was the final payment of the liar’s dividend, the moment when the exhausting work of discernment is abandoned in favor of the simple comfort of disbelief. His argument was the culmination of a system that has taught us that the truth is too difficult to find, that all sources are equally biased, and that trusting our tribe is a safer bet than trusting our eyes.
That quiet, helpless moment is the future on a small scale. It is a world where accountability becomes impossible because evidence has lost its meaning. It is a democracy where deliberation decays into a shouting match between alternate realities, and power flows not to the most competent or principled, but to the most shameless. This is the ultimate price of the liar’s dividend.
Defending our shared reality is now the central, defining challenge of our era. It is exhausting, difficult, and often thankless work. But the alternative, a world where truth is merely a partisan opinion and every citizen is an island of their own belief, is no world at all. We must choose to pay the cost of discernment, because the cost of disbelief is one we can no longer afford.