Module VIII·Article II·~1 min read

Deepfakes and the Crisis of Rhetorical Trust

The Future of Rhetoric: AI, Deepfakes, and Rhetorical Education


When Seeing Ceases to Be Believing

"Seeing is believing" is a core Western epistemological presumption: video evidence is treated as the most convincing kind. Deepfakes attack this presumption directly: it is now possible to create a convincing video in which a real person appears to say something they never actually said.

Deepfake technologies, built on generative adversarial networks (GANs), allow faces to be superimposed onto other bodies, voices to be synthesized, and fully synthetic videos to be generated. The quality is improving rapidly: many deepfakes produced since 2024 are indistinguishable from genuine footage without specialized forensic analysis.
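The adversarial training behind GANs can be summarized by the standard minimax objective (Goodfellow et al., 2014): a generator G learns to produce fakes that a discriminator D cannot tell apart from real data, while D simultaneously learns to tell them apart:

```latex
\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}\big[\log D(x)\big] \;+\; \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]
```

As each side improves, the other is forced to improve in turn, which is why the realism of the generated output keeps rising.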

A key rhetorical consequence is what legal scholars Bobby Chesney and Danielle Citron call the "liar's dividend": because fakes are possible, any genuine video of compromising behavior can now be dismissed as a deepfake, expanding plausible deniability. This is the "paradox of evidence": the same technology that allows fakes to be created also undermines trust in the genuine.

"Infocalypse" and the Restoration of Trust

Technologist Aviv Ovadya warned of an "infocalypse": a media environment so saturated with narratives and counter-narratives that achieving consensus about basic reality becomes impossible. This is not "the end of truth"; it is the end of shared truth.

Several mechanisms for restoring trust are emerging. Cryptographic verification: digital signatures for videos and documents that confirm their source. "Content credentials": provenance metadata embedded in media files, the approach taken by the C2PA standard. "Prebunking" (pre-refutation): teaching people to recognize manipulation techniques before they encounter them.
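The core idea of cryptographic verification can be shown in a minimal sketch. Real content-credential systems such as C2PA use public-key signatures and signed manifests; here, as a simplified stand-in, an HMAC over the file bytes binds content to a (hypothetical) publisher key, so that any altered byte invalidates the credential:

```python
import hashlib
import hmac

# Hypothetical signing key held by the publisher. Real systems use
# public/private key pairs, not a shared secret; HMAC is a stand-in
# to illustrate the tamper-evidence property.
SECRET = b"publisher-signing-key"

def sign_content(data: bytes) -> str:
    """Produce a hex 'credential' binding the content to its source."""
    return hmac.new(SECRET, data, hashlib.sha256).hexdigest()

def verify_content(data: bytes, credential: str) -> bool:
    """Recompute the tag; any modification to the bytes breaks the match."""
    return hmac.compare_digest(sign_content(data), credential)

video = b"\x00\x01 example video bytes"
tag = sign_content(video)
print(verify_content(video, tag))         # True: content is untampered
print(verify_content(video + b"x", tag))  # False: content was modified
```

The point of the sketch is not the cryptography itself but the shift it represents: trust moves from "the video looks real" to "the video is verifiably unaltered since signing by a known source."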

Media literacy as rhetorical self-defense: the ability to ask questions about the source, motive, and evidence.

Question for reflection: How will you verify video evidence in a world where deepfakes are indistinguishable from genuine footage? What new standards of trust are needed for professional and public life?
