Algorithmic Persuasion: Targeting and Microtargeting
The Future of Rhetoric: AI, Deepfakes, and Rhetorical Education
From Mass Persuasion to Personalization
Classical rhetoric addressed the audience as a homogeneous group. Modern digital marketing and political targeting use data to create personalized messages—different people are shown different versions of the “speech”.
The Cambridge Analytica scandal (2018) revealed the reach of psychographic microtargeting. The firm used Facebook data to build psychological profiles of voters and served each profile type personalized ads for the Trump campaign. The result is "personalized propaganda": persuasion tailored precisely to the recipient's psychological profile.
This is rhetoric of a fundamentally different type: not “how to address everyone,” but “what to say to this particular person.” This maximizes persuasiveness—and maximizes the risk of manipulation.
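The "what to say to this particular person" logic can be sketched in a few lines. This is a purely illustrative toy: the trait names, message variants, and selection rule are hypothetical assumptions, not a reconstruction of any real targeting system.

```python
# Toy sketch of psychographic message selection: pick the ad variant
# that matches a voter's dominant personality trait. All labels and
# copy below are invented for illustration.

MESSAGE_VARIANTS = {
    "high_neuroticism": "Crime is rising. Who will keep your family safe?",
    "high_openness": "Imagine a country that rewards bold new ideas.",
    "high_conscientiousness": "A clear plan with measurable results.",
}

def pick_message(profile: dict) -> str:
    """Return the variant keyed to the voter's strongest trait score."""
    dominant_trait = max(profile, key=profile.get)
    return MESSAGE_VARIANTS[f"high_{dominant_trait}"]

voter = {"neuroticism": 0.9, "openness": 0.3, "conscientiousness": 0.5}
print(pick_message(voter))  # fear-framed variant for this profile
```

The point of the sketch is the asymmetry it makes visible: the sender sees every variant and every profile, while each recipient sees only the one message engineered for them.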
Persuasion Technologies Without Information
"Dark patterns" in UX are interface design choices that manipulate behavior without explicit persuasion: "Subscribe" is a bright button while "Unsubscribe" is small gray text; subscriptions renew automatically with a deliberately difficult opt-out. This is "choice architecture" (Thaler and Sunstein) turned to manipulative ends.
"Nudges" are small changes in the choice environment that significantly alter behavior. They can serve good ends (making organ-donor registration the default, an "opt-out" rather than "opt-in" scheme, sharply increases the number of donors) or manipulative ones (game mechanics designed to keep users hooked on social networks).
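The power of the default can be shown with a toy simulation. The 90% "stick with the default" inertia rate and the 50/50 split among active choosers are illustrative assumptions, not empirical figures from the donor studies.

```python
import random

# Toy simulation of the opt-in vs opt-out effect described above:
# if most people never change the default, the default largely
# determines the participation rate.

def participation_rate(default_enrolled: bool, n: int = 10_000,
                       inertia: float = 0.9, seed: int = 0) -> float:
    """Fraction enrolled when `inertia` of people keep the default."""
    rng = random.Random(seed)
    enrolled = 0
    for _ in range(n):
        if rng.random() < inertia:       # keeps whatever the default is
            enrolled += default_enrolled
        else:                             # actively decides; assume 50/50
            enrolled += rng.random() < 0.5
    return enrolled / n

print(f"opt-in default:  {participation_rate(False):.0%}")
print(f"opt-out default: {participation_rate(True):.0%}")
```

Under these assumptions the two regimes land near 5% and 95% enrollment, even though no individual's preferences changed; only the choice architecture did.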
Question for reflection: If your digital products or services use "choice architecture," where is the boundary between helpful nudging and manipulation? What ethical standard do you apply in your design?