Could organizations use artificial intelligence language models such as ChatGPT to induce voters to behave in specific ways? Sen. Josh Hawley asked OpenAI CEO Sam Altman this question in a May 16, 2023, U.S. Senate hearing on artificial intelligence. Altman replied that he was indeed concerned that some people might use language models to manipulate, persuade and engage in one-on-one interactions with voters.
Altman did not elaborate, but he might have had something like this scenario in mind. Imagine that soon, political technologists develop a machine called Clogger – a political campaign in a black box. Clogger relentlessly pursues just one objective: to maximize the chances that its candidate – the campaign that buys the services of Clogger Inc. – prevails in an election.
While platforms like Facebook, Twitter and YouTube use forms of AI to get users to spend more time on their sites, Clogger’s AI would have a different objective: to change people’s voting behavior.
As a political scientist and a legal scholar who study the intersection of technology and democracy, we believe that something like Clogger could use automation to dramatically increase the scale and potentially the effectiveness of behavior manipulation and microtargeting techniques that political campaigns have used since the early 2000s. Just as advertisers use your browsing and social media history to individually target commercial and political ads now, Clogger would pay attention to you – and hundreds of millions of other voters – individually.
It would offer three advances over the current state of the art in algorithmic behavior manipulation. First, its language model would generate messages – texts, social media posts and emails, perhaps including images and videos – tailored to you personally. Whereas advertisers strategically place a relatively small number of ads, language models such as ChatGPT can generate countless unique messages for you personally – and millions for others – over the course of a campaign.
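To make the idea concrete, here is a minimal sketch, in Python, of what per-voter message generation could look like. The llm_generate function is a hypothetical stand-in for any language model API, and the voter fields and prompt wording are illustrative assumptions, not details from any real campaign tool.

```python
# A minimal sketch of per-voter message generation. llm_generate() is a
# hypothetical placeholder for a call to any large language model API;
# the voter fields and prompt wording are invented for illustration.

def llm_generate(prompt: str) -> str:
    """Placeholder for a language model call (hypothetical)."""
    return f"[model output for prompt: {prompt[:60]}...]"

def draft_message(voter: dict, issue: str, channel: str) -> str:
    """Build a prompt tailored to one voter and one delivery channel."""
    prompt = (
        f"Write a short {channel} message for a voter in {voter['region']} "
        f"who cares about {issue} and previously engaged with "
        f"{voter['last_topic']}. Keep it under 50 words."
    )
    return llm_generate(prompt)

voters = [
    {"id": 1, "region": "rural Ohio", "last_topic": "gas prices"},
    {"id": 2, "region": "suburban Atlanta", "last_topic": "school funding"},
]

# Unlike a fixed ad buy, each voter - and each send - can get a unique message.
for v in voters:
    print(draft_message(v, issue="the economy", channel="text"))
```

The point of the sketch is scale: nothing in the loop limits it to two voters or one message apiece.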
Second, Clogger would use a technique called reinforcement learning to generate a succession of messages that become increasingly likely to change your vote. Reinforcement learning is a trial-and-error machine learning approach in which the computer takes actions and gets feedback about which work better, in order to learn how to accomplish an objective. Machines that can play Go, chess and many video games better than any human have used reinforcement learning.
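The loop below is a toy illustration of that trial-and-error process: an epsilon-greedy bandit that keeps sending messages, observes which styles draw better responses, and gradually concentrates on the most effective one. The message styles and simulated response rates are invented for the example, not drawn from any real system.

```python
import random

# Toy epsilon-greedy bandit: learn by trial and error which message style
# gets the best feedback. TRUE_RESPONSE_RATE is a made-up simulation of how
# voters respond; a real system would observe actual reactions instead.

STYLES = ["fear-based", "pocketbook", "community", "candidate-bio"]
TRUE_RESPONSE_RATE = {"fear-based": 0.03, "pocketbook": 0.08,
                      "community": 0.05, "candidate-bio": 0.02}

counts = {s: 0 for s in STYLES}
values = {s: 0.0 for s in STYLES}   # running estimate of each style's payoff
epsilon = 0.1                        # how often to explore a random style

def send_and_observe(style: str) -> float:
    """Stand-in for sending a message and measuring a response (simulated)."""
    return 1.0 if random.random() < TRUE_RESPONSE_RATE[style] else 0.0

for step in range(10_000):
    # Explore occasionally; otherwise exploit the best estimate so far.
    if random.random() < epsilon:
        style = random.choice(STYLES)
    else:
        style = max(STYLES, key=lambda s: values[s])
    reward = send_and_observe(style)
    counts[style] += 1
    values[style] += (reward - values[style]) / counts[style]  # incremental mean

print({s: round(values[s], 3) for s in STYLES})
```

After enough trials, the estimates converge toward whichever style actually works best – which is exactly the feedback loop the article describes, stripped down to a few lines.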
Third, over the course of a campaign, Clogger’s messages could evolve to take into account your responses to the machine’s prior dispatches and what it has learned about changing others’ minds. Clogger would be able to carry on dynamic “conversations” with you – and millions of other people – over time. Clogger’s messages would be similar to ads that follow you across different websites and social media.
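A sketch of what that per-voter conversation state might look like is below, assuming the same hypothetical llm_generate placeholder as the earlier example: each new message is conditioned on what the voter has already seen and how they reacted.

```python
from dataclasses import dataclass, field

# Sketch of a per-voter "conversation": prior messages and reactions are
# folded back into the next prompt. llm_generate() is again a hypothetical
# placeholder for any language model call.

def llm_generate(prompt: str) -> str:
    """Placeholder for a language model call (hypothetical)."""
    return f"[model output for: {prompt[:60]}...]"

@dataclass
class VoterThread:
    voter_id: int
    history: list = field(default_factory=list)  # (message, reaction) pairs

    def next_message(self) -> str:
        # Fold prior exchanges into the prompt so the model can adjust course.
        context = " | ".join(f"sent: {m!r}, reaction: {r}" for m, r in self.history)
        prompt = (f"Given this prior exchange: {context or 'none'}, "
                  f"write the next persuasive message for voter {self.voter_id}.")
        return llm_generate(prompt)

    def record(self, message: str, reaction: str) -> None:
        self.history.append((message, reaction))

thread = VoterThread(voter_id=42)
msg = thread.next_message()
thread.record(msg, reaction="ignored")   # the reaction shapes the next attempt
print(thread.next_message())
```

The design choice that matters here is the persistent per-voter history: it is what turns isolated ads into an ongoing, adaptive exchange.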
Source: The Conversation