Throughout the election season, candidates, voters, and electioneering organizations have used AI in ways that may be undetectable to the untrained eye. AI-driven technology can manifest in forms ranging from Donald Trump’s AI-generated art to chatbots spreading misinformation about the election.
Over the summer, X’s AI chatbot Grok falsely stated that in some states, it was too late for Kamala Harris to replace Joe Biden on the ballot. Minnesota Secretary of State Steve Simon and other secretaries of state wrote to X urging that the chatbot instead direct election-related questions to vote.gov.
“I don’t see AI as necessarily a new threat, but it is a new means of magnifying an old threat,” Simon said. “Disinformation is old. It’s not just a few years old, it’s centuries old when it comes to elections, and this just makes it a lot easier to do it.”
One of the most prominent forms of AI in politics is deepfake technology: synthetic audio and video generated by algorithms trained on a particular person’s face or voice. The technology can depict identifiable people doing or saying things that never happened, often convincingly enough to pass as legitimate.
Richard Painter, a University of Minnesota corporate law professor and former chief White House ethics lawyer, pointed to concerns about deepfakes related to election security and accountability.
“What’s at stake here is the integrity of our elections and whether our elections will be free from interference by foreign countries and issues,” Painter said. “The second issue is whether our voters are making decisions based on accurate information or whether an election is simply a contest to see which side can mislead voters the most.”
In the 2010 case Citizens United v. Federal Election Commission, the Supreme Court set a precedent that led to fewer regulations on statements about political candidates and on where electioneering organizations receive their funding.
Painter said that this interpretation of the First Amendment permits the combination of deepfakes with “unrestrained political spending on electioneering by dark money organizations.”
To combat the spread of disinformation, Minnesota passed a law in 2023 allowing those depicted in a deepfake to seek court intervention if the deepfake was shared without their consent and, with the intent to influence an election, is widely disseminated within 90 days of that election.
“Whether [a deepfake] was done by a candidate, a campaign, a third party, a nonprofit, a for-profit, it doesn’t matter. As long as those two things are true, it would come within the law,” Simon said.
The Minnesota law allows courts to order platforms hosting a deepfake to remove it, an important capability as dependence on social media for information grows. A recent survey found that 39% of adults under 30 rely on TikTok for their news. Consequently, Simon has visited Minnesota high schools to discuss elections and “the need to verify with trusted sources.”
“The more we can show people what the rules and processes really are, the better we’re going to be and the less susceptible someone will be to influence by AI,” Simon said.
Painter explained that voters depend on accurate information. “If this goes to an extreme, the politicization of truth and departure from the notion that there is an objective truth, this type of trend could end up with losing a democracy,” he said.
This piece was originally published in Zephyrus’ print edition on November 4, 2024.