In recent months, generative artificial intelligence has exploded into the mainstream, allowing people to create images, audio, essays and other text using nothing more than an idea and a command.


What You Need To Know

  • The arrival of generative artificial intelligence tools in the mainstream is leading to concerns that the technology could be exploited to push disinformation that might influence elections

  • There have already been glimpses into the ways AI could wreak havoc in elections, including fabricated photos appearing to show Donald Trump resisting arrest in New York and deceptive audio of a Chicago mayoral candidate

  • Rep. Ted Lieu, D-Calif., is calling for the creation of a bipartisan commission to issue recommendations on regulating AI

  • Industry experts say that while many tech companies prioritize ethical use of AI and are working on guardrails to prevent misuse, it only takes a small number of bad actors to drive disinformation

But with the groundbreaking technology come concerns that it could be exploited to push disinformation that might influence elections, including the 2024 presidential race.

“People should be concerned about AI in the upcoming election because it's going to blur the line between fake and real,” said Darrell West, a senior fellow with the think tank the Brookings Institution who specializes in technology innovation and governance studies. “Even experts are going to have difficulty distinguishing the false material.”

Stephen Smith, president-elect of the Association for the Advancement of Artificial Intelligence, described the current generative AI landscape as “the Wild West.”

“It sort of suddenly became available to the masses, and it's producing some results, particularly to the lay person, that look pretty amazing,” he said.

There have already been glimpses into the ways AI could wreak havoc in elections.

In March, before Donald Trump was criminally charged with falsifying business records, someone posted fabricated photos to social media that appeared to show the former president resisting arrest in New York. The user disclosed that the images were AI-generated, but others shared them without that context.

And in February, on the eve of Chicago’s mayoral election, a Twitter account posing as a news organization posted deceptive audio of a voice resembling candidate Paul Vallas downplaying killings by police and arguing, “We need to stop defunding the police and start refunding them.” Twitter suspended the account.

To be sure, misinformation has already plagued elections without the help of AI. Notably, the U.S. intelligence community concluded that Russia, leaning heavily on social media, interfered in the 2016 presidential election in an effort to help Trump get elected. And Trump continues to falsely claim there was widespread fraud in the 2020 election, which fueled the deadly insurrection at the U.S. Capitol on Jan. 6, 2021.

But the ability to attach photos, video or audio to false claims can help them take root in more people’s minds, experts say.

“Video imagery is much more powerful than the written word, and people can remember a compelling image much longer,” West said. “And what’s new is the ability to generate completely new videos, which may or may not have happened. And so you can imagine, in a highly polarized political environment, people are going to have big incentives to do things that create problems for their opponent.”

West said he doesn’t believe people in general are prepared to decipher what’s real and what’s not. He also said it’s “almost impossible” to undo the damage after a phony photo, video or audio clip makes the rounds. 

“Once the genie’s out of the bottle, you can't really put it back in,” West said. “And first impressions can matter a lot.”

Complicating matters further, content moderation by social media companies has “really started to dissipate,” West said.

“Some platforms aren’t doing any content moderation, while others are doing very light content moderation,” he said.

Industry experts say that while many tech companies prioritize ethical use of AI and are working on guardrails to prevent misuse, it only takes a small number of bad actors, whether profit-chasing companies, nefarious users of the tools or foreign adversaries, to drive disinformation.

Rep. Ted Lieu, D-Calif., is calling for the creation of a bipartisan commission to issue recommendations on regulating AI. 

“There’s a lot that we don't know,” Lieu, who is also a computer programmer, told Spectrum News. “It's only been the second public release of [the AI chatbot] ChatGPT. What does ChatGPT version 12 look like? Where does AI go two years, four years, six years from now?”

As for elections, Lieu said he thinks “the American public is starting to learn with every passing day that you just shouldn't believe everything you see on the internet.”

In recent weeks, some major players in the tech industry have sounded alarms about AI. 

Last week, Geoffrey Hinton, the computer scientist known as “the godfather of AI,” resigned from Google so he could warn about the dangers of the technology he helped pioneer. 

In a New York Times interview, Hinton said, “It is hard to see how you can prevent the bad actors from using it for bad things.” He added that he is concerned the internet will someday be flooded with false photos, video and text, and the average person won’t “know what is true anymore.”

And in March, more than 1,000 technology leaders and researchers, including SpaceX, Tesla and Twitter CEO Elon Musk and Apple co-founder Steve Wozniak, signed an open letter calling for a six-month pause in the development of AI systems more powerful than OpenAI’s latest release, GPT-4.

“Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth?” the letter said.

Last week, Vice President Kamala Harris hosted a meeting with the heads of Google, Microsoft and two other companies developing AI to discuss how to advance the technology without harming society.

Lieu said one possible solution is laws requiring disclosure of AI-generated content. Some companies, meanwhile, are trying to prevent disinformation by embedding visual or inaudible watermarks into files, and others are developing tools to detect AI-generated material.
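As a rough illustration of the watermarking idea, and not any company's actual method, the sketch below hides a short bit pattern in an audio signal by switching a faint, near-ultrasonic tone on and off, then recovers it by checking for energy at that frequency. The payload, carrier frequency, amplitude and detection threshold are all hypothetical choices made for the example.

```python
import numpy as np

SAMPLE_RATE = 44_100             # samples per second
CARRIER_HZ = 18_500              # near-ultrasonic carrier most adults can't hear
AMPLITUDE = 0.02                 # faint relative to the audio it rides on
BITS = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical 8-bit watermark payload
BIT_SECONDS = 0.25               # duration of each encoded bit

def embed_watermark(audio: np.ndarray) -> np.ndarray:
    """On-off keying: carrier tone present = 1, absent = 0."""
    marked = audio.copy()
    seg = int(SAMPLE_RATE * BIT_SECONDS)
    t = np.arange(seg) / SAMPLE_RATE
    tone = AMPLITUDE * np.sin(2 * np.pi * CARRIER_HZ * t)
    for i, bit in enumerate(BITS):
        if bit:
            marked[i * seg:(i + 1) * seg] += tone
    return marked

def read_watermark(audio: np.ndarray) -> list:
    """Recover the bits by measuring energy at the carrier frequency."""
    seg = int(SAMPLE_RATE * BIT_SECONDS)
    carrier_bin = int(CARRIER_HZ * seg / SAMPLE_RATE)
    bits = []
    for i in range(len(BITS)):
        spectrum = np.abs(np.fft.rfft(audio[i * seg:(i + 1) * seg]))
        # Flag a 1 if the carrier bin stands well above the average noise floor.
        bits.append(int(spectrum[carrier_bin] > 5 * spectrum.mean()))
    return bits

# Two seconds of stand-in "speech" (random noise) instead of a real recording.
audio = np.random.default_rng(0).normal(0, 0.1, SAMPLE_RATE * 2)
print(read_watermark(embed_watermark(audio)))  # expected: [1, 0, 1, 1, 0, 0, 1, 0]
```

Production watermarking schemes are far more sophisticated, built to survive compression, re-encoding and re-recording; the sketch only conveys the basic embed-and-detect loop that such tools rely on.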

Zohaib Ahmed, founder and CEO of Resemble AI, a voice-cloning company, said he welcomes government regulation as long as it does not stunt development. 

“Personally, there's a fear that this stuff can only get kind of worse if we just keep going in this trajectory,” Ahmed said.

Regulations “make sure that everyone is protected — the investors are protected, the consumers are protected, etc.,” he added. “It makes sure that the businesses are functioning in a safe manner.”

Resemble AI tries to prevent users from creating audio that misrepresents others by requiring them to read set prompts aloud, a step meant to confirm the voice being cloned is the user’s own. But Ahmed said he believes “there are actors out there that … don't have the same boundaries as us.”

He said his company plans to soon release a tool that will detect deepfake audio with 85% accuracy.

There are, however, avenues for campaigns to deploy AI in ways that are not deceptive.

Artificial intelligence technologies, for example, can be used in data collection and data analysis, helping to better tailor candidates’ messages to voters. And voice-cloning services like Resemble AI allow a candidate to produce audio in different languages using their own voice, helping to reach a more diverse audience. 

Immediately after President Joe Biden announced last month that he is seeking reelection, the Republican National Committee released a video ad that used realistic, AI-created images to paint a doomsday picture of what the U.S. might look like if Biden serves another four years. 

It showed China invading Taiwan, banks closing, migrants swarming bridges and armed officers patrolling a San Francisco shut down amid escalating crime. The 30-second ad included a disclaimer saying the images were generated by artificial intelligence.

While the Democratic National Committee criticized the GOP for having to “make up” images to attack Biden, Mark DiMassimo, founder and creative chief of the advertising agency DiGo, called it “a very effective ad.”

“They use the medium that makes people anxious in a way that makes people anxious,” he said.

DiMassimo compared it to what he called “the most powerful political advertising ever”: Lyndon B. Johnson’s 1964 “Daisy” TV commercial, in which a young girl picking and counting daisy petals gives way to a countdown and a nuclear explosion.

“Generative AI makes it cheap, easy and fast to create futuristic, surreal nightmare dreamscapes,” DiMassimo said.
