Two U.S. senators — one Democrat, one Republican — are calling for content generated by artificial intelligence to be labeled to prevent fraud and the spread of misinformation.
What You Need To Know

  • U.S. Sen. Brian Schatz took to the Senate floor on Tuesday to draw attention to the proliferation of AI-generated videos, images and other content suffusing social media and other platforms

  • Schatz and Sen. John Kennedy, R-La., have introduced legislation that would require clear labels and disclosures for AI-generated content and AI chatbots

  • The Schatz-Kennedy AI Labeling Act would require that developers of generative AI systems include a clear and conspicuous disclosure identifying AI-generated content and AI chatbots

  • It would also require developers and third-party licensees to take reasonable steps to prevent systematic publication of content without disclosures

Sen. Brian Schatz, D-Hawaii, took to the Senate floor on Tuesday to draw attention to the proliferation of AI-generated videos, images and other content suffusing social media and other platforms.

“Seeing is believing, we often say, but that’s not really true anymore,” Schatz said. “Because thanks to artificial intelligence, we’re increasingly encountering fake images, doctored videos and manipulated audio. Whether we’re watching TV, answering the phone, or scrolling through our social media feeds, it has become harder and harder to trust our own eyes and our own ears. The boundaries of reality are becoming blurrier every day.”

Schatz cited a litany of current examples: doctored images purporting to show an explosion at the Pentagon, advertisements using fake celebrity likenesses to sell products, and phone calls using AI-generated voices of family members to fake kidnappings.

“Deception is not new,” Schatz said. “Fraud is not new. Misinformation is not new. These are all age-old problems. What is new, though, is how quickly and easily someone can deceive or defraud — and do it at staggering scale. With powerful generative AI tools at their fingertips, all con artists need are just a few minutes to spin up a scam or a lie.”

To address the problem, Schatz and Sen. John Kennedy, R-La., have introduced legislation that would require clear labels and disclosures for AI-generated content and AI chatbots.

“People deserve to know whether or not the videos, photos and content they see and read online is real or not,” Schatz said. “Our bill is simple — if any content is made by artificial intelligence, it should be labeled so that people are aware and aren’t fooled or scammed.”

In a news release issued on Tuesday, Kennedy said the measure would set an “AI-based standard” to protect U.S. consumers.

The Schatz-Kennedy AI Labeling Act would:

  • Require that developers of generative AI systems include a clear and conspicuous disclosure identifying AI-generated content and AI chatbots;
  • Require developers and third-party licensees to take reasonable steps to prevent systematic publication of content without disclosures; and
  • Establish a working group to create non-binding technical standards so that social media platforms can automatically identify AI-generated content.

“This moment requires us to get serious about legislating proactively, not belatedly reacting to the latest innovation,” Schatz told his Senate colleagues. “Yes, Congress has a lot more to learn about AI, both its opportunities and threats. And yes, there’s no simple answer or single solution for a very, very complex challenge and set of opportunities. But there’s one thing we know to be true right now: people deserve to know if the content they’re encountering was made by a human or not. This isn’t a radical new idea; it is common sense.”

Michael Tsai covers local and state politics for Spectrum News Hawaii. He can be reached at michael.tsai@charter.com.