Technology executives and senators from both major parties were in agreement Tuesday that artificial intelligence poses serious risks that require government regulation. But what that regulation might look like was up for debate.


What You Need To Know

  • Technology executives and senators from both major parties were in agreement Tuesday that artificial intelligence poses serious risks that require government regulation, but what that regulation might look like was up for debate

  • In a hearing before the Senate Judiciary Subcommittee on Privacy, Technology and the Law, lawmakers pressed OpenAI's CEO and IBM's chief privacy and trust officer on what role they’d like to see the government play in ensuring AI doesn’t spiral out of control

  • While AI offers the promise of making scientific breakthroughs, it also presents harms, including disinformation and impersonation fraud, said Sen. Richard Blumenthal, the panel's chairman

  • Lawmakers repeatedly said Congress had failed to address the dangers posed by social media in its infancy and vowed not to make the same mistakes

In a hearing before the Senate Judiciary Subcommittee on Privacy, Technology and the Law, senators pressed Sam Altman, the CEO of OpenAI — the company behind the powerful ChatGPT chatbot — and Christina Montgomery, IBM’s chief privacy and trust officer, on what role they’d like to see the government play in ensuring the burgeoning, transformative technology doesn’t spiral out of control.

Meanwhile, lawmakers peppered the executives with questions about the impact artificial intelligence tools might have on the job market, disinformation — including in the 2024 election — and other aspects of daily life.

Sen. Richard Blumenthal, D-Conn., the subcommittee’s chairman, began the hearing by playing audio of his voice reading an introduction to the hearing. The speech’s script was written by ChatGPT and the audio was produced using voice-cloning software trained with his Senate floor speeches. 

Speaking himself moments later, Blumenthal quoted the AI’s brief speech: “This is not necessarily the future that we want.”

“The underlying advancements of this era are more than just research experiments,” Blumenthal added in remarks of his own. “They are no longer fantasies of science fiction. They are real and present.”

While AI offers the promise of making scientific breakthroughs, it also presents harms, including disinformation and impersonation fraud, Blumenthal said.

Sen. Josh Hawley, R-Mo., the ranking Republican member of the panel, questioned whether AI will prove to be an innovation that positively impacts society, like the printing press, or one that harms it, like the atom bomb.

“I don't know the answer to that question,” he said. “I don't think any of us in the room know the answer to that question because I think the answer has not yet been written.”

Lawmakers repeatedly said Congress had failed to address the dangers posed by social media in its infancy and vowed not to repeat those mistakes with AI, namely by shielding companies from lawsuits.

Altman said he believes the tools developed and deployed by OpenAI “vastly outweigh the risks, but ensuring their safety is vital to our work.”

He added that OpenAI believes “regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models.”

Altman said he supports the creation of a new agency, but Montgomery said she is opposed to the idea, arguing that setting up an agency would “slow down regulation to address real risks right now.” 

Montgomery said she thinks existing regulatory agencies could address the issues, although Sen. Chris Coons, D-Del., argued those agencies lack the needed resources and powers.

Altman suggested a combination of licensing new AI tools and testing them using independent experts. 

Montgomery said she favors licensing for only “high-risk” AI tools and called on the government to clearly define what is considered high risk. She also recommended greater transparency around AI, including ensuring users know they are interacting with a computer and not a human. 

Gary Marcus, who founded the companies Robust.AI and Geometric.AI, the latter of which was later sold to Uber, said artificial intelligence developers have “built machines that are like bulls in a china shop — powerful, reckless and difficult to control.”

He painted a chilling picture of AI, saying systems can feed users persuasive lies, be used to interfere in elections and give harmful medical advice. 

“We all more or less agree on the values we would like for our AI systems to honor,” said Marcus. “We want, for example, for our systems to be transparent, to protect our privacy, to be free of bias and, above all else, to be safe. The current systems are not in line with these values.”

Marcus, too, called for independent reviews similar to how the Food and Drug Administration approves medication.

Noting that AI-generated content could originate abroad, Marcus and Altman also endorsed the idea of the U.S. taking a leading role in setting international standards.

Blumenthal said perhaps his “biggest nightmare” is that artificial intelligence will push many people out of their jobs. 

Altman acknowledged there will be a “significant impact” on jobs but predicted “there will be far greater jobs on the other side of this and that the jobs of today will get better.”

“As our quality of life raises and as machines and tools that we create can help us live better lives, the bar raises for what we do and our human ability, and we spend our time going after more ambitious, more satisfying projects,” he said. 

Montgomery said AI is “going to change every job.”

“New jobs will be created, many more jobs will be transformed, and some jobs will transition away,” she said.

She said the country should be preparing its workforce to partner with AI technologies.

Altman also said OpenAI is concerned about the impact AI-generated disinformation can have on elections, noting it often spreads through social media.

He said the technology can reject certain user prompts and that the company can detect when a user is generating large volumes of misleading social media posts.

Sen. Marsha Blackburn, R-Tenn., questioned Altman on OpenAI’s Jukebox, which can create songs in the style of particular artists using voices that sound like them.

“I think we have the best creative community on the face of the earth. They're in Tennessee,” she said. “And they should be able to decide if their copyrighted songs … are going to be used to train these models.”

Altman said OpenAI is consulting with musicians and visual artists about rights and compensation, adding that he believes content creators should benefit from the technology. He also said he thinks artists should be able to opt out of having their works used to train these systems.
