The future of video generated by artificial intelligence (AI) is here. OpenAI, the company behind ChatGPT, released a technology called Sora earlier this year. Sora-generated films even aired at the Tribeca Film Festival in mid-June. It's a rapidly evolving industry, and from the halls of Congress to the meeting rooms of tech companies, Americans are trying to figure out how to innovate while ensuring the proper guardrails are in place.
“The speed of this technology is mind-blowing,” said Claire Wardle, a professor and director of the Information Futures Lab at Brown University.
Sora can generate videos up to a minute long, based on prompts typed in by users. The videos have stunned many. “We need to recognize that we're in a very sharp trajectory where this technology is improving, really on a day to day, week to week basis,” Wardle said.

Sora-generated videos are not perfect yet, with errors like a person running the wrong way on a treadmill and a mixing spoon appearing out of thin air. But Wardle expects the technology to quickly adapt.
“I think it's going to be not very long before these things really look perfect. And we will get to a point when just our eyes won't be able to figure this out,” she said.
Other companies have been working on similar technologies, and competition has been heating up. A model called Kling out of China has generated particularly impressive clips.
To create videos, AI video generators like Sora are first trained on a huge library of images. When they get a prompt, they draw upon all of the images they’ve seen and create a composite image.
“It's really just copying. It's finding the patterns and it's copying it,” Wardle explained.
Ethical and legal questions surround AI videos. OpenAI has been vague about where it gets training videos from, and YouTube, for one, has said that use of its videos would be a violation of its terms of service.
Wardle said, “We do want high-quality content to be used as a training set of materials, but there should be issues here around payment, consent and other big questions.”
Experts have also been quick to point out this technology’s potential to change the way we work.
“I don't think that we will be losing jobs, but they will be very different jobs,” Wardle said. “I think we'll have new jobs like prompt engineers. How do you design prompts that get good responses back from AI?”
The technology also opens up the potential for misinformation, especially as we head into an election season. While manipulated images aren’t new, AI-generated images bring us into uncharted waters.
“We now know how easy it is for anybody to completely fabricate a video or an image. And so that means that when we see everything now, there's this little kernel of doubt,” Wardle said.
But the last thing Wardle wants is for people to throw their hands up in the air.
“If we just give up and we turn our backs on this and we don't trust anything, that's actually one of the aims of bad actors,” she said.
Sora is not available to the public yet. OpenAI has said it is “red teaming” the model, assessing it for vulnerabilities and enhancing protections.
As further protection, Wardle stresses the importance of being judicious, by getting news from several trusted news outlets and slowing down before sharing questionable images or videos from social media.
“If you do see anything, always the first thing you should do is go to a search engine, because almost always somebody will have done a very quick fact check to say, ‘no, that's an old photo. It's genuine, but it's from 2016.’ Or ‘no, this has been fact checked. The president didn't say that last week. It's been clipped.’”
As Sora and other AI technologies evolve, Wardle says it’s important to stay vigilant as society learns to navigate this new territory.
“It's really important that we just keep talking to one another, keep checking and evolve and adapt with this new technology and we'll get through this difficult period. But we can't give up,” she said.