Imagination is the only limitation of artificial intelligence. Punch in what you want, and out pops a masterpiece. But this useful technology is also being used maliciously, and as AI grows more intelligent, calls for tighter regulations are growing louder.

“Regulation takes time. We need to be much faster to prevent it from getting worse,” said Dan Sexton, chief technology officer at the Internet Watch Foundation (IWF).

Based in England, IWF is one of three organizations worldwide tasked with actively searching for child sexual abuse images on the internet. It has been doing this work for the past 27 years.

More recently, AI caught the organization off guard because of how quickly the technology is evolving.


What You Need To Know

  • AI is getting more intelligent

  • Images of sexual abuse created by AI are flooding the internet

  • More regulations are needed to control AI-generated sex abuse images of children on the web

“I’m afraid that problem has gotten more difficult and more widespread since then,” Sexton said. “And now we’re getting imagery we know has been generated by a computer. Our expert analysts are saying if they found this in the wild and weren’t aware it was made by a computer, they wouldn’t necessarily know.”

IWF released disturbing details of its research study in late October.

It found more than 20,000 AI-generated images posted to a dark web forum in just one month. Of those, more than half were considered criminal under UK law.

“When people say, 'Is there anything preventing someone from taking images of my children off the internet and using them to create sexual images of children?' I’m afraid the answer at the moment is there isn’t,” Sexton added.

In many cases and in some countries, child sexual abuse images created by AI may not be illegal, or their legality is murky at best. IWF said any ambiguity in the law is detrimental.

The organization said the law needs to be very clear that any such image, whether it's created by AI or sourced from a real victim, is illegal under any circumstance.

Companies that create AI software are rushing to implement safeguards to prevent the generation of photorealistic images of children in graphic situations.

But Sexton said there’s open-source software out there, and abusers are finding ways to bypass locked systems.

“What we’ve also seen is technically competent perpetrators and offenders specifically creating and training their own models, often on real images of child sexual abuse for the specific purpose of creating sexual images of children,” Sexton said.