Study says AI is increasingly creating images of child sexual abuse

DORAL, FL – AI image-generators are helping produce thousands of images of child sexual abuse, according to the Stanford Internet Observatory.

Until recently, the main threat posed by these tools was their ability to produce realistic, explicit imagery of fake children, or to transform social media photos of fully clothed real teens into nudes. Now the observatory has found more than 3,200 images of suspected child abuse in the AI database LAION, an index of online images and captions that has been used to train leading AI image-makers such as Stable Diffusion.

The new research, reported by the AP, urges companies to create strategies and take action to address this harmful flaw in a technology that has grown dramatically over the last year.

According to the media outlet, the watchdog group based at Stanford University worked with the Canadian Centre for Child Protection and other anti-abuse charities to identify the illegal material and report the original photo links to law enforcement. It said roughly 1,000 of the images it found were externally validated.

In response to the report, LAION told The Associated Press it was temporarily removing its datasets. The nonprofit, whose name stands for Large-scale Artificial Intelligence Open Network, said in a statement that it “has a zero-tolerance policy for illegal content and in an abundance of caution, we have taken down the LAION datasets to ensure they are safe before republishing them.”

Although the images found represent only a small fraction of LAION’s index of roughly 5.8 billion images, the Stanford group says, “it is likely influencing the ability of AI tools to generate harmful outputs and reinforcing the prior abuse of real victims who appear multiple times.”

David Thiel, the author of the report, said it’s not an easy problem to fix. “Taking an entire internet-wide scrape and making that dataset to train models is something that should have been confined to a research operation, if anything, and is not something that should have been open-sourced without a lot more rigorous attention,” Thiel told the AP.

The company has made some attempts to filter out “underage” explicit content, but much of the problem could have been avoided had it consulted with child safety experts earlier.

Because cleaning up data retroactively is not an easy task, the Stanford Internet Observatory has called for measures such as having anyone who has built training sets from LAION-5B “delete them or work with intermediaries to clean the material.” It also urges those involved to effectively make an older version of Stable Diffusion disappear from all but the darkest corners of the internet.

The Stanford report also questions whether any photo of a child should be manipulated by AI tools without the family’s consent, given protections in the federal Children’s Online Privacy Protection Act.

Given how vulnerable children are in the era of AI, Doral Family Journal encourages parents and adults in general to handle photos of children with extreme caution: post fewer photos, avoid showing children’s faces, and never share images of undressed children, even when no sexual abuse is involved and the poster’s intentions are benign.

 

Photo by: Unsplash.com
