
San Francisco sues AI ‘undressing’ websites over non-consensual nude images

TLDR

  • San Francisco’s city attorney is suing 16 websites that use artificial intelligence to create non-consensual nude images of women and girls.
  • The sites were visited 200 million times in the first half of 2024.
  • Some sites allow users to create child pornography.
  • Victims have difficulty removing these AI-generated images once they are shared online.
  • The lawsuit seeks $2,500 for each violation and asks that the sites be shut down.

San Francisco City Attorney David Chiu has filed a lawsuit against 16 websites that use artificial intelligence (AI) to create non-consensual images of nude women and girls.

The lawsuit, filed in San Francisco Superior Court, targets sites that allow users to “undress” or “nude” people in photos without their consent.

According to Chiu’s office, these websites received more than 200 million visits in the first six months of 2024 alone. The lawsuit alleges that the sites’ owners include individuals and companies in Los Angeles, New Mexico, the United Kingdom and Estonia.

The AI models used by these sites are trained on pornographic images and child sexual abuse material. Users can upload a picture of their target and the AI generates a realistic, pornographic version. While some sites claim to limit their service to adults only, others allow the creation of images of children as well.

“This investigation has taken us into the darkest corners of the internet, and I am absolutely horrified for the women and girls who have had to endure this exploitation,” Chiu said.

He pointed out that while AI has “tremendous promise”, criminals are exploiting the technology for abusive purposes.

The lawsuit claims that these AI-generated images are “virtually indistinguishable” from real photos. They have been used to “extort, intimidate, threaten and humiliate women and girls”, many of whom have no way to control or remove the fake images once they have been created and shared online.

In a troubling incident highlighted by Chiu’s office, AI-generated nude images of 16 eighth-graders were shared with students at a California middle school in February. The targeted students were 13 and 14 years old.

The lawsuit asks the court to order the sites to pay $2,500 for each violation and to cease operations. It also calls on domain name registrars, web hosts and payment processors to stop providing services to companies that create these AI-generated deepfakes.

The rapid spread of what experts call non-consensual intimate images (NCII) has prompted efforts by governments and organizations around the world to address the issue. The use of artificial intelligence to generate child sexual abuse material (CSAM) is of particular concern as it complicates efforts to identify and protect real victims.

The Internet Watch Foundation, which monitors online child exploitation, has warned that known pedophile groups are already embracing the technology. There are fears that AI-generated CSAM could overwhelm the internet, making it harder to find and remove genuine abuse material.

In response to these growing concerns, some jurisdictions are taking legislative action. For example, a Louisiana state law specifically banning AI-created CSAM went into effect this month.

Major tech companies have pledged to prioritize children’s safety as they develop AI technologies. However, Stanford University researchers found that AI-generated CSAM has already made its way into the datasets used to train AI models, highlighting the complexity of the problem.
