NEW YORK — Pornographic deepfake images of Taylor Swift are circulating online, making the singer the most famous victim of a scourge that tech platforms and anti-harassment groups are trying to solve.
Sexually explicit and abusive fake images of Swift began to spread widely on social media platform X this week.
Swift’s fervent fan base, the “Swifties,” quickly mobilized on the platform formerly known as Twitter, launching the #ProtectTaylorSwift hashtag to flood it with more positive images of the pop star. Some said they reported accounts that were sharing the deepfakes.
Deepfake detection group Reality Defender said it was tracking a deluge of non-consensual pornographic material depicting Swift, particularly on X. Some images also made their way to Facebook and other social media platforms owned by Meta.
“Unfortunately, they spread to millions of users before some were removed,” said Mason Allen, chief growth officer at Reality Defender.
Researchers have found at least several dozen unique images created by artificial intelligence. The most widely shared were football-related, showing a painted or bloodied Swift in images that objectified her and, in some cases, depicted violent harm to her deepfake persona.
The number of explicit deepfakes has grown in recent years as the technology used to produce such images has become more accessible and easier to use, researchers said. A 2019 report by AI firm DeepTrace Labs found that these images were overwhelmingly weaponized against women, with Hollywood actors and South Korean K-pop singers among the most frequently targeted.
Brittany Spanos, a senior writer at Rolling Stone who teaches a course on Swift at New York University, said Swift’s fans are quick to come to her defense when she is wronged, especially those who take their fandom very seriously.
“If she actually pursues this in court, it could be a big deal,” she said.
Spanos said the deepfake pornography issue dovetails with others Swift has faced in the past, pointing to her 2017 lawsuit against a radio station DJ who allegedly harassed her. Jurors awarded Swift $1 in damages, a sum her attorney, Douglas Baldridge, described as “a single symbolic dollar whose value to all women in this situation is immeasurable” in the midst of the MeToo movement. (The $1 lawsuit later became a trend, as in Gwyneth Paltrow’s 2023 countersuit against a skier.)
When reached for comment about the fake images of Swift, X directed The Associated Press to a post from its safety account stating that the company strictly prohibits the sharing of non-consensual nude images on its platform. The company has sharply reduced its content moderation teams since Elon Musk took over the platform in 2022.
“Our teams are actively removing all images detected and taking appropriate action against the accounts responsible for posting them,” the company wrote in an X post early Friday morning. “We are monitoring the situation closely to ensure that any further violations are addressed immediately and the content is removed.”
Meanwhile, Meta said in a statement that it strongly condemned “the content that appeared on different internet services” and was working to have it removed.
“We continue to monitor our platforms for this infringing content and will take appropriate action where necessary,” the company said.
A representative for Swift did not immediately respond to a request for comment Friday.
Allen said the researchers are 90% sure the images were created by diffusion models, a type of generative artificial intelligence that can produce new, photorealistic images from written prompts. The most widely known are Stable Diffusion, Midjourney, and OpenAI’s DALL-E. Allen’s group did not attempt to identify the source.
Microsoft, which offers an image generator based in part on DALL-E, said Friday that it is in the process of investigating whether its tool has been misused. It said that like other commercial AI services, it “does not allow adult or non-consensual intimate content, and repeated attempts to produce content contrary to our policies may result in loss of access to the service.”
There is still a lot of work to be done in establishing AI safeguards, and “we need to move quickly on this,” Microsoft CEO Satya Nadella told host Lester Holt on “NBC Nightly News” in an interview broadcast Tuesday, when asked about the Swift deepfakes.
“This is absolutely alarming and terrible, so yes, we must take action,” Nadella said.
Stability AI, the maker of Stable Diffusion, as well as Midjourney and OpenAI, did not immediately respond to requests for comment.
Federal lawmakers who have introduced bills to further restrict or criminalize deepfake porn said the incident showed why the United States needed better protections.
“Women have been victims of non-consensual deepfakes for years, so what happened to Taylor Swift is more common than most people realize,” said U.S. Rep. Yvette D. Clarke, a Democrat from New York who has introduced legislation that would require creators to digitally watermark deepfake content.
“Generative AI helps create better deepfakes at a fraction of the cost,” Clarke said.
U.S. Rep. Joe Morelle, another New York Democrat who is pushing a bill that would criminalize the sharing of deepfake porn online, said what happened to Swift was disturbing and was becoming increasingly common online.
“The images may be fake, but their effects are very real,” Morelle said in a statement. “In our increasingly digital world, deepfakes happen to women everywhere, every day, and it is time to put an end to them.”