Wednesday, June 19, 2024


Demand Grows For AI Regulation After Swift Deepfake Adult Content

Last week, the spread of pornographic deepfakes of Taylor Swift led to a temporary block on searches for the star’s name on the social media platform X, formerly known as Twitter.

Before X’s safety team suspended the account responsible, the deepfake images of Swift had been viewed nearly 47 million times. The team responded by saying, “We have a zero-tolerance policy towards such content. Our teams are actively removing all identified images and taking appropriate actions against the accounts responsible for posting them.”

Now the message “Something went wrong. Try reloading.” appears whenever someone searches for Swift’s name or “AI Taylor Swift” on the platform. Joe Benarroch, the head of business operations at X, told Variety, “This is a temporary action and done with an abundance of caution as we prioritize safety on this issue.”

Meta condemned “the content that has appeared across different internet services,” adding, “We continue to monitor our platforms for this violating content and will take appropriate action as needed.” The company says it is currently working on the issue.

Swift’s fans, the “Swifties,” sprang into action, flooding X with positive images of the singer under the hashtag #ProtectTaylorSwift. The outcry soon grew loud enough that the White House took notice of the “alarming” situation. Calling for legislation against those who misuse AI, Press Secretary Karine Jean-Pierre said, “We know that lax enforcement disproportionately impacts women, and it also impacts girls, sadly, who are the overwhelming targets.”

Critics note that abusive fake imagery, cyberbullying, and harassment might have been curbed far sooner had non-celebrity victims received the same swift response. Still, even if the immediate beneficiaries are the famous and wealthy, the attention offers some hope, since there are currently no federal laws addressing this kind of abuse.

Democratic Representative Yvette D. Clarke, who has introduced legislation that would require creators to watermark deepfake content, said, “For years, women have been victims of non-consensual deepfakes, so what happened to Taylor Swift is more common than most people realize. Generative-AI is helping create better deepfakes at a fraction of the cost.”

Another Democrat from New York, Representative Joe Morelle, said, “The images may be fake, but their impacts are very real. Deepfakes are happening every day to women everywhere in our increasingly digital world, and it’s time to put a stop to them.” A 2019 report found that deepfake images were being used to harass women, most often those in the Hollywood and K-pop industries.

Commenting on the incident, Rolling Stone writer Brittany Spanos said, “This could be a huge deal if she really does pursue it to court.”

Noelle Martin, a survivor of image-based abuse that began 11 years ago, reflected on her own experience: “Everyday women like me will not have millions of people working to protect us and to help take down the content, and we won’t have the benefit of big tech companies, where this is facilitated, responding to the abuse.”
