
New Anti-Revenge Porn Law Raises Free Speech Concerns

May 24, 2025

Concerns Arise Over New Law Targeting Revenge Porn and Deepfakes

Perhaps counterintuitively, privacy and digital rights advocates are raising alarms about a recently enacted law designed to combat revenge porn and AI-generated deepfakes.

The Take It Down Act: Provisions and Potential Issues

The newly implemented Take It Down Act criminalizes the publication of explicit images, whether authentic or artificially created, without the consent of the individuals depicted. Platforms are mandated to adhere to takedown requests from victims within a strict 48-hour timeframe, facing potential legal repercussions for non-compliance.

Despite widespread acclaim as a significant victory for those affected by non-consensual image sharing, concerns have been raised about the law’s ambiguous wording, relaxed verification protocols, and the compressed timeline for compliance.

Potential for Overreach and Censorship

“Large-scale content moderation is inherently flawed and frequently results in the suppression of legitimate and vital expression,” stated India McKinney, Director of Federal Affairs at the Electronic Frontier Foundation, a prominent digital rights organization.

Online platforms are given a year to establish procedures for the removal of nonconsensual intimate imagery (NCII). While the law stipulates that takedown requests originate from victims or their authorized representatives, the verification process is minimal, requiring only a physical or electronic signature – no photographic identification or further validation is mandated.

This streamlined approach, intended to ease the burden on victims, could inadvertently create opportunities for misuse and abuse of the system.

Concerns Regarding Targeting of Marginalized Groups

“I sincerely hope my concerns prove unfounded, however, I anticipate an increase in requests to remove images portraying LGBTQ+ individuals in relationships, and furthermore, I believe it will extend to consensual pornography,” McKinney cautioned.

Senator Marsha Blackburn (R-TN), a key sponsor of the Take It Down Act, also championed the Kids Online Safety Act, which places responsibility on platforms to shield children from harmful online content. Blackburn has previously expressed the belief that content relating to transgender individuals is detrimental to young people.

The Heritage Foundation, a conservative think tank associated with Project 2025, has similarly asserted that restricting access to transgender-related content for children constitutes a protective measure.

The Risk of Premature Content Removal

Due to the potential liability incurred by platforms failing to remove content within the 48-hour window, “the likely response will be to simply remove the content without conducting thorough investigations to ascertain whether it genuinely constitutes NCII, if it falls under protected speech categories, or if it is even relevant to the individual submitting the request,” McKinney explained.

Platform Responses and Decentralized Networks

Both Snapchat and Meta have publicly expressed support for the law, but neither responded to inquiries regarding their methods for verifying the identity of individuals requesting takedowns.

Mastodon, a decentralized platform, indicated it would err on the side of removal if verifying a victim proved too difficult.

Decentralized platforms, such as Mastodon, Bluesky, and Pixelfed, may be particularly susceptible to the chilling effect of the 48-hour takedown rule. These networks operate on independently managed servers, often maintained by non-profit organizations or individuals.

The law empowers the Federal Trade Commission (FTC) to classify any platform failing to “reasonably comply” with takedown demands as engaging in an “unfair or deceptive act or practice” – even if the host is not a commercial entity.

FTC Politicization and Broader Implications

“This is inherently problematic, and is especially concerning given the current chair of the FTC has taken unprecedented steps to politicize the agency and has explicitly stated an intention to leverage the agency’s authority to penalize platforms based on ideological grounds, rather than established principles,” stated the Cyber Civil Rights Initiative, a non-profit organization dedicated to combating revenge porn.

The Shift Towards Proactive Content Moderation

McKinney predicts that online platforms will shift toward preemptive content moderation, identifying and addressing potentially harmful material before it is widely distributed and thereby reducing the volume of content subject to takedown requests.

Currently, artificial intelligence is being utilized by numerous platforms to scan for and detect damaging content.

Hive's Role in Detecting Harmful Content

Kevin Guo, the CEO and co-founder of Hive, a company specializing in the detection of AI-generated content, explains that his firm collaborates with various online platforms. Their services focus on identifying deepfakes and child sexual abuse material (CSAM).

Hive’s client base includes prominent platforms such as Reddit, Giphy, Vevo, Bluesky, and BeReal.

Guo stated that Hive endorsed recent legislation, believing it will encourage platforms to implement proactive solutions to critical issues.

Hive operates on a software-as-a-service model, meaning they do not dictate how platforms utilize their tools for content flagging or removal. However, Guo notes that many clients integrate Hive’s API during the upload process to monitor content before it reaches the wider community.
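The upload-time integration Guo describes can be sketched as a simple gate: content is scored by a classifier before it is published rather than after a complaint arrives. The sketch below is illustrative only; the function names, labels, and thresholds are assumptions, and `classify` is a stub standing in for a vendor moderation API, not Hive's actual interface.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    label: str         # e.g. "nsfw", "deepfake", "ok" (illustrative labels)
    confidence: float  # 0.0 - 1.0

def classify(content: bytes) -> Verdict:
    """Stub standing in for a vendor moderation API call.

    A real integration would send `content` to the vendor's endpoint and
    parse the returned scores; here we simply flag a marker substring.
    """
    if b"harmful" in content:
        return Verdict("nsfw", 0.97)
    return Verdict("ok", 0.99)

def handle_upload(content: bytes, block_threshold: float = 0.9) -> str:
    """Gate content before it reaches the wider community."""
    verdict = classify(content)
    if verdict.label != "ok" and verdict.confidence >= block_threshold:
        return "blocked"   # never published, so no takedown is ever needed
    if verdict.label != "ok":
        return "queued"    # low-confidence hit: route to human review
    return "published"
```

The design point is the ordering: because moderation runs before publication, a high-confidence hit never becomes public content subject to the 48-hour takedown clock.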

Reddit's Current Moderation Practices

A Reddit spokesperson confirmed to TechCrunch that the platform employs “sophisticated internal tools, processes, and teams” to address and remove nonconsensual intimate images (NCII).

Reddit also collaborates with the nonprofit SWGfL, utilizing its StopNCII tool. The tool scans real-time traffic, comparing it against a database of hashes of known NCII, and automatically removes confirmed matches.

The company did not detail the methods used to verify the identity of the individual requesting the removal of content.

Potential Expansion to Encrypted Messaging

McKinney cautions that this trend towards increased monitoring could potentially extend to encrypted messaging in the future.

Although the current legislation primarily addresses publicly or semi-publicly disseminated content, it also mandates platforms to “remove and make reasonable efforts to prevent the reupload” of nonconsensual intimate images.

This requirement, she argues, could incentivize the proactive scanning of all content, even within encrypted environments.

Notably, the legislation does not provide any exemptions for end-to-end encrypted messaging services like WhatsApp, Signal, or iMessage.

Meta, Signal, and Apple have not yet responded to requests for comment regarding their plans for encrypted messaging.

Wider Implications for Freedom of Expression

On March 4th, President Trump addressed a joint session of Congress, expressing support for the Take It Down Act and indicating his anticipation of signing it into law.

He further stated, “And I’m going to utilize that legislation for my own purposes as well, if that’s acceptable.” He added, “No one experiences more unfair treatment online than I do.”

Although the remark elicited laughter from those present, it wasn't universally perceived as humorous. Trump has consistently demonstrated a willingness to stifle or respond negatively to speech he deems unfavorable. This has manifested in actions such as denouncing mainstream news organizations as “enemies of the people,” restricting access to the Oval Office for the Associated Press despite judicial rulings, and reducing financial support for NPR and PBS.

Recently, the Trump administration prohibited Harvard University from admitting international students, intensifying a dispute that originated when Harvard declined to comply with Trump’s requests for alterations to its academic programs and the removal of Diversity, Equity, and Inclusion (DEI) materials. Consequently, federal funding to Harvard has been suspended, and the university’s tax-exempt status is under threat.

“Considering the current climate, where school boards are attempting to ban books and certain political figures are openly advocating against the dissemination of specific information – be it concerning critical race theory, reproductive health, or climate change – we find it particularly concerning, given our prior work on content moderation, to witness members of both political parties advocating for content moderation on such a large scale,” McKinney observed.

Tags: revenge porn, free speech, online privacy, legislation, First Amendment, digital rights