TAKE IT DOWN: A Bipartisan Act Against Digital Exploitation

Overview:

The TAKE IT DOWN Act is a bipartisan bill passed by Congress on April 28, 2025, that aims to combat non-consensual intimate imagery (NCII), including AI-generated deepfakes.

  • Prohibits non-consensual posting of intimate images of both adults and minors.
  • Requires covered online platforms to remove reported intimate content within 48 hours.
  • Imposes criminal penalties and mandatory restitution on anyone who shares, threatens to share, or facilitates such content.

Since the bill enjoys such uniform support from both sides of the aisle, we'll be doing something a little different today: looking at its supporters, who range from politicians to tech giants, and its critics, who come largely from nonprofits and activist groups.

Support and Endorsement:

The TAKE IT DOWN Act has garnered strong support from lawmakers across the political spectrum: the Senate passed the bill unanimously, and the House passed it 409–2.

Key Supporters:

  • Senator Ted Cruz (R–TX), who co-authored the bill, emphasized the need to protect victims of digital exploitation.
  • Senator Amy Klobuchar (D–MN) highlighted the bipartisan commitment to safeguarding individuals' privacy and dignity.
  • Tech Industry Leaders like Meta and Google recognize the need for standardized protocols to address NCII.
  • Victim Advocacy Groups such as Cyber Civil Rights Initiative have championed the act.

Currently awaiting the President's signature, the TAKE IT DOWN Act reflects a rare consensus both within Congress and among the general public.

Critiques and Concerns:

Despite its widespread support, the TAKE IT DOWN Act has faced scrutiny from civil liberties organizations and digital rights advocates who caution against potential overreach.

Primary Concerns:

  • Groups like the Electronic Frontier Foundation and the Center for Democracy & Technology argue the act's broad language could suppress lawful content, including satire, journalism, and political commentary.
  • The current verification mechanisms for takedown requests raise the possibility of malicious actors exploiting the system to remove legitimate content.
  • The ACLU has pointed out that the bill could lead to an overreach in regulating free speech online.

Only two sitting members of Congress voted against the bill: Representative Thomas Massie (R–KY) and Representative Eric Burlison (R–MO).

While Rep. Burlison hasn’t released a statement at this time, Rep. Massie expressed concerns about potential overreach and unintended consequences of the legislation.

Nibbles Take:

The exploitation of non-consensual intimate imagery didn’t start with AI—back when I was in middle school in the early 2010s, I watched private photos spread among classmates without permission. Today, 48 states have criminalized NCII, but the TAKE IT DOWN Act marks the first time Congress has tackled the issue at the federal level.

How common is it?

  • Among young adults, rates of non-consensual sharing range from 3% (posting images) to 24% (any form of NCII).
  • 1 in 12 people in the U.S. has experienced NCII.
  • 1 in 20 admit to having engaged in it themselves.

While I understand where the critiques are coming from, this is an issue where I find it hard to empathize with their concerns. The language of the bill (conveniently linked here) is straightforward, with very little room for misinterpretation. There’s no vague or hidden language that could reasonably be used to target lawful content—which is likely why the bill earned such strong bipartisan support.

Beyond the legal clarity, this bill introduces real consequences. Getting explicit images removed from platforms like Instagram or X can take weeks or even months. But under this law, the Federal Trade Commission can fine platforms up to $50,000 per day if they fail to take down flagged NCII—giving tech companies strong incentive to act swiftly.

That said, I do share one concern: the bill doesn’t include a clear exception for satire or parody, which could potentially become a tool for censorship in the wrong hands. Then again, I’ve yet to see any real-life examples of AI parodies or memes involving non-consensual intimate imagery.

While there is room for refinement, this law is a major step forward. Congress should consider adding a narrowly tailored “fair use” exception for satire and parody. Still, it’s encouraging to see both parties come together in support of a bill that gives everyone a clear path to have harmful content removed and those responsible held accountable.

One thought I’ll leave you with: at the current pace of technological advancement, we’re almost certain to see more laws like this introduced in the near future.