Act Now to Stop California’s Paternalistic and Privacy-Destroying Social Media Ban

15 hours 26 minutes ago

California lawmakers are fast-tracking A.B. 1709—a sweeping bill that would ban anyone under 16 from using social media and force every user, regardless of age, to verify their identity before accessing social platforms.

That means that under this bill, all Californians would be required to submit highly sensitive government-issued ID or biometric information to private companies simply to participate in the modern public square. In the name of “safety,” this bill would destroy online anonymity, expose sensitive personal data to breach and abuse, and replace parental decision-making with state-mandated censorship.

A.B. 1709 has already passed out of the Assembly Privacy and Judiciary Committees with nearly unanimous support. Its next stop is the Assembly Appropriations Committee, followed by a floor vote—likely within the next week.

Take action

Tell Your Representative to OPPOSE A.B. 1709

California Is About to Set a Dangerous Precedent for Online Censorship

By banning access to social media platforms for young people under 16, California is emulating Australia, where early results show exactly what EFF and other critics predicted: overblocking by platforms, leaving youth without support and even barring adults from access; major spikes in VPN use and other workarounds ranging from clever to desperate; and smaller platforms shutting down rather than attempting costly compliance with such sweeping mandates.

California should not be racing to replicate those failures. After all, when California leads—especially on tech—other states follow. There is no reason for California to lead the nation into an unconstitutional social media ban that destroys privacy and harms youth.

Take action

Tell Your Representative to OPPOSE A.B. 1709

What’s Wrong With A.B. 1709?

Just about everything.

A.B. 1709 weaponizes legitimate parental concerns by using them to hand over even more censorship and surveillance power to the government. Beneath its shiny “protect the children” rhetoric, this bill is misguided, unconstitutional, and deeply harmful to users of all ages.

A.B. 1709 Recklessly Violates Free Speech Rights

The First Amendment protects the right to speak and access information, regardless of age. But by imposing a blanket ban on social media access, A.B. 1709 would cut off lawful speech for millions of California teenagers, while also forcing all users (adults and kids alike) to verify their ages before speaking or accessing information on social media. This will immensely and unconstitutionally chill Californians’ exercise of their First Amendment rights.

These mandates ignore longstanding Supreme Court precedent that protects young people’s speech, and courts have consistently found bans like these unconstitutional. Banning young people entirely from social media is an extreme measure that doesn’t match the actual risks of online engagement. California simply does not have a valid interest in overriding parents’ and young people’s rights to decide for themselves how to use social media.

After all, age-verification technology is far from perfect. A.B. 1709’s reliance on it will disproportionately silence marginalized communities—those whose IDs don’t match their presentation, those with disabilities, trans and gender non-conforming folks, and people of color—who are most likely to be wrongfully denied access by discriminatory systems.

Finally, many people will simply refuse to give up their anonymity in order to access social media. Our right to anonymity has been a cornerstone of free expression since the founding of this country, and a pillar of online safety since the dawn of the internet. This is for good reason: it allows creativity, innovation, and political thought to flourish, and is essential for those who risk retaliation for their speech or associations. A.B. 1709 threatens to destroy it.

A.B. 1709 Needlessly Jeopardizes Everyone’s Privacy

A.B. 1709’s age-verification mandate also creates massive security risks by forcing users to hand over immutable biometric data and government IDs to third-party vendors. These centralized "honeypots" of sensitive information invite identity theft and permanent surveillance rather than delivering actual safety. If we don’t trust tech companies with our private information now, we shouldn’t pass a law that mandates we give them even more of it.

We’ve already seen repeated data breaches involving age- and identity-verification services. Yet A.B. 1709 would require millions more Californians—including the youth this bill claims to protect—to feed their most sensitive data into this growing surveillance ecosystem. 

This is not the answer to online safety.

Take action

Tell Your Representative to OPPOSE A.B. 1709

A.B. 1709 Harms the Youth It Claims to Protect

While framed as a safety measure, this bill serves as a blunt instrument of censorship, severing vital lifelines for California’s young people. Besides being unconstitutional, banning young people from the internet is bad public policy. After all, social media sites are not just sources of entertainment; they provide crucial spaces for young people to explore their identities—whether by creating and sharing art, practicing religion, building community, or engaging in civic life. 

Social science indicates that moderate internet use is a net positive for teens’ development, and negative outcomes are usually due to either lack of access or excessive use. Social media provides essential spaces for civic engagement, identity exploration, and community building—particularly for LGBTQ+ and marginalized youth who may lack support in their physical environments. By replacing access to political news and health resources with state-mandated isolation, A.B. 1709 ignores the calls of young people themselves who favor digital literacy and education over restrictive government control.

Young people have been loud and clear that what they want is access and education—not censorship and control. They even drafted their own digital literacy education bill, A.B. 2071, which is currently before the California legislature! Instead of cutting off vital lifelines, we should support education measures that would arm them (and the adults in their lives) with the knowledge they need to explore online spaces safely.

A.B. 1709 Is Misguided and Won’t Work

In case you needed more reasons to oppose this bill.

  • A.B. 1709 Replaces Parenting With Government Control. Families know there is no one-size-fits-all solution to parenting. But A.B. 1709 imposes one anyway, overriding parental decision-making with a blanket ban. Parents who want to actively guide their children’s online experiences should be empowered, not relegated to the sidelines by a blunt state mandate.
  • A.B. 1709 Strengthens Big Tech Instead of Challenging It. Supporters claim that this bill will rein in the major tech companies, but in fact, steep fines and costly compliance regimes disproportionately harm smaller platforms. Where large corporations can afford to absorb legal risk and shell out for expensive verification systems, smaller forums and emerging platforms cannot. We’ve already seen platforms shut down or geoblock entire states in response to age-gating laws. And when the small platforms shutter, where do all of those users—and their valuable data—go? Straight back to the biggest companies.
  • A.B. 1709 Creates Expensive and Shady Bureaucracy During a Budget Crisis. California is facing a massive deficit, but A.B. 1709 would waste taxpayer dollars to fund a shadowy new "e-Safety Advisory Commission" to enforce this ban and dream up new ways to censor the internet. In addition, lawmakers in support of A.B. 1709 have already admitted that this bill is likely to follow the same path as other recent "child safety" laws that were struck down or blocked in court for First Amendment and privacy reasons. With A.B. 1709, taxpayers are being asked to hand over a blank check for millions in legal fees to defend a law that is unconstitutional on its face.

Californians: Act Now to Kill This Bill

A.B. 1709 is not inevitable, no matter what some supporters want you to believe. But we need to act now to support our youth and their right to participate in online public life.

Your representatives could vote on A.B. 1709 as soon as next week. If you’re a Californian, email your legislators now and tell them to vote NO on A.B. 1709.

Take action

Tell Your Representative to OPPOSE A.B. 1709

Molly Buckley

EFF Challenges Secrecy In Eastern District of Texas Patent Case 

15 hours 40 minutes ago

Clinic students Emily Ko and Zoe Lee of the Technology Law and Policy Clinic at the NYU School of Law were the principal authors of this post.

Courts are not private forums for business disputes. They are public institutions, and their records belong to the public. But too often, courts forget that and allow for massive over-sealing, especially in patent cases. 

EFF recently discovered another case of this in the Eastern District of Texas, where key court filings about Wi-Fi technology used by billions of people every day were hidden entirely from public view. The public could not see the parties’ arguments about patent ownership, the plaintiff’s standing in court, or licensing obligations tied to standardized technologies.

EFF Seeks to Uncover Sealed Information in Wilus 

The case, Wilus Institute of Standards and Technology Inc. v. HP Inc., highlights a recurring transparency problem in patent litigation.

Wilus claims to own standard essential patents (SEPs) related to Wi-Fi 6 — technology embedded in everyday devices. Wilus sued Samsung and HP for patent infringement. HP argued that Wilus failed to offer licenses on Fair, Reasonable, and Non-Discriminatory (FRAND) terms, which are required to prevent SEP holders from exploiting their position by blocking fair access to widely used technologies.

In reviewing the docket, EFF found that many filings were improperly sealed under a lenient protective order, without the specific justification required in a proper motion to seal. Because there is a presumption of public access to court filings, litigants must file a motion to seal and demonstrate compelling reasons for secrecy. This typically requires a document-by-document and line-by-line justification.

In the Eastern District of Texas, that standard is often not enforced. Instead, district judges allow litigants to hide information using boilerplate justifications in a protective order, without explaining why specific documents or specific parts of a document should be hidden.

In Wilus, two sets of documents stood out. 

First, Samsung moved to dismiss the case, arguing Wilus may not have validly obtained the patents — raising doubts about whether it had standing to sue at all. Wilus’s opposition to that motion was filed completely under seal, with no redacted public version available. That briefing likely addresses the patent assignment agreements that underpin Wilus’s business model — information the public has an interest in, especially in cases involving non-practicing entities (NPEs) like Wilus.

Second, filings related to HP’s supplemental briefing on FRAND obligations were also sealed in full, with no redacted versions available to the public. Whether Wilus is bound by FRAND has implications far beyond this case. Companies subject to FRAND must adhere to reasonable licensing terms, while those that are not can charge significantly higher licensing fees. 

In both instances, the public was shut out of arguments that bear directly on how essential technologies are licensed and controlled.

EFF Pushes For Public Access 

EFF raised these concerns with Wilus’s counsel and pressed for public access to the sealed records. Wilus ultimately agreed to file redacted versions of several documents, now available as Document Numbers 387, 388, and 389.

That result is progress, but it shouldn’t require outside intervention. Public versions of court filings should be the default, not something negotiated after outside pressure.

Even now, these newly filed redacted versions conceal significant portions of the parties’ arguments. The public still cannot fully see how this case about technologies that are used every day is being litigated. 

Why Public Access Matters 

Sealing court records is designed to be rare. To overcome the presumption of public access, litigants must show compelling reasons for secrecy. That’s because open courts are a distinguishing feature of American democracy. The public, journalists, and policymakers all have the right to observe proceedings and hold both government actors and private litigants accountable. 

Some filings do contain trade secrets or commercially sensitive information. But that doesn’t mean litigants should be able to hide information without explaining why. The Eastern District of Texas, however, allows litigants to bypass that requirement.

EFF confronted this very same issue in its attempt to intervene in another Eastern District of Texas case, Entropic v. Charter. The same pattern appeared again in Wilus: instead of narrowly tailored redactions supported by specific reasoning, filings were withheld wholesale. 

Courts Must Enforce the Standard

Courts, not third parties, are responsible for protecting the public’s right of access. 

That means enforcing the “compelling reasons” standard as a matter of course. Parties seeking to seal sensitive information should be required to justify each proposed redaction. The Eastern District of Texas’ current approach falls short. By allowing broad, unsupported sealing through expansive protective orders, it effectively treats judicial records as confidential by default.

Heavy caseloads don’t change the rule. Administrative burden cannot override constitutional and common law rights. Judicial records are presumptively public. Courts, including the Eastern District of Texas, should enforce that presumption. 

Other Federal Courts Get It Right 

The Eastern District of Texas is an outlier. In the Northern District of California, judges routinely reject overbroad sealing requests. As Judge Chhabria’s Civil Standing Order explains: 

[M]otions to seal . . . are almost always without merit. . . . Federal courts are paid for by the public, and the public has the right to inspect court records, subject only to narrow exceptions. 

The filing party must make a specific showing explaining why each document that it seeks to seal may justifiably be sealed . . . Generic and vague references to “competitive harm” are almost always insufficient justification for sealing. 

This approach reflects the law: sealing must be narrowly tailored and specifically justified.

Court Transparency is Fundamental 

At first glance, secrecy in patent litigation may not seem alarming. But it signals a broader erosion of transparency. The widespread use of expansive protective orders in the Eastern District of Texas is a practice that risks spreading if courts do not enforce the law. 

These practices allow private parties to obscure information about disputes involving technologies that shape modern life. That undermines a core principle of a free society: transparency regarding the actions of powerful actors. 

Courts are not private forums for business disputes. They are public institutions, and their records belong to the public. 

So long as these practices continue, EFF will keep advocating for transparency and working to vindicate the public’s right to access court records.

Betty Gedlu

California Coastal Community Must Reject CBP's AI-Powered Surveillance Tower

18 hours 33 minutes ago

Customs and Border Protection (CBP) is seeking permission from the California city of San Clemente to install an Anduril Industries surveillance tower on a cliff that would allow for constant monitoring of entire coastal neighborhoods. 

The proposed tower is Anduril's Sentry, part of the Autonomous Surveillance Tower (AST) program. While CBP says it will primarily monitor the coastline for boats carrying migrants, it will actually be installed 1.5 miles inland, overlooking the bulk of the 62,000-resident city. By CBP's own public statement, the system–which combines video, radar, and computer vision–is "constantly scanning" for movement and identifying and tracking objects an AI algorithm decides are of interest. Depending on the model–the photos provided by CBP indicate it is a long-range maritime model–the camera could see as far as nine miles, covering the entire city and potentially reaching neighboring Dana Point.

"The AST utilize advanced computer vision algorithms to autonomously detect, identify, and track items of interest (IoI) as they transit through the towers field of view," CBP writes in a privacy threshold analysis. "The system can determine if an IoI is a human, animal, or vehicle without operator intervention. The system then generates and transmits an alert to operators with the location and images of the IoI for adjudication and response." 

On April 28, local residents and Oakland Privacy, a privacy- and anti-surveillance-focused citizens’ coalition, are holding a town hall to inform the public about the dangers of this technology. We urge people to attend to better understand what's at stake. 

"The planned deployment of an Anduril tower along a heavily used Orange County coastline 75 miles from the border demonstrates that the militarization of the border region is rapidly moving northwards and across the entire state," writes Oakland Privacy. 

City officials raised concerns about resident privacy and proposed that a lease agreement include a prohibition on surveilling neighborhoods. CBP rejected that proposal, saying instead that it would configure the tower to "avoid" scanning residential neighborhoods, but that the system would remain capable of tracking human beings in residential areas. According to the staff report:

In response to privacy concerns, CBP has stated the system would be configured to avoid scanning residential areas that fall into the scan viewshed, focusing the system on the marine environment. CBP has maintained the purpose of the system is specifically maritime surveillance, and the system would be singularly focused on offshore activities. However, there may be an instance in which there is an active smuggling event, detected by the system at sea, in which the subsequent smuggling event traverses through the residential neighborhoods. In such a case, the system may continue to track and monitor. To restrict this functionality would be contrary to the spirit and intent of the deployment. Therefore, they cannot make such a contractual obligation.

The Anduril towers retain a variety of data, including imagery and event records.

The proposed Anduril surveillance tower. Source: City of San Clemente

"The AST capture and retain imagery which occurs in plan view of the tower sites and is stored as an individual event with a unique event identified allowing replay of the event for further investigation or dismissal based on activity occurring," according to the private threshold analysis.

The document indicates a potential 30-day retention period for imagery, but then contradicts itself by saying that data will be held indefinitely to train algorithms: "AST will also be maintaining learning training data, these records should not be deleted." This means that taxpayers would be paying for the privilege of having their data turned into fuel for Anduril's product.

In 2020, CBP said it would work with the National Archives and Records Administration (NARA) to develop a retention schedule for training data (i.e., a timeline for deletion). However, when EFF filed a Freedom of Information Act (FOIA) request with NARA, the agency said there were no records of these discussions. Likewise, CBP has not provided records in response to the FOIA request EFF filed with it seeking the same records.

Anduril Maritime Sentry in San Diego, where the border fence meets the ocean.

This would not be the first CBP tower placed along the coastline in California. EFF identified one in Del Mar, about 30 miles from the border, and another in San Diego County where the border fence meets the Pacific Ocean. CBP has also applied to place towers–although not necessarily the Anduril model–in or near several other coastal locations: Gaviota State Park, Refugio State Park, Vandenberg Air Force Base, Piedras Blancas and Point Vicente. The California coastline isn’t the only coastline dotted with surveillance towers. The Migrant Rights Network has also documented numerous Anduril towers along the southeast coast of England. Where the San Clemente tower would differ is that there is a substantial population between the tower and the beach, and because it's a 360-degree system, it can watch neighborhoods even further from the coast. 

However, this won't be the first time an Anduril tower has been placed next to a community. EFF has documented numerous Anduril towers in public parks along the Rio Grande in Laredo and Roma, Texas. In Mission, Texas, an Anduril tower was placed outside an RV park: the tower could not even see the border without capturing data from the community. Because the AI can swivel the cameras 360 degrees, two churches were within the "viewshed" of that tower.

Click here to view EFF's ongoing map of CBP surveillance towers.

Many border surveillance towers are placed on city or county property, requiring a lease to be approved by the local governing body–as is the case with San Clemente. In 2024, EFF and Imperial Valley Equity and Justice organized an effort to fight the renewal of a Border Patrol lease for a tower next to a public park. The coalition lost narrowly after a recall election ousted two officials who were critical of the lease.

CBP is rapidly increasing the number of towers at the border and beyond, recently announcing the potential to install 1,500 more towers in the next few years–more than tripling what we've documented so far–at a cost of more than $400 million to the public for maintenance alone. This is despite more than 20 years of government reports that have documented how tower-based systems are ineffective and wasteful.

It's time to fight back. 

Dave Maass

EFF to 9th Circuit (Again): App Stores Shouldn’t Be Liable for Processing Payments for User Content

1 day 16 hours ago

EFF filed an amicus brief for the second time in the U.S. Court of Appeals for the Ninth Circuit, arguing that allowing cases against the Apple, Google, and Facebook app stores to proceed could lead to greater censorship of users’ online speech.

Our brief argues that the app stores should not lose Section 230 immunity for hosting “social casino” apps just because they process payments for virtual chips within those apps. Otherwise, all platforms that facilitate financial transactions for online content—beyond app stores and the apps and games they distribute—would be forced to censor user content to mitigate their legal exposure.

Social casino apps are online games where users can buy virtual chips with real money but can’t ever cash out their winnings. The three cases against Apple, Google, and Facebook were brought by plaintiffs who spent large sums of money on virtual chips and even became addicted to these games. The plaintiffs argue that social casino apps violate various state gambling laws.

At issue on appeal is the part of Section 230 that provides immunity to online platforms when they are sued for harmful content created by others—in this case, the social casino apps that plaintiffs downloaded from the various app stores and the virtual chips they bought within the apps.

Section 230 is the foundational law that has, since 1996, created legal breathing room for internet intermediaries (and their users) to publish third-party content. Online speech is largely mediated by these private companies, allowing all of us to speak, access information, and engage in commerce online, without requiring that we have loads of money or technical skills.

The lower court hearing the case ruled that the companies do not have Section 230 immunity because they allow the social casino apps to use the platforms’ payment processing services for the in-app purchasing of virtual chips.

However, in our brief we urged the Ninth Circuit to reverse the district court and hold that Section 230 does apply to the app stores, even when they process payments for virtual chips within the social casino apps. The app stores would undeniably have Section 230 immunity if sued for simply hosting the allegedly illegal social casino apps in their respective stores. Congress made no distinction—and the court shouldn’t recognize one—between hosting third-party content and processing payments for the same third-party content. Both are editorial choices of the platforms that are protected by Section 230.

We also argued that a rule that exposes internet intermediaries to potential liability for facilitating a financial transaction related to unlawful user content would have huge implications beyond the app stores. All platforms that facilitate financial transactions for third-party content would be forced to censor any user speech that may in any way risk legal exposure for the platform. This would harm the open internet—the unique ability of anyone with an internet connection to communicate with others around the world cheaply, easily, and quickly.

The plaintiffs argue that the app stores could preserve their Section 230 immunity by simply refusing to process in-app purchases of virtual chips. But the plaintiffs’ position fails to recognize that other platforms don’t have such a choice. Etsy, for example, facilitates purchases of virtual art, while Patreon enables artists to be supported by memberships. Platforms like these would lose Section 230 immunity and be exposed to potential liability simply because they processed payments for user content that a plaintiff argues is illegal. That outcome would threaten the entire business models of these services, ultimately harming users’ ability to share and access online speech.

The app stores should be protected by Section 230—a law that protects Americans’ freedom of expression online by protecting the intermediaries we all rely on—irrespective of their role as payment processors.

Sophia Cope

Speaking Freely: Lizzie O'Shea

1 day 18 hours ago

Lizzie O’Shea is an Australian lawyer, author, and the founder and chair of Digital Rights Watch, which advocates for freedom, fairness, and fundamental rights in the digital age. She sits on the board of Blueprint for Free Speech, and in 2019 was named a Human Rights Hero by Access Now.

Interviewer: Jillian York

Jillian York: Hi, good morning, or rather, good evening for you.

Lizzie O’Shea: Hi Jillian, it's great to be here. 

JY: I'm going to start with asking a question that I try to kick off every interview with, which is, what does free speech or free expression mean to you?

LO: Yes, so Digital Rights Watch, which is the organization I founded and I chair, is focused on fundamental rights and freedoms in the online world. And so freedom of speech is obviously a big part of that. It's obviously a very vexed right, partly because of its heritage and interpretation in places like the United States, which sometimes sits in contrast culturally to other parts of the world. Certainly, if you ask Australians about it, they do not want to have a culture of free speech that looks like the United States. 

Australians understand that freedom of expression is a really important component of democracy. So one of my jobs is to make the claim that curtailing freedom of speech, including in online settings, can have a real impact on democracy. And I think that's fundamentally true, and you don't want to wait until it's too late to be able to make that argument, to ensure that the policies are in place to protect that freedom. So I think it's a really important freedom. It's got a vexed history and expression in the modern online world, but many people still instinctively understand that those in power see speech as something that is important to challenging their authority, and so it can be a really important place to fight back and protect democracy and other rights from being impacted by those who hold power at the moment.

JY: I want to ask you about your book. You're a critic of techno-utopianism. Your book, Future Histories, came out right before the pandemic, if I recall, and it looks to the past for lessons for our technological and cultural future. I really appreciated your take on Elon Musk. So I guess what I want to ask you about is two things. What, in your view, has changed since you wrote it?

LO: Yeah, that's a really interesting question. I must admit, I was thinking about it the other day whether some of what I wrote really holds up. And I think the fundamentals are still true, in the sense that I still believe that a lot of the discussions and debates we have about technology today are presented as fundamentally novel when they are very old, ancient discussions and debates about how power should be distributed through society, and how technology enables that kind of power distribution or works against it, right? So I feel like that fundamental analysis, whatever contribution to the field, is still valid, of course. In some ways though, those technical systems have become more opaque, like the artificial intelligence industry and how that's been built off the back of years of exploitation of personal information and centralization of power in technology companies. Those things have become more powerful and concentrated and difficult to understand—if you're not deep in the weeds—beyond an instinctive understanding that something's going a bit wrong, perhaps. 

So in some ways those trends have exacerbated things in ways that I think many other contributors, yourself included, have brought a really important set of analyses to these discussions. More generally, though, one of my fundamental understandings of how I frame some of these arguments is that there are two sources of power, right? Government power and corporate power that really shape how the online world is developing. And post-pandemic, there's a lot greater skepticism, criticism, and outright distrust of government authorities seeking to do work to protect people from some of those corporate excesses. Now that's obviously something that is much more part of American culture as opposed to European culture, and in Australia, we sit somewhere in between. But that skepticism and that mistrust of institutions, I don't know that that serves us well. I'm somebody who does treat with criticism policies put forward by government, because I think it's our job as civil society people, as people part of a social movement that want to have rights at the center of our society, to be critical of those in power and make sure that they're being held accountable. But that mistrust has fundamentally shifted how possible it is to do that in an effective way. And I think that poses real challenges for people who want to see government policy look different to how it is and how you can bring people into a sense of trust, investing in a democratic rights-based society, rather than rejection and cynicism being the overriding kind of factor in how they shape their political arguments. Which is a real challenge, I think, for people like us who rely on some of that mistrust and skepticism in order to fuel the fire of some of these campaigns, but do want to see people still invested in democratic processes.

JY: Yeah, absolutely. So speaking of policies, you're in Australia, where the government's enacted some of the strictest social media laws for minors in the world, I would say. In one of our most recent interviews, which was with Jacob Mchangama, we talked about how the comparison of social media to Big Tobacco is spreading, and this idea that there's no utility in social media for minors, that it's a net harm. I'm curious what your thoughts are on that, and then we can dive into the more nitty gritty bits of the Australian law.

LO: I think that's a great place to start, because the overwhelming sense in how this policy was presented to the public in Australia is that this is a very dangerous place for young people to be, and that desperate times call for desperate measures. “We don't have time to fix these spaces. We need to just restrict access.” It's described as a delay. Many, including me, describe it as a ban for under 16 year olds. So what has been very interesting in this discussion is who's been left out of the conversation. And if you talk to young people—and there are many organizations working with young people—and you talk to them about what they use social media for, they often say that they wish adults understood that they used it for different reasons, or they're scared about different things than what adults think they might be scared of. And so that kind of fundamental failure of communication, which I suppose is not a surprise, when these people don't actually have the power to vote, have the power to do things a normal legal person would do, is somewhat unsurprising.

But when you're making policy about these people, that can be quite impactful, it can have very detrimental impacts. And if you take a human rights approach, that is your job to think about the negative impact on human rights, and what you're going to do about it, it's not really good enough. And this has been an experiment that Australia has led on, very much, looking for headlines, for a perception of boldness. Some of that claim is legitimate in the sense that they want to be seen to be taking action, and a lot of people feel very concerned that governments aren't prepared to take action against big tech companies. So, some of that is a valid feeling. But I think in this context, we lose so much when we don't actually listen to the people affected, and listen to the myriad ways in which they use social media. Some things they're concerned about, some things they find harmful, some things they're really sick of. But there's so many ways in which they use it to find a sense of community, to find a sense of empowerment, to talk to people they would never otherwise be able to access, sometimes because they're isolated, socially, geographically, whatever it may be, and it's so disappointing to me that that kind of part of the conversation was not had as we debated this particular policy.

JY:  So, what do you think some of the harms are for youth who can't access social media? What are young people losing out on? Who is harmed by these laws?

LO:  It's a great question. When we do a human rights analysis, we have to think about who's harmed by a particular policy, even if we think it's overall justified on a utilitarian ground, say it's better off for everyone overall who's harmed, is a really important question, and so much of that has been absent from this discussion. So it's not just me. It's like hundreds and hundreds of experts in Australia and organizations that represent many, many people, have provided commentary and input into this process and expressed many concerns about this policy, and there's a few different ways in which people are harmed. 

So the first thing, of course, is that if you require that age verification occur, you're engaging in a privacy violation for many people, there are cyber security risks with collecting that kind of information. There's deterrent effects and the like. Now that may not concern you, or you may think that's a justifiable kind of infringement on privacy rights, but I think that's worth mentioning. It is quite significant, especially in a world in which age verification doesn't tend to work very well on any measure. There are very serious cybersecurity risks that have been associated with age verification processes and the like. So it's certainly not nothing. The other set of people that are harmed are particularly vulnerable people. 

There's a variety of people who are still accessing social media. So it looks like about seven in ten of young people on the early data who had social media accounts are still accessing social media now. Now these are early figures, so there's a lot to be said for looking at how this works in a year's time, for example. But I think one of the interesting things to think about is when those people, young people, who are on still on social media—in breach of this ban or in defiance of this ban, however you want to put it—might need to engage in help seeking behavior, there may be a deterrent there, because they know that the law is they're not supposed to be accessing social media. So that is a selection of young people that we're particularly concerned about. And then, more generally, of course, there's a whole cohort of people who are particularly vulnerable. Maybe they're LGBTIQ, maybe they're in an isolated geographic area, far away from a city. Maybe they're experiencing harm at home and have no one to talk to about it. There's all sorts of ways in which young people use social media to manage their own challenges, harms, difficulties, and very effectively. They find people to talk to about their problems when other people may not be available to them. And that is an issue that is hard to map, right? We know that there's been an increase in calls to things like Kids Helpline, which does what it says on the tin. So those kinds of things have seen an increase. But I think that is something that is harder to map, but still very, very important, and may result in people going to other parts of the internet as well to seek help in different ways that might also not be very safe for them. 

More generally it's worth remembering that if platforms can say with some confidence, from a policy perspective, that young people are no longer on their platform, there is less incentive to design for them as well, which is another associated problem. Now, it remains unclear as to how platforms are dealing with that issue, especially in light of the most recent data, which suggests that a lot of young people remain on the platforms. But that's an issue. Do we then allow platforms to no longer design in a way that respects the autonomy of young people, the safety of them, their security and the like, because they have special needs and interests and all those sorts of things. So that's another problem. There's lots of operational problems. There's lots of conceptual ones. I don't think many of these have been considered or accounted for in the process.

JY: Absolutely, those are the same things that worry me as well. Okay, let's talk about the campaign. So what has the pushback to this, to the law, looked like, and what changes were you calling for?

LO: Well, if I can Jillian, what I might start with is where the push came from. Because I think that's quite instructive. One of the key sets of institutions that were pushing for this ban were mainstream news organizations, and we're learning a bit more about this over time, but the Murdoch press and other large news organizations in Australia—Australia has one of the most concentrated media environments in the world—were pushing for this ban. There was a petition run on one of their websites that was gathering tens of thousands of signatures. There were also others. Then there was a lot of advocacy towards specific kinds of political leaders in the country, and then a kind of competitive race to see who could be the most extreme in terms of putting forward a policy. But it's certainly the case that this very powerful set of actors in our democracy, at least, were a key driver of this campaign for a social media ban for young people. Now, I think there's a sense of moralism about it, a sense of desperation about it, tapping into genuine fears from parents, you know, and the like. And you know, The Anxious Generation, the book by Jonathan Haidt, has obviously been very influential with many people, but the research is still a bit unclear, right? About what this all means. And lots and lots of researchers will tell you that that book isn't making a reasonable argument based on the data that we have, right? So, it's a very febrile environment for this kind of discussion, and those kinds of institutional actors were incredibly important in getting this on the political agenda.

We then had an electoral campaign, definitely a vision that conservative politics would push for this. So labor politics, you know, center left politics pushed for it, and won the election, right? Not on this issue alone, but it was in that environment in which this policy was developed. There was a very small amount of time for submissions, for policy discussion about it. Initially, the government had said they weren't going to do it because they were concerned that the age verification technology wasn't up to scratch. That changed very, very quickly, and then the policy was introduced. I think it was in six days, some very small amount of time. So many different child rights organizations, academics, institutions, filed policy submissions to discuss this, did a lot of advocacy work, but the passage of time between the announcement of the proposal and the passage of the legislation was extremely short, and what followed has been a year of discussion around whether this was a good thing, a year of testing age verification technology, often finding it wanting, but setting up a set of preferred providers that platforms could use in order to satisfy the legislative requirements. A lot of lobbying from platforms as to whether they're in or out. There was a big discussion about whether YouTube should be in or out. And a lot of back room dealing between relevant politicians and big tech companies. So the whole thing is very unseemly, and we're now in the world where it's been introduced, a lot of failure for it to actually operationalize now. Now, it may be that that changes over time, but that's quite telling, right?

It's telling also because I don't think all parents particularly like this proposal either. It's very popular, but there's certainly a section of parents that are facilitating their children's continued access to social media. And I think that's interesting in itself. Part of what it is—something we were talking about actually earlier in our conversation—people don't like governments telling them how to parent their children. That has taken some very negative expressions in parts of the world, you know, resistance to things like the availability of medicine and treatment for kids who might be trans. But in this context, it's like, “I'm not going to let the government tell me that I can't let my kid on social media.” So, I don't think it's clarified much in the debate in terms of understanding how platforms behave towards young people, what they could do better, of which there's many things, and then how we get to the world in which children are able to be online but better protected. I'm not sure this proposal has contributed to that. It's really muddied the waters about what the government is capable of doing, what it should be doing, and what platforms, you know, what should be the process that platforms go through when thinking about designing for children.

JY: That's such a great answer. Thank you. And actually, that brings me to another question, which is so in your ideal world, taking this law, being able to throw it out the window if you want…What would you what would you want to see, not just from social media, but from from the platforms, from governments, both for the sake of youth, but also, you know, for all of us.

LO: I think that is the exact right question to be asking, and it's a good time that we've managed to talk now, because actually, in the interim, what's come out is the first draft that we've got of a Children's Online Privacy Code. And to me, that is really revealing, because it is designed to apply to all services that might be accessed by children, like all online services, and it has a really kind of sophisticated understanding of what consent might look like, where you need help with getting consent, when it comes to parents or adults that are supportive in your life. And then at different ages that might look a bit different, like you might get notified if consent has been refused by your caregiver, for example, if you've wanted to do something. So there's a more sophisticated understanding of what consent looks like, and a range of different restrictions on when personal information can be collected and used.

It's got things in it that I don't particularly like. I would like to see a prohibition on the commercial exploitation of children's personal information, because I don't think any targeted advertising is justified, for example. And I think that kind of measure of that commercial exploitation is hugely problematic. I think we have to think about what deletion looks like. I think you should have a right to deletion, for example. But you know, we also have to respect that children grow into young adults, that making decisions at 16 might look quite different to when they're three. So what you do with their personal information, how they carry that forward into their adult lives might be different depending on the age and so that kind of privacy reform actually is the fundamental thing. I’m sure your listeners don’t need reminding of this.

That is my favorite right. Because I think restricting access to personal information is a rights-respecting way to improve the online environment for everybody. And what I think is really interesting about this Children's Online Privacy Code that is still in draft form, is that all these things should be available to adults as well. Like adults in Australia don't have the right to deletion at the moment. We don't have a right to comprehensively know where our information has traveled and to delete it. You know, look, we have fewer rights than Californians, for example, certainly fewer rights than Europeans. What this code has highlighted is that, in fact, all people should be enjoying this kind of protection that comes from restricting access and use of personal information and giving people more control over that, because that personal information is the raw material of the business model, and it leads to a very loose approach to its collection and leads to many negative downstream consequences, I would argue, including business models that prioritize engagement, that prioritize and monetize polarizing, extremist content, mis- and disinformation.

I think we could have a real crack at trying to ameliorate some of these problems, or certainly reduce their impact, if we started with that fundamental raw material that fuels the business model. So that, I think, is a really telling alternative that we're now considering as a society, and I like to think that people will come to an understanding that you can find ways to elevate and improve the online world, particularly for young people, without restricting their access to that online world in a way that is empowering for them, rather than patronizing or infantilizing.

JY: I completely agree, and I think it's funny that people often see privacy and expression at odds with each other, when actually I think privacy enhances expression.

LO: I think it makes spaces safer, makes people freer to be able to say what they think, but also to have those discussions in ways that are more meaningful, that can help find connections, even across divisions, rather than exploiting that division for profit, which is so much of the current business model.

JY: Are there any other things happening in Australia that EFF’s readers should know about?

LO: Well, we're about to go through the second tranche of our privacy reform. So we did engage in our first tranche of privacy reform. We have a Privacy Act that was passed in 1988 and hasn't been meaningfully updated in the decades since. So we got a few small changes, which included the enabling provision to allow a Children's Online Privacy Code to be developed, which is why we're getting the benefit of that now. But we're about to see a range of different privacy laws introduced. What the content is, of course, will be the subject of a lot of discussion and debate. We're going to argue for the right to deletion, the right to a private right of action for privacy harms, better processes for consent, and improved definitions of personal information to really bring Australia in line with lots of other similar jurisdictions around the world. And we're really keen to advance that for all the reasons that I just mentioned. 

The other big change that I think is coming is that, you know, which is perhaps more on topic for this conversation, is that we've had this online safety policy that is constantly being touted as the first in the world, and world leading and this and that, and it's really been a very flawed and vexed process working out how we could develop codes that were designed to govern how certain services were provided in the digital age, in line with safety expectations. There’s been a lot of focus on complaints and take down notices and things like that, there's obviously been that vexed litigation with Elon Musk, trying to get him to take down a particular video, and ultimately, the failure of our regulators to succeed on that front, I think, probably correctly, because giving a regulator in Australia the right to take down content from anywhere in the world seems to me a very concerning development, if that was allowed to proceed. So this history of online safety, it's been a big part of successive Australian governments’ identities. We're about to see the introduction of a digital duty of care. So that's certainly the stated position of government. What that looks like in practice, I think will be really interesting. 

I like the idea of a digital duty of care. I like the idea of a flexible, overarching concept. What the content is, though, will be really important. So what I would like to see is proactive disclosure of harm or risk of harm, and then actions taken by platforms to do it. So more onus on platforms to provide transparency about what they know about how their online spaces are being used and what might be harmful. I mean, there's a question around whether we'll see an introduction of a civil right, something similar following from the litigation that’s taken place in California and New Mexico, and that is going to be leading, really, multiple claims that are being made all around the country in the US, against companies like Meta and Google and other social media platforms. So I think there may be a flow-on effect from that, as in, it might turn into a civil right to sue for failure to meet the requirements of digital duty of care. But I'm really interested to hear from any of your listeners, or anyone who's working in this space about what the content should be of that digital duty of care, because there's obviously limits as well. Like it can be not rights-respecting, and we're interested in making sure that's not the case. And I think there's probably a range in which it could be more protective or less and working out how to do that—there are examples from around the world, but that's going to be something I reckon we could use help with that we want to get right and make use of that opportunity as best we can. 

The last thing I'll say, I suppose, is that our government is always looking for ways to deal with mis- and disinformation, and that comes with real risks of censorship. And so, I think there's a strong argument to focus on privacy reform, because it's a rights-respecting reform as an antidote to mis- and disinformation. Greater transparency on platforms—I think about how they prioritize content in your feed, for example, can be useful, or reporting on what content is really popular, like ad libraries. There's all sorts of ways in which we can introduce greater transparency, but I do worry that as governments around the world feel emboldened to do so, they might look for more ways to to remove content, to be more involved in content moderation policies that have the real potential to to become censorship if we're not careful. So that's the other abiding concern I've got about Australian policy at the moment.

JY: One of my big concerns now, too, is all of these authoritarian governments watching Australia, watching the UK, and enacting laws that are modeled on, but much more severe than, the ones in those places. Do you share that concern?

LO:  Yeah. I mean, the other way in which it's come about in Australia, certainly like anti-doxxing laws, which, at the moment, we've got laws on our books that came about attached to a privacy reform. I'm hesitant to say it's a privacy reform, because it's not, but it's very egregious. It's a criminal offense to disclose basic details about someone online, if it's done with a set of intents and the like, about their particular status as a group, and that, I think you could drive a truck through in terms of how you could interpret it, right? There's such a wide variance, and bringing a proceeding against someone like prosecuting them for that is such a life-altering experience. And I think if governments did want to focus on particular activists. And I'm particularly thinking of, you know, the way it was framed was certainly around the discussion and debate about the genocide unfolding in Gaza. Like, I think, particularly about that movement, they're very vulnerable to crackdowns by government for speech that is perceived to be unacceptable by government.

And I'm not even trying to debate it. I think there's certainly antisemitic commentary occurring in Australia, and indeed, there have been some people, like genuine Nazis arrested, which, you know is, is a different kettle of fish. But I think progressive movements, not just the defense of Palestine movement, but lots of other progressive movements are a particular risk of those kinds of laws. But I think mis- and disinformation is the other vehicle. So we have to be very careful about giving platforms, giving regulators both the mandate and then the authority to police content based on particular criteria. And often what they talk about, or they talked about in proposals that have now died in Australia, were things like public health issues. So, you know, that's a particular consent that drives a lot of people who are very concerned about the years of Covid up the wall. So it inspires a lot of reaction to it. But I think there's lots of ways in which undermining political stability is put forward as a proposal, as a justification for removing content. That's just so broad that I think you could really start to see censorship. It's just not good enough. I just don't think we can tolerate those kinds of proposals. I like to think that's not the case in Australia, but I just think there's a tendency among governments now to see this as an opportunity. It's an anxiety lots people have about mis- and disinformation, and so they draw on that as a mandate to act. And I think we should be very cautious about those proposals.

JY: Definitely. Okay, I’m going to ask the final question that I ask everyone. Who is your free speech or free expression hero? Or someone from history, or even someone personal who has influenced you?

LO: There's a chapter in my book where I talk about the Paris Commune, which happened a long time ago, but I still think it's a really interesting experiment in applied democracy. This is when a bunch of Communards took over Paris and started doing things differently in a variety of different ways. Gustave Courbet is this artist who's leading the artist collective during this time, and I always found him entertaining because he would paint things that weren't expected. So, often, nudes that were considered quite scandalous because they were everyday women who weren't angelic or Madonna-esque in their style, but he's got a very famous painting of female genitalia—

JY: Yes! Facebook took it down! [laughs]

LO: Exactly. It's always been a very confrontational image. People find it sexist sometimes, because they think it's very pornographic. I understood it differently. It's called "The Origin of the World," so I sort of see it as a force of giving life. Interpret it however you like, the point is that Facebook couldn't tolerate it and took it down. There's a nice little bit of litigation where a schoolteacher had a page where he was teaching people that art, and Facebook could just not tolerate this art. In my mind, it was so telling that a Communard from hundreds of years before was basically revealing, as an expert troll almost, how conservatives—someone like Mark Zuckerberg—view, and how he shapes these platforms. And how they subtly reshape what we think is appropriate, what we think is free, what we think is within the realms of good society. And that you really do need artists telling you that that might not be true, and they're some of the most effective actors at revealing that about those who hold power, like reshaping our understanding about what acceptable debate is, and how we can show power to be exercised in our online world, where in other circumstances it might be quite okay.

I love that story, and I love the Communards. There's a lot of beautiful writing about them; there's a beautiful book called Communal Luxury that talks about all the different ways they were trying to reimagine their society and do it collectively, from things like having the first union of women to making the design of clothes and furniture look different. I want to see a world in which people take that power, in both the micro and the macro, and start to reshape their society in really creative ways. I feel like digital technology has the real capability of allowing that to occur, and I want to revive that sense of concrete democracy rather than just delegated democracy, or deferred representative democracy where you tell someone else what you want but don't have a say in a lot of decisions. And so I think we're in a world in which that really grassroots idea of democracy could actually occur with the assistance of digital technology. It's a matter of working out how to bring it into being. And that's what I see this movement as doing. People with digital rights as their primary concern are trying to recreate that world, so that there are more communal, collective spaces for discussing what the future should look like.

Jillian C. York

📁 How ICE Got My Data | EFFector 38.8

2 days 20 hours ago

When we use the internet, we're entrusting tech companies with some of our most private information. These companies have promised they'll keep our data safe. But what happens when the government comes knocking at their doors? In our latest EFFector newsletter, we hear from an EFF client whose data was given to ICE after Google broke its promise to him.

JOIN OUR NEWSLETTER

For over 35 years, EFFector has been your guide to understanding the intersection of technology, civil liberties, and the law. This latest issue covers the ongoing fight to reform NSA surveillance, the many attempts to censor 3D printing, and the cost of Google's broken promise to its users.

Prefer to listen in? EFFector is now available on all major podcast platforms. This time, we're chatting with EFF Senior Staff Attorney F. Mario Trujillo about how state attorneys general can hold Google accountable for failing to protect users targeted by the government. You can find the episode and subscribe on your podcast platform of choice:


Want to help us hold companies accountable? Sign up for EFF's EFFector newsletter for updates, ways to take action, and new merch drops. You can also fuel the fight for privacy and free speech online when you support EFF today!

Christian Romero

EFF Sues DHS and ICE For Records on Subpoenas Seeking to Unmask Online Critics

2 days 22 hours ago
Agencies Ignored EFF’s Public-Records Requests Regarding Unlawful Efforts to Locate People Who Criticized the Government or Attended Protests.

SAN FRANCISCO – The Electronic Frontier Foundation (EFF) sued the Department of Homeland Security (DHS) and Immigration and Customs Enforcement (ICE) today demanding public records about their use of administrative subpoenas to try to identify their online critics.

Court records and news reports show that in the past year, DHS has used administrative subpoenas to unmask or locate people who have documented ICE's activities in their community, criticized the government, or attended protests. The subpoenas are sent to technology companies to demand information about internet users who are often engaged in protected First Amendment activity.

These subpoenas are dangerous because they don’t require judges’ approval. But they are also unlawful, and the government knows it. When a few users challenged them in court with the help of American Civil Liberties Union affiliates in Northern California and Pennsylvania, DHS withdrew them rather than waiting for a decision.

DHS and ICE have ignored EFF’s public-records requests for documents about the processes behind these subpoenas, so EFF sued Wednesday in the U.S. District Court for the District of Columbia.

“DHS and ICE should not be able to first claim that they have the legal authority to unmask critics and then run from court when users challenge these administrative subpoenas,” said EFF Deputy Legal Director Aaron Mackey. “The public deserves to know what laws the agencies believe give them the power to issue these speech-chilling subpoenas.”

Administrative subpoenas cannot be used to obtain the content of communications, but they have been used to try to obtain basic subscriber information like name, address, IP address, length of service, and session times. If a technology company refuses to comply, an agency's only recourse is to drop the demand or go to court and try to convince a judge that the request is lawful.

EFF and the ACLU of Northern California in February wrote to Amazon, Apple, Discord, Google, Meta, Microsoft, Reddit, Snap, TikTok, and X to ask that they insist on court intervention and an order before complying with a DHS subpoena; give users as much notice as possible when they are the target of a subpoena, so the users can seek help; and resist gag orders that would prevent the companies from notifying users who are targets of subpoenas.

And EFF last week asked California's and New York's attorneys general to investigate Google for deceptive trade practices after it broke its promise to notify users before handing their data to law enforcement, citing the case of a doctoral student who was targeted with an ICE subpoena after briefly attending a pro-Palestine protest.

EFF in early March filed public-records requests with DHS and ICE for their policies, procedures, guidelines, directives, memos, and legal analyses supporting such use of administrative subpoenas. EFF also requested all Inspector General or oversight records, all approval and issuance procedures for the subpoenas, all records reflecting how many such subpoenas have been issued, all communications with technology companies concerning these demands, all communications regarding specific named targets or programs, and all communications with the Department of Justice regarding such subpoenas.

DHS and ICE have not responded, even though EFF requested expedited processing of its requests, which requires agencies to get back to requesters within 10 days.

“The policies, directives, and authorization records governing the program have not been disclosed,” the complaint notes. “The legal basis asserted by DHS and ICE for using a customs statute to compel disclosure of information about persons engaged in constitutionally protected speech and association has not been made public.”

For the complaint: https://www.eff.org/document/eff-v-dhs-ice-administrative-subpoenas-complaint

For EFF’s letter urging tech companies to protect users: https://www.eff.org/deeplinks/2026/02/open-letter-tech-companies-protect-your-users-lawless-dhs-subpoenas

For EFF’s letter urging state probes of Google: https://www.eff.org/press/releases/eff-state-ags-investigate-googles-broken-promise-users-targeted-government

Tags: free speech, privacy, anonymity, DHS, ICE
Contact: Aaron Mackey, Deputy Legal Director/Free Speech and Transparency Litigation Director, amackey@eff.org
Hudson Hongo

Copyright and DMCA Best Practices for Fediverse Operators

3 days 20 hours ago

People building the future of the social web — interoperable and decentralized — need to protect themselves against copyright liability. Like anyone who creates and operates platforms for user-uploaded content, the hosts of the decentralized social web can take preventive measures to reduce their legal exposure when a user posts material that violates someone’s copyright.

This post gives an overview of the steps to take. It’s meant for operators of Mastodon and other ActivityPub servers, Bluesky hosts, RSS mirrors, and hosts using other decentralized social media protocols, as well as developers of apps for those protocols — but it will apply to other hosts as well. This isn’t legal advice, and can’t substitute for a consultation with a lawyer about your specific circumstances. It focuses on U.S. law — the law may impose different requirements elsewhere. Still, we hope it helps you get started with confidence.

Why should I care? Copyright’s Sword of Damocles

In some circumstances, the operator of a platform that handles user content can be legally responsible for content that infringes copyright. That can happen when the platform operator is directly involved in copying or distributing the copyrighted material, when they promote or knowingly assist the infringement, or when they benefit financially from infringement while being in a position to supervise it. But these judge-made rules are often difficult and uncertain to apply in practice — and the penalties for being found on the wrong side of the law can be severe. Copyright’s “statutory damages” regime allows for massive, unpredictable financial liability. That’s why it’s important to limit your risk.

For Server Operators: Limiting Risk with the DMCA Safe Harbors

If you run a social network server, the safe harbor provisions of the Digital Millennium Copyright Act (DMCA) are an important way to limit your liability risk. The DMCA shields server operators from nearly all forms of copyright liability that can result from “storage at the direction of a user” — in other words, hosting user-uploaded content. But to qualify for this protection, there are steps a server operator has to take.

1. Designate A Contact To Receive Copyright Infringement Notices

First, you’ll need to provide contact information for someone who can receive infringement notices (a “designated agent”). That information needs to be posted in at least two places: on your server in a place visible to users (such as a “DMCA” page or post, or as part of your Terms of Service), and in the U.S. Copyright Office’s “Designated Agent Directory.” To post that information to the directory, you have to create an account at https://www.copyright.gov/dmca-directory/ and pay a small fee. The directory listings expire after three years, and once expired, your safe harbor protection goes away, so it’s important to keep that listing current.

2. Respond Promptly to Notices and Counter-notices

When you receive infringement notices, it’s important to respond to them promptly. Notices are supposed to identify the copyright holder, the copyrighted work they claim was infringed, and the post they claim is infringing. By deleting or disabling access to the posted material, you protect yourself from liability with respect to that material.

The theory behind Section 512 is that hosts don’t have to be in a position of deciding whether a post infringes someone’s copyright — it’s up to the poster, the rights holder, and potentially a court to decide that. A host who takes down posts whenever they receive an infringement notice is well-protected. But it’s equally important to recognize that hosts aren’t required to take down content in response to every notice. Infringement notices are frequently wrong, misguided, or abusive, or simply incomplete. Hosts who want to stand up for their users’ speech can choose to disregard infringement notices that seem suspect. While this risks losing the automatic protection of the safe harbor in each instance, it can still be done safely with careful preparation, ideally using a plan crafted with help from a lawyer. Bear in mind that people sending false notices, including by failing to consider whether a post is a fair use before asking a host to take it down, can be liable for damages under the DMCA.

The DMCA also allows the person who posted the material to send a “counter-notification” asserting that they really did have the right to post and that there’s no copyright infringement. Responding to counter-notifications is a good way for a host to demonstrate that they look out for their users. When a host receives a counter-notification, they should forward it on to the person who sent the original takedown notice and let them know that the post will be restored in 10 business days. Then, after that waiting period has elapsed, the host can restore the posted material. Just like with infringement notices, a host isn’t required to honor a counter-notification that appears to be fraudulent, but there’s no penalty for honoring it anyway.
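
To make this workflow concrete, here is a minimal sketch in Python of how a small host might track a takedown notice and a counter-notice. The disable_post, forward_to_claimant, and restore_post hooks are hypothetical stand-ins for your platform's own moderation calls, and the weekends-only business-day rule is a simplification; this is an illustration of the steps above, not legal advice.

```python
# Minimal sketch of a notice / counter-notice tracker. The hooks below are
# hypothetical stand-ins for your platform's own moderation calls.
from dataclasses import dataclass
from datetime import date, timedelta

def disable_post(post_id: str) -> None: ...          # hypothetical platform hook
def restore_post(post_id: str) -> None: ...          # hypothetical platform hook
def forward_to_claimant(contact: str, counter_notice_text: str) -> None: ...  # hypothetical

def add_business_days(start: date, days: int) -> date:
    """Count forward `days` business days, skipping weekends only."""
    current = start
    while days > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 ... Friday=4
            days -= 1
    return current

@dataclass
class TakedownCase:
    post_id: str
    claimant_contact: str
    status: str = "notice_received"
    restore_after: date | None = None

    def handle_notice(self) -> None:
        # Step 2a: promptly disable the identified material.
        disable_post(self.post_id)
        self.status = "disabled"

    def handle_counter_notice(self, counter_notice_text: str, today: date) -> None:
        # Step 2b: forward the counter-notice to the claimant and schedule
        # restoration after a 10-business-day waiting period.
        forward_to_claimant(self.claimant_contact, counter_notice_text)
        self.restore_after = add_business_days(today, 10)
        self.status = "counter_noticed"

    def maybe_restore(self, today: date) -> None:
        # Step 2c: once the waiting period has elapsed, restore the material.
        if self.status == "counter_noticed" and self.restore_after and today >= self.restore_after:
            restore_post(self.post_id)
            self.status = "restored"
```

A real deployment would also keep a record of every notice and counter-notice it handles, which is useful both for the repeat infringer policy discussed next and for showing what you did and when.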

3. Have A Repeat Infringer Policy

The next requirement is to have a policy of terminating the accounts of “subscribers and account holders” who are “repeat infringers” in “appropriate circumstances,” and to carry out that policy. Yes, that’s a vague requirement. It doesn’t require a “three strikes” policy or any other sports analogy. It just needs to be reasonable. Be sure your policy is spelled out in your website terms or “DMCA” page.

4. Don’t Ignore Known Infringement

Hosts need to take down user posts whenever the host actually knows that the post is infringing. In other words, a host isn’t protected if they ignore takedown notices based on technicalities in the notices, or if they learn about the infringement some other way. But hosts don’t need to actively look for infringement on their servers — only to act when someone notifies them.

5. Don’t Encourage Infringement

Finally, make sure that nothing you post or advertise actively encourages copyright infringement. For example, don’t post examples of users uploading copyrighted music or video without permission, or insinuate that your server is a good place for infringing content.

There are some other technicalities in the DMCA that can affect the safe harbor, which is why it’s always a good idea to consult with a lawyer. But following these steps will help protect you when you run a social media server — or any other kind of user-uploaded content platform.

Mitch Stoltz

Palantir Has a Human Rights Policy. Its ICE Work Tells a Different Story

4 days 14 hours ago

For years, EFF has pushed technology companies to make real human rights commitments—and to live up to them. In response to growing evidence that Palantir’s tools help power abusive immigration enforcement by ICE, we sent the company a detailed letter asking how the promises in its own human rights framework extend to that work.

This post explains what we asked, how Palantir responded, and why we believe those responses fall short. EFF is not alone in raising alarms about Palantir; immigrants' rights groups, human rights organizations, journalists, and former employees have raised similar concerns based on reports of the company's role in abusive immigration enforcement. We focus here on Palantir’s own human rights promises.

At the outset, we appreciate that Palantir was willing to engage respectfully, and we recognize that confidentiality and security obligations can limit what it can say. Nonetheless, measured against Palantir's own human rights commitments, its decision to keep powering ICE with tools used in dragnet raids and discriminatory detentions is indefensible. A good-faith application of those commitments should lead Palantir to end its contract with ICE, and to refuse new contracts, or end current ones, with any other agency whose work predictably violates those commitments.

Palantir’s Public Promises

Palantir has long said it performs comprehensive human rights analysis on its work. It has also worked with ICE for years, apparently in a more limited capacity than today. It has publicly embraced the UN Guiding Principles on Business and Human Rights, the Universal Declaration of Human Rights, and the OECD Guidelines for Multinational Enterprises. Additionally, in its response to EFF, Palantir says its legal responsibilities are only “the floor” for broader risk assessments.

That was the point of our letter. We asked what human rights due diligence Palantir conducted when it first contracted with ICE and DHS; whether it performed the “proactive risk scoping” it advertises; how it reviews work over time; what it has done in response to reports of misuse; and whether it has used “every means at [its] disposal”—including contract provisions, third-party oversight, and termination—to prevent or mitigate harms.

For the most part, Palantir did not answer our accountability questions. It did correct one point: Palantir says it does not currently work with CBP, and available evidence supports that, though it also made clear it could work with CBP in the future.

Palantir also raised a red herring it often deploys in response to criticism. It denied building a 'mega' or 'master' database for ICE, and denied creating a database of protesters, which some ICE agents have claimed exists. We call these denials a red herring because they sidestep the central issue: what capabilities Palantir's tools actually provide to ICE.

To be clear, EFF has never claimed that Palantir is building a single centralized database. Our concern is grounded in how Palantir’s tools allow ICE to query and analyze data from multiple databases through a unified interface—which from an agent’s perspective can be a distinction without a difference.

In the sections that follow, we compare Palantir’s account of its work for ICE with evidence about how its tools seem to be used, and explain why legality, internal process, and sustained “engagement with the institutions whose vital tasks exist in tension with certain human rights” are no substitute for real human rights due diligence—because respect for human rights must be measured by outcomes, not just process.

Palantir’s ICE Work Undermines Its Own Standards

Palantir says ICE uses its ELITE tool for “prioritized enforcement”: to surface likely addresses of specific people, such as individuals with final orders of removal or high‑severity criminal charges. But according to sworn testimony in Oregon, ICE agents use ELITE to determine where to conduct deportation sweeps, and the system “pulled from all kinds of sources” to identify locations for raids aimed at mass detentions, including information from the Department of Health and Human Services such as Medicaid data. A leaked ELITE user guide for 'Special Operations' also instructs operators to disable filters to "display all targets within a Special Operations dataset." Those details directly conflict with Palantir’s narrow description of ELITE’s role.

Additionally, Palantir's response leans on legal authority and the Privacy Act. But it does not identify any specific lawful basis for using Medicaid data in this way or explain how its software enables that access. Even if a legal theory exists, turning sensitive medical information into fuel for dragnet sweeps is hard to reconcile with its commitments to privacy, equity, and the rights of impacted communities. Its own human rights framework requires grappling with foreseeable harms its products may enable, not just invoking possible legal authorization.   

Reporting shows that many people detained by ICE had no criminal record, much less a serious one, and in many cases no final order of removal. An overwhelming percentage of those detained were, or appeared to be, from Central and South America, and nearly one in five ICE arrests were street arrests of a Latine person with neither a criminal history nor a removal order.

These facts raise obvious questions about discriminatory impact, racial profiling, and whether Palantir's tools are facilitating detention practices far broader than the company claims. Palantir's response does not meaningfully engage those questions, despite the company's commitments to non-discrimination and due process.

EFF’s letter asked Palantir to explain how it is honoring its commitments to civil liberties in light of reports linking Palantir-owned systems to facial recognition and other tools used to identify and target people engaged in observing and recording law enforcement, including in connection with the deaths of Renée Good and Alex Pretti. The letter also cites an incident in which an officer scanned protesters’ and observers’ faces and threatened to add their biometrics to a “nice little database.” Palantir’s response denies involvement in any such database.

A narrow denial about a single database does not answer the broader question: if ICE, its customer, claims it has this capability, what has Palantir done to ensure its tools are not used to chill protected speech, retaliate against observers, or facilitate targeting of people engaged in First Amendment‑protected activity? For a company that claims to value democracy and civil liberties, this is not a marginal issue; it goes to the heart of its human rights commitments.

Legality, Process, and Engagement with ICE Are Not Human Rights Standards

As mentioned above, Palantir leans heavily on legal compliance. It says government data sharing is “subject to, and governed by, data sharing agreements and government oversight” and that any sharing it facilitates is done according to “legal and technical requirements, including those of the Privacy Act of 1974.” It describes its role in ELITE as “data integration,” enabling ICE “to incorporate data sources to which it has access,” including data shared under inter‑agency agreements.

EFF is very familiar with the Privacy Act—we are suing the Office of Personnel Management over it currently. But Palantir’s response does not clarify how ICE legally has access to this information, how Palantir ensures that it follows those legal processes, or how Palantir’s software may have enabled access in the first place. More critically, that is still a legal answer to a human rights question, and legal compliance alone is insufficient as a human rights standard.

Human rights due diligence requires assessing foreseeable harms, responding to credible evidence of abuse, and changing course when the facts demand it—something Palantir, on paper, recognizes. That’s why it stresses that its legal responsibilities are only “the floor for [its] broader risk assessments,” pointing to the way it built toward GDPR‑style data protection principles and incorporated international humanitarian law principles before those requirements were formalized. If those commitments mean anything, Palantir has to explain how specific practices—like enabling ICE to use Medicaid data in dragnet raids—square with that broader standard.

Palantir also leans heavily on process. It points to a “layered approach” to risk, frameworks that purportedly examine multiple dimensions of privacy and equity, and “indelible” audit logs that track how its tools are used. Audit logs are not sufficient for protecting human rights. There is a long history of authoritarian regimes keeping extensive logs of their human rights abuses. Those structures can be useful for protecting human rights, but only if they are used to detect harm, trigger reassessment, and lead to changes in design, access, support, or contract enforcement when credible reports of abuse emerge.

That is why we pressed Palantir to spell out clearly what reports of misuse Palantir has received, what changes it made, and on what timeline. Again, instead of offering specific examples, Palantir points back to its internal framework and its willingness to “move towards the hardest problems” as evidence of effective efforts. But human rights are an outcome, not just a process.

Human rights due diligence is not a one-time approval at contract signing; under the UN Guiding Principles, it is supposed to be continuous, with new facts triggering reassessment. Complaints, media reports, leaks, litigation, and sworn testimony are exactly the kinds of events that should prompt review. If Palantir has an account of that work—how often it reviews ICE contracts, who conducts the reviews, what triggers them, and how findings reach the Board—it had every opportunity to describe it. Instead, it offered a generic assurance that it remains committed to human rights without engaging with the specifics. Confidentiality may sometimes limit disclosure, but it is no substitute for accountability.

What Needs to Happen Next 

Palantir wants credit for “mov[ing] towards the hardest problems” and engaging with institutions whose missions it says are “in tension with certain human rights,” all while pointing to its human rights framework. But when the record includes violent raids, dragnet detentions, use of sensitive medical data, discriminatory targeting, retaliation against observers, and deaths tied to immigration enforcement operations, pointing to a values page is not enough; it has to reckon with the results.

Voluntary corporate human rights policies often function as weak accountability mechanisms: companies can tout principles, publish policies, and answer criticism with polished statements while changing very little on the ground. Palantir’s response fits that pattern all too well. EFF will continue to challenge its role in abusive immigration enforcement and to demand more accountability from technology vendors whose tools enable human rights violations. We are also happy to continue a dialogue with Palantir to that end. For now, this much is clear: Palantir needs to reconsider its contract with ICE and with all agencies whose work predictably violates human rights.

Cindy Cohn

The Internet Still Works: Reddit Empowers Community Moderation

4 days 17 hours ago

Section 230 helps make it possible for online communities to host user speech: from restaurant reviews, to fan fiction, to collaborative encyclopedias. But recent debates about the law often overlook how it works in practice. To mark its 30th anniversary, EFF is interviewing leaders of online platforms about how they handle complaints, moderate content, and protect their users’ ability to speak and share information. 

Reddit is one of the largest user-generated content platforms on the internet, built around thousands of independent communities known as subreddits. Some subreddits cover everyday interests, while others host discussions about specialized or controversial topics. These communities are created and moderated by volunteers, and the site’s decentralized model means that Reddit hosts a vast range of user speech without relying on centralized editorial control. 

Ben Lee is Chief Legal Officer at Reddit, where he oversees the company’s legal strategy and policy work on issues including content moderation and intermediary liability. Before joining Reddit, Lee held senior legal roles at other tech companies including Plaid, Twitter, and Google. At Reddit, he has been closely involved in litigation and policy debates surrounding Section 230, including cases addressing the legal risks faced by platforms and their users and moderators. He was interviewed by Joe Mullin, a policy analyst on EFF's Activism Team.

Joe Mullin: When we talk about user rights and Section 230, what rights are most at stake on a platform like Reddit? 

Ben Lee: Reddit, we often say, is the most human place on the internet. What’s often missing from the debate is that section 230 protects people—not platforms. 

It protects millions of everyday humans and volunteer moderators who participate in online communities. Without it, people could face lawsuits for voting down a post, enforcing community rules, or moderating a discussion. These are foundational activities on Reddit, and frankly, the whole internet.

If you had to describe section 230 to a regular Reddit user without naming the law, what would you say it does for them?

Section 230 protects your ability to participate in community moderation.

Even if all you are doing is up-voting or down-voting content, that’s participation. On Reddit, everyone is a content moderator, through voting. Up-voting determines the visibility of content. 

We believe, strongly, this is one of the only models to allow Reddit to scale. You make the community part of the moderation process. They’re invested in the community, making it better. 

How would user speech be affected if Section 230 were eliminated or weakened? 

We would undermine community self governance—the notion that humans can do content moderation, and take that responsibility for themselves. Whether you’re a small blog or big forum. I like to think of Reddit as composed of this federation of communities that range from the tiny to the humongous. That’s what the internet is! 

The legal risk would discourage people from moderating, or even speaking at all. The kind of speech we’re trying to protect is often critical of powerful people or entities. If a moderation decision leads to litigation from those powerful entities, that’s an expensive proposition to fight. 

Reddit relies on user-run communities and volunteer moderators. Can you walk me through how content moderation and legal complaints actually work in practice, and where section 230 comes into that? 

We have a tiered structure, like our federal system. Each community is like a state: it has its own rules, and enforces them. The vast majority of content moderation decisions are made by the communities, not by Reddit itself. 

Reddit is built on self-governing communities that are moderated by volunteers, supported by automated tools. Section 230 gives Reddit the freedom to experiment, and lets users shape healthy, interest-based spaces.

Section 230 is fundamental to protecting the moderators from a frivolous lawsuit. A screenwriting community might want to protect their community from scammy competitions—and then they get sued by that competition. 

Or a community wants to keep their conversation civil. And, for example, may not allow Star Trek characters to be called “soy boys,” and they enforce that. Then a person sues. 

I wish these were hypotheticals. But they were actual lawsuits. And we have them, routinely. 

What are policymakers missing about Section 230? 

The [moderation] decisions being criticized in court are decisions to try to make the internet safer. In none of the cases that I mentioned is there a moderator saying, “I want to increase harmful content!” These are good-faith decisions about what makes the internet better.

Section 230 is, at its core, protecting the ability for people to make those choices for their own communities. 

There's a price to be paid for not having a Section 230. And it will be paid by internet users—not the biggest platforms.

Some see 230 as a way to punish Big Tech. But removing it doesn't punish Big Tech—it makes them more powerful. It's startups, community-driven platforms, and individual moderators who rely on Section 230 to compete and innovate. Weakening Section 230 will harm the open internet and reduce its choice, diversity, and resilience.

The big guys, they have armies of lawyers. They have the budget to withstand a flood of lawsuits. Weakening Section 230 just entrenches them. 

In Reddit’s amicus brief in the Gonzalez v. Google Supreme Court case, you point out that without Section 230, many moderation decisions wouldn’t be protected. The brief states: “A plaintiff might claim emotional distress from a truthful but hurtful post that gained prominence when a moderator highlighted it as a trending topic. Or, a plaintiff might claim interference with economic relations arising from an honest but very critical two-star restaurant review.” 

When you have situations where moderators get threats or litigation, what can you do? 

We have had cases where our own moderators got sued, along with us. In the “soy boy” case, we worked to help find pro bono counsel for the moderators. 

Someone posted “Wesley Crusher is a soy boy,” and it got removed. I'm enough of a Star Trek fan that I understand both the reference, and why the moderator decided—“hey, it's gone. I don't want this here.”

This would not violate our Reddit rules. But the community took it down under its own rules about being civil. It was just not a kind-hearted action, and the community had a right to decide. 

But the moderator got sued. We got sued, actually, because the poster disagreed with that moderation choice. Section 230 is what allowed us to win that case. 

These are just average people, implicated only because they moderated their own community. They are trying to do the right thing by their community. 

In cases where litigation happens, when does Section 230 come into play? 

Section 230 is usually one of the first things that's talked about in the case. It’s usually the most effective way of saying: if you believe someone has defamed you—please go to the person who defamed you. If you’re looking to the moderator, or to Reddit itself, this is not a great way of getting the justice that you seek.

Is there a different workflow internationally? 

There’s a very different workflow. We had a prominent case in France where a company was trying to sue moderators, and of course, we didn't have section 230 to protect them. So we had to do all sorts of other things to protect them. It got much more complicated. 

The breadth of content that's considered illegal in certain jurisdictions can be somewhat breathtaking. 

Our goal is always to preserve as much freedom of expression as possible for our community. In the U.S., we look at it through the lens of the First Amendment, and other aspects. Outside the U.S., we rely more on the lens of international human rights. 

How would you characterize legal demands around user content, the ones you see most often? 

They tend to be: somebody said something mean about me—take this down. Or someone says: you didn’t allow me to say something mean about someone or some entity. It completely runs the spectrum. 

One law that has already passed that weakens Section 230 is SESTA/FOSTA. From Reddit’s perspective, what changed after that? 

There's some communities we had to shut down, in particular, support communities. There was a cost. Every time Section 230 is narrowed, there’s a cost—some types of speech and communities have a harder time staying online. 

The cost may not seem high to some people, because those communities are not for them. But if they visited them, they’d see that these are actual people, interacting in a positive way. If it wasn’t positive, we have rules for that—but that’s a different question. 

Joe Mullin

Keep Pushing: We Get 10 More Days to Reform Section 702

1 week ago

In a dramatic middle-of-the-night standoff, a bipartisan set of lawmakers pushing for true reform and privacy protections for Americans bought us some more time to fight! They are holding out for, at a minimum, the requirement of an actual probable cause warrant for FBI access to information collected under the mass spying program known as Section 702.


A reauthorization with virtually no changes was defeated because a core group of lawmakers held strong; they know that people are hungry for real reform that protects the privacy of our communications. We now have a 10-day extension to continue to push Congress to pass a real reform bill. 


Lawmakers rallied late Thursday night to reject a proposed amendment that made gestures at privacy protections but would not have improved on the status quo, and would have reauthorized Section 702 for five more years to boot.

Take action

Tell Congress: 702 Needs Reform

Section 702 is rife with problems, loopholes, and compliance issues that need fixing. The National Security Agency collects full conversations being conducted by and with targets overseas – including by and with Americans in the U.S. –  and stores them in massive databases. The NSA then allows other agencies, including the Federal Bureau of Investigation, to access untold amounts of that information. In turn, the FBI takes a “finders keepers” approach to this data: they reason that since it's already collected under one law, it’s OK for them to see it. 

Under current practice, the FBI can query and even read the U.S. side of those communications without a warrant. What’s more, victims of this surveillance won’t even know, and have very few ways of finding out, that their communications have been collected. EFF and other civil liberties advocates have been trying for years to learn when data collected through Section 702 is used as evidence against people.

Reforming Section 702 is even more urgent because of revelations hinted at by Senator Ron Wyden’s public statements concerning a “secret interpretation” of the law that enables surveillance of Americans, and a public  “Dear Colleague” letter he sent to fellow Senators about FBI abuse of Section 702. 

That’s right—the way the government conducts mass surveillance is so secret and unaccountable even the way they interpret the law is classified. 

 “In many cases these will be law-abiding Americans having perfectly legitimate, often sensitive, conversations,” Wyden wrote. “These Americans could include journalists, foreign aid workers, people with family members overseas - even women trying to get abortion medication from an overseas provider. Congress has an obligation to protect our country from foreign threats and protect the rights of these and other Americans.” 

We have 10 days to make it clear to Congress: 702 needs real reforms. Not a blanket  reauthorization. Not lip service to change. Real reform.

Take action

Tell Congress: 702 Needs Reform

Matthew Guariglia

Stop New York's Attack on 3D Printing

1 week 1 day ago

New York's proposed 2026-2027 budget currently includes provisions that would require all 3D printers sold in the state to run print-blocking censorware—software that surveils every print for forbidden designs. This policy would also create felony charges for possessing or sharing certain design files. The vote on the state budget could happen as early as next week, so New Yorkers need to act fast and demand that their Assemblymembers and Senators strip this provision from the budget.

Take action

Tell Your Representative to Stand with Creators

State legislators across the US are rushing to regulate 3D-printed firearms under the syllogism “something must be done; there, I've done something.” The most reckless of these proposals is a mandate for manufacturers to implement print blocking on all 3D printers. We, and other experts, have already pointed out that this algorithmic print blocking is simply unfeasible and will only serve to stifle competition, free expression, and privacy. While the harm falls most heavily on the creative communities lawfully using these printers, every New Yorker will be impacted by this blow to innovation.

This policy is unfortunately buried in Part C of New York State’s proposed budget for the 2026-2027 fiscal year (S.9005 / A.10005), which is urgently moving toward a vote after facing extensive delays. It’s also bundled with a policy that would allow felony charges to be brought against researchers and journalists for sharing design files restricted by the state. The worst of these impacts won’t be known until after the budget is negotiated behind closed doors, with no safeguards for creative expression or privacy.

Researchers and Journalists Could Face Felony Charges

Part C Subpart A of the budget includes two particularly concerning provisions: §2.10 and 2.11. These threaten Class E felony charges for distributing or possessing 3D-printer files that would produce firearm parts with a 3D printer or CNC machine. 

The first provision, 2.10, makes it a felony to sell or distribute files that can produce major firearm components to someone who is not a federally and NY-licensed gunsmith. Under 2.11, it’s also a felony to possess these files if you intend to illegally print a firearm or share them with someone you believe is not permitted to own or smith a firearm.

A journalist reporting on 3D-printed guns. A researcher studying printable firearms. An artist incorporating parts into a new work commenting on gun culture. Under these provisions merely sharing a print file with any of them could result in criminal charges, even if no one involved intends to assemble a firearm.

Criminalizing information doesn’t work. Someone intent on illegally printing a firearm is already subject to charges for that act. Adding felony liability for simply possessing a file or design piles on additional charges while doing nothing to stop printing. New charges for someone distributing these files won’t make them inaccessible to lawbreakers, but they will have a chilling effect on legitimate and entirely legal work. 

Unsurprisingly, a similar law was proposed and subsequently scrapped in Colorado due to First Amendment concerns. We recommend New York do the same.

Take action

Tell Your Representative to Stand with Creators

Mandated Surveillance, Less Access

Part C Subpart B would require every 3D printer and CNC machine sold in New York to include algorithms that scan your design files and block prints the system identifies as producing firearm components. Furthermore, all sales and deliveries of these machines must be made face-to-face. 

Unlike other bills we have seen, there are no exceptions to this mandate. These restrictions apply to sales to researchers, commercial manufacturers, and—oddly enough—federally and state-licensed gunsmiths.

Applying these restrictions to CNC machine sellers is particularly absurd. These cousins of 3D printers, which make 3D objects by removing material, often cost tens of thousands of dollars and are used by commercial manufacturers. Automotive, aerospace, medical manufacturers, and many other industries will be subject to the in-person sales requirement, the surveillance risk, and all the other problems these print-blocking algorithms introduce.

Even limiting the focus to individual buyers—hobbyists and artists who use these machines at home—this restriction to face-to-face sales comes with its own issues. Beyond unnecessarily complicating the use of printers in the state, this barrier to access will hit rural New Yorkers the hardest. People in rural or remote locations can stand to benefit from the saved time and costs of printing useful parts at home. With this restriction, they will need to drive to one of the few retailers who actually sell this equipment and settle for the models they stock. 

That is, if sellers continue to stock these printers despite the risk. Subpart B §§ 2.3 and 2.5 open sellers up to liability, including anyone on the second-hand market, for selling out-of-date printers. Meanwhile, buyers hoping to illegally print firearms can simply build their own printer with widely available equipment.

The Law Won’t Work as Advertised 

Here’s what makes Subpart B of the New York budget particularly reckless: the technology it mandates is not capable of doing what it is supposed to. 

There is very little detail provided about requirements for the mandated algorithms. What the bill does outline boils down to this: the algorithms must evaluate print files to determine whether they would produce a firearm or illegal firearm parts, and if so, block the print. In an attempt to enable this, New York state would also create and maintain a library of forbidden files with tightly restricted access. 

We’ve already gone over why this idea simply won’t work. Design files are trivially easy to modify, split into segments, or otherwise alter to evade pattern detection. Even if printers fully rendered and analyzed the print with cloud-based AI, any number of design or post-print tricks can be used to dodge detection. Meanwhile, such fuzzy AI interpretation will rapidly increase the percentage of lawful prints censored. 

Firearms aren’t a highly specific design like paper currency; these proposed algorithms are futilely attempting to block an infinite number of designs capable of—or that can be made capable of—the few simple mechanical functions that make up a firearm. 

As we’ve said before: the internet always routes around censorship. Anyone determined to print a prohibited object has straightforward workarounds. The people who get surveilled and blocked are the people trying to follow the law.

The bill aims to enforce this impossible mandate by creating a working group to define the actual technical requirements of enforcement—but only after the law passes. This group has no peer review requirements, so it could easily be loaded with profiteers or incumbent manufacturers who are already lining up to participate. These incumbents stand to profit from shutting out new competitors and locking in users to their devices, and sellers into their platform, subjecting both to the type of enshittification seen with Digital Rights Management (DRM) software. There are also no safeguards in the law to prevent the most surveillance-heavy approaches to print scanning, or to stop this censorship infrastructure from being further weaponized against lawful speech.

On the other hand, unbiased experts in open-source manufacturing in the working group can at best pause the clock by showing such algorithms are unfeasible. That is, until a new snake oil company comes along to restart it. 

New York Won't Be the Last Stop 

New York is one of the largest consumer markets in the country. When it mandates a feature in hardware, manufacturers hardly ever build a New York-only version. They build the New York version and sell it globally. A print-blocking mandate adopted in New York will become the national standard in practice.

New Yorkers deserve more than this rush job buried in a budget bill. This is an unfeasible tech solution, built without the consumer protections that would be required of any serious policy proposal, and creates new costs and inconveniences amidst a protracted annual budget process. It also threatens First Amendment protections. This policy will take shape without consumer guardrails, behind closed doors, and risks the worst outcomes for grassroots innovation and creativity enabled by these machines. Worse still, these practices can become the norm across other states and among 3D-printer manufacturers worldwide. 

Your representatives could vote on this ill-conceived measure in the next week.  If you're a New Yorker, email your legislators now, and tell them to strip this measure from the budget today. 

Take action

Tell Your Representative to Stand with Creators

Rory Mir

How Push Notifications Can Betray Your Privacy (and What to Do About It)

1 week 1 day ago

Update April 22, 2026. Apple has reportedly addressed part of the issue with the notification database in iOS 26.4.2 and 18.7.8, released today. With this update, notifications marked for deletion should no longer be stored in the notification database.

A phone’s push notifications can contain a significant amount of information about you, your communications, and what you do throughout the day. They’re important enough to government investigations that Apple and Google now both require a judge’s order to hand details about push notifications over to law enforcement, and even with that requirement Apple shares data on hundreds of users. More recently, we also learned from a 404 Media report that law enforcement forensic extraction tools can unearth the text from deleted notifications, including those from secure messaging tools, like Signal. The good news is that you can mitigate some of this risk. 

There are two points where notifications may betray your privacy: when they’re transmitted over cloud servers and once they land on the device. Let’s start with the cloud. It might seem like push notifications come directly from an app, but they are typically routed through either Apple’s or Google’s servers first (depending on whether you use iOS or Android). According to a letter sent to the Department of Justice by Senator Wyden, the content of those notifications may be visible to Apple and Google, and at the very least the companies collect some metadata about which apps send a notification and when. App providers have to make the decision to hide the content from Apple and Google and implement that functionality; Signal is one app that does this.

Then, once the notifications land on your phone, depending on your settings, the notification content may be visible on your lock screen without needing to unlock the device. This can be dangerous if you lose your device, someone steals it, or it’s confiscated by law enforcement. 

You may clear notifications after looking at them. But it turns out the content of notifications gets recorded in your device’s internal storage, which then makes it susceptible to recovery with certain types of forensic tools. Notification content may even persist after the app is deleted, if the OS doesn’t fully purge the app’s notification data.

We still have a lot of unanswered questions about how the notification databases work on devices. We do not know how long notifications are stored, or whether they’re backed up to the cloud, in which case the cloud provider could get backdoor access to the content of messages if the backups are enabled and not end-to-end encrypted. This may also make backups vulnerable to law enforcement demands for data. 

Which is all to say that there are myriad ways that law enforcement can access the content or metadata of push notifications. Let’s fix that.

Consider the Strongest Notification Protections for Your Secure Messaging Apps

Secure chat tools are designed to keep the content of the messages safe inside the app. So, for secure chat apps like WhatsApp and Signal, that means the company that makes those apps cannot see the content of your messages, and they’re only accessible on your and your recipients’ devices. Once messages land on a device, it’s still important to consider some privacy precautions, particularly with notifications. 

Signal
Signal offers three levels of information to include in notifications, all of which are pretty self-explanatory:

  • Name, Content, and Actions (Name and message on Android) shows the entirety of a message as well as who sent it (on iPhone you can also slide to reply, mark as read, or call back). 
  • Name only will only show the name of the sender. 
  • No Name or Content (No name or message on Android) will only show that you have a message from Signal, not who sent it or what it’s about. 

To change your settings:

  • On iPhone: Tap your profile picture, then Settings > Notifications > Show.
  • On Android: Tap your profile picture, then Notifications > Show

WhatsApp
WhatsApp only has one option for this, and it’s currently limited to iPhone, but you can at least tell the app not to include the content of a message in the notification:

  • Open WhatsApp for iPhone, tap the “You” bar, then Notifications, and disable the Show preview option.

Check your other apps to see if they offer similar settings.

Limit Your Notifications Device-Wide

Since Apple and Google manage push notifications for their respective devices, they also have some visibility into certain data. Push notification data can include certain types of metadata, like which app sent a notification and when, as well as the account ID associated with the phone. In some cases, Apple and Google may have access to unencrypted content, including the content of the text in a notification or other information from the app itself. 

For most app notifications, there’s no simple way to easily figure out what metadata might be gleaned from a notification, or if the notification is unencrypted or not. But some app developers have described details along these lines. For example, Signal president Meredith Whittaker explained on social media how the Signal app handles notifications entirely on-device. Searching online for an app name along with “notification privacy,” “notification encryption” or “notification metadata” may help answer your questions, or you may need to dig around in support forums for the app.

It’s also good to reconsider whether any app should be sending you notifications to begin with. Aside from a potential decrease in the number of distractions you endure throughout the day, or the level of chaos on display on your lockscreen, limiting the apps that can send notifications and what content is visible in them can improve your privacy with respect to the sorts of metadata that may be gathered by the companies, as well as any content that may be viewable if someone has physically accessed your device.

To check and change your settings on iPhone

  • Open Settings > Notifications.
  • On the Show Previews option, you can choose whether the content of notifications appears on the lock screen: “Always” (doesn’t require unlocking the device), “When Unlocked” (requires unlocking), or “Never” (notifications won’t show any details, just that you have a notification from an app). 
  • Alternatively, you can scroll down and change these settings per app. Just tap the app name, then the Show Previews menu, and choose how you’d like them to appear. Or, if you’ve decided you don’t want notifications from that app at all, uncheck the Allow Notifications option.

To check and change your settings on Android
The core version of Android relies on app developers to provide specific notification settings rather than controlling them at a platform-wide level.

  • Open Settings > Notifications > App notifications to disable notifications from any app completely. Some apps may also offer internal notification options for specific types of notices, like new messages, that you can control in the app itself. Tap an app name, then tap the Additional settings in the app option to customize it further.
  • You can also experiment with the sensitive content setting. This is up to the developer to set properly, but when it is, most notifications will require at least unlocking the device to see them. Open Settings > Notifications > Notifications on lock screen and disable “Show sensitive content.”

Control What Notifications AI Tools Can Access

In an attempt to make notifications easier to skim, both Android and iOS offer optional AI tools that summarize the content of notifications. On an individual app level, WhatsApp offers this as well. Some of these summarization tools, like Apple’s, run on the device, while others, like WhatsApp’s, do not. This can all be a lot to keep track of, and sending data off-device may create some level of risk for some messages.

Since this is a bit more complicated, we have another blog post that walks through the steps to take to protect messaging from accidentally ending up in AI tools built into Apple and Google's devices. For WhatsApp specifically, we have a blog detailing when you might want to turn on the app’s “Advanced Chat Privacy” feature, which can disable summaries for both yourself and others in the chat.

Balancing security, privacy, and usability with something like push notifications is a complicated task. At the very least, Apple and Google should better ensure that the content of these notifications isn’t transmitted over their servers in plain text. The companies need to also make sure that device operating systems don’t back up the notification database to the cloud, and when an app is deleted, that all notification data is purged.

We appreciate that apps like Signal allow you to control what’s visible in notifications on a per-app basis, and we’d like to see this granularity of choice in other secure messaging tools, like WhatsApp. Likewise, more apps should handle push notifications the way Signal does: a ping is sent to wake up the app to check for messages, and the content of those messages is never sent through the push notification servers.
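
To make that architecture concrete, here is a minimal sketch in Python contrasting a content-bearing push with a content-free "wake up" push, for a hypothetical messaging app. The payload fields and helper functions are assumptions for illustration, not any real push notification API.

```python
# A minimal sketch contrasting two push strategies for a hypothetical messaging
# app. Payload fields and helper functions are illustrative, not any real push
# API; the point is what the push carrier (Apple or Google) can see in transit.

# Strategy A: content in the push payload -- the push carrier relays the text.
def build_content_push(sender: str, message_text: str) -> dict:
    return {"title": sender, "body": message_text}  # visible to the carrier

# Strategy B: content-free "wake up" push -- the carrier only learns that some
# message arrived; the app fetches and decrypts it over its own channel.
def build_wakeup_push() -> dict:
    return {"wake": True}  # no sender, no text

def on_push_received(payload: dict) -> None:
    """Hypothetical client-side handler running on the device."""
    if payload.get("wake"):
        # Fetch ciphertext over the app's own (end-to-end encrypted) channel,
        # decrypt locally, and build the notification on-device.
        envelope = fetch_pending_messages()
        text = decrypt_on_device(envelope)
        show_local_notification(text)
    else:
        # Content-bearing push: the text already transited the push servers.
        show_local_notification(payload["body"])

# Hypothetical helpers standing in for the app's own networking, crypto, and UI.
def fetch_pending_messages() -> bytes: ...
def decrypt_on_device(envelope: bytes) -> str: ...
def show_local_notification(text: str) -> None: ...
```

The tradeoff is that a wake-up push requires the app to fetch messages itself in the background, but the push carrier only ever learns that something arrived, not who sent it or what it said.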

Thorin Klosowski

EFF Calls on Kuwait to Release Journalist Ahmed Shihab-Eldin

1 week 2 days ago

EFF calls on the Kuwaiti government to immediately release journalist Ahmed Shihab-Eldin. An award-winning journalist and television host who worked for Al Jazeera for many years, Shihab-Eldin—a dual American-Kuwaiti citizen—was arrested in Kuwait on March 3 while visiting family. The Committee to Protect Journalists (CPJ) reported yesterday that it is believed he has been charged with spreading false information, harming national security, and misusing his mobile phone.

According to the Guardian, Shihab-Eldin published footage of a U.S. Air Force F-15E Strike Eagle crash, and posted to his Substack about the incident, noting that video circulating online showed local residents assisting the crash survivors.

Kuwait is one of several countries that have recently cracked down on reporting amidst the ongoing war. Kuwait’s Ministry of Interior posted on X on March 3—the same day Shihab-Eldin was arrested—warning people in the country “not to photograph or publish any clips or information related to missiles or relevant locations.” Earlier this month, the UN Office of the High Commissioner for Human Rights (OHCHR) highlighted a new decree in Kuwait banning the circulation of reports that seek to “undermine the prestige of the military” or erode public trust in it.

As reported by local media, the decree states that “those who intentionally publish statements or news or circulate false reports and rumors about military authorities resulting in weakening the trust in them and their morale, in addition to undermining their prestige, are punishable by three to 10 years in jail and a fine between KD 5,000 and 10,000.” The decree also imposes a penalty ranging from seven years to life imprisonment for “authorized people who cause financial loss or damage to the military authorities while carrying out a transaction, operation, project or case or obtaining any profit from such deals.”

In contrast to neighboring Gulf states, Kuwait has historically allowed the press to operate with relative freedom, and even introduced a law in 2020 protecting the right to access information. In practice, however, the government exercises considerable control over the media. Furthermore, there are several laws, including cybercrime legislation introduced in 2016, that restrict freedom of expression.

EFF is deeply concerned that Ahmed has been neither seen nor heard from in nearly six weeks. We call on the government of Kuwait to immediately release Ahmed Shihab-Eldin.

Jillian C. York

Digital Hopes, Real Power: The Rise of Network Shutdowns

1 week 3 days ago

This is the fourth installment of a blog series reflecting on the global digital legacy of the 2011 Arab uprisings. You can read the rest of the series here.

Iran’s internet has been intermittently disrupted for months. After years of bombardment, Gaza’s telecommunications infrastructure remains fragile. In India, recurring shutdowns and throttling have become a routine response to protests and unrest, cutting millions off from news, work, and basic services. Across dozens of other countries, governments increasingly treat connectivity itself as something that can be weaponized—cut, slowed, or selectively restored to shape what people can see, say, and share. In 2024 alone, authorities imposed 304 internet shutdowns across 54 countries—at the time, the highest number ever recorded.

In 2011, when protesters in Tunisia, Egypt, and beyond used social media to broadcast their uprisings to the world, many observers heralded a new era of networked freedom. Governments, however, responded quickly by developing and refining systems of control that have only grown more sophisticated over time. Today’s landscape of regulation, blackouts, and degraded networks reflects that trajectory: early experiments in censorship and disruption have hardened, and what began as an emergency measure has become a normalized infrastructure of control.

A Brief History of Internet Shutdowns

Egypt’s 2011 internet shutdown wasn’t the first. Although the government’s heavy-handed response after just two days of protests caught the world’s attention, Guinea, Nepal, Myanmar, and a handful of other countries had previously enacted shutdowns. But Egypt marked a turning point. In the years that followed, shutdowns increased sharply worldwide, suggesting that governments had taken note—adopting network disruptions as a tactic for suppressing dissent and limiting the flow of information within and beyond their borders.

On January 28, 2011, at 12:34 a.m. local time, five of Egypt’s internet service providers (ISPs) shut down their networks. At least one provider—Noor, which also hosted the Egyptian stock exchange—remained online, leaving only about 7% of the country connected. 

In the aftermath of President Hosni Mubarak’s resignation, rights groups sought to understand how such a sweeping shutdown had been possible—and how future incidents might be prevented. There was no centralized “kill switch.” Instead, authorities leveraged the country’s highly consolidated telecommunications sector, in which every provider operates under government license. With only a handful of ISPs, a small number of directives was enough to take most of the network offline.

In the years following Egypt’s 2011 shutdown, telecommunications companies—many of which had been directly implicated in enabling state-ordered disruptions—began to organize around a shared set of human rights challenges. Beginning that same year, a group of operators and vendors quietly convened to examine how the UN Guiding Principles on Business and Human Rights applied to their sector, particularly in contexts where government demands could translate into sweeping restrictions on access. By 2013, this effort had formalized into the Telecommunications Industry Dialogue, bringing together major global firms to develop common principles on freedom of expression and privacy and, through a partnership with the Global Network Initiative, engage more directly with civil society. The initiative reflected a growing recognition that telecom companies—unlike platforms—operate at a critical chokepoint in the network. But it also underscored the limits of voluntary approaches: while the Dialogue helped establish shared norms, it did little to constrain the legal and political pressures that continue to drive shutdowns—or to prevent companies from complying with them.

From Emergency Measure to Legal Authority

If the early years of internet shutdowns were defined by improvisation, the years since have seen governments formalize their power to control networks. What was once exceptional is now often embedded in law.

In India, the 2017 Temporary Suspension of Telecom Services Rules—issued under the Telegraph Act—provided a clear legal pathway for cutting connectivity. The Telecommunications Act, 2023, further entrenched the government’s ability to enact shutdowns, granting the central and state governments, or “authorised officers,” the power to suspend telecommunications services in the interest of public safety or sovereignty, or during emergencies. The government has used these measures repeatedly, particularly in Jammu and Kashmir. India’s Software Freedom Law Centre’s Shutdown Tracker shows that India has instigated more than 900 shutdowns, 447 of them in Jammu and Kashmir.

In Kazakhstan, shutdowns have also become common. Over the years, the government has passed legislation that allows state agencies to shut down the internet. The 2012 law on national security enabled the government to disrupt communications channels during anti-terrorist operations and to contain riots. In 2014 and 2016, laws were further amended to expand the number of actors able to shut down the internet without a court decision, and a government decree in 2018 enabled shutdowns in the event of a “social emergency.” 

Elsewhere, governments have built or expanded legal and technical frameworks that enable similar control over information flows. Ethiopia’s state-dominated telecom sector has facilitated sweeping shutdowns during periods of conflict, including the war in Tigray, where the internet was disconnected for more than two years. In Iran, authorities have developed regulatory and infrastructural capacity to isolate domestic networks from the global internet, allowing them to restrict external visibility while maintaining limited internal connectivity. Iranians have already spent a third of this year offline. And amidst the ongoing war, Iranian officials have made it clear that the internet is a privilege for those who toe the government’s official line.

Even where laws do not explicitly authorize shutdowns, broadly worded provisions around national security or public order are routinely used to justify them. The result is a growing legal architecture that treats network disruptions not as extraordinary measures, but as standard tools for managing populations.

When that authority is exercised over a population beyond a state’s own citizens, the consequences can be even more severe. Israel’s Ministry of Communications controls the flow of communications in and out of Palestine and has used that power to shut down internet access during periods of conflict. Over the past two and a half years, Gaza has experienced repeated outages, and experts now estimate that roughly 75% of its telecommunications infrastructure has been damaged—leaving essential services severely disrupted.

Elections and the Expansion of Control

Historically, most blackouts have occurred during moments of intense political tension. But authorities are increasingly using them as a tool to preempt dissent.

In 2024, as more than half the world’s population headed to the polls, shutdowns followed. That year alone, authorities imposed 304 internet shutdowns across 54 countries—then the highest number ever recorded, surpassing the previous record set just a year earlier. The geographic spread also widened significantly, with shutdowns affecting more countries than ever before. The Comoros imposed a shutdown for the first time, while other countries, such as Mauritius, instituted broad bans on social media platforms during elections.

At least 24 countries holding elections in 2024 had a prior history of shutdowns, putting billions of people at risk of disruptions during critical democratic moments.

What stands out is not just the scale, but the normalization. Notably, the number of shutdowns in 2025 broke the record set the year prior. Whereas network disruptions were once a rare occurrence, they are now a routine measure, increasingly treated by authorities as a standard response to periods of heightened political sensitivity. 

Civil Society Fights Back

Governments use all sorts of justifications—national security, curbing the spread of disinformation, and even preventing students from cheating on exams—for internet shutdowns. But civil society is watching, and documenting, network disruptions and their impact on citizens.

In 2016, as shutdowns became an increasingly common tool of state control, Access Now launched the #KeepItOn campaign to coordinate global advocacy against network disruptions. The campaign is backed by a coalition of 345 organizations, including advocacy groups (EFF among them), research centers, detection networks, and others, that work together to report on, and fight back against, internet shutdowns. Anyone can get involved by signing on to campaign action alerts, sharing their story, or reporting a shutdown in their jurisdiction.

Ending this harmful practice remains the goal. In 2016, the UN passed a landmark resolution supporting human rights online and condemning internet shutdowns, and UN agencies have continued to warn against the practice. But the fight to change government practices remains an uphill battle, leading civil society—and even companies—to get creative. 

During repeated shutdowns in Gaza, grassroots efforts mobilized to distribute eSIMs so Palestinians could stay connected. In 2024, EFF recognized Connecting Humanity, a Cairo-based non-profit providing eSIM access in Gaza, with its annual award for its vital work. Satellite internet service such as Starlink has been supplied to people in Ukraine and Iran, though it, too, is not immune to state control. Alongside these efforts, civil society continues to share practical guidance on circumventing shutdowns and maintaining access to information.

EFF’s mission is to ensure that technology supports freedom, justice, and innovation for all people of the world—and we’ll continue to fight back against internet shutdowns wherever they occur.

This is the fourth installment of a blog series reflecting on the global digital legacy of the 2011 Arab uprisings. Read the rest of the series here.

Jillian C. York

Google Broke Its Promise to Me. Now ICE Has My Data.

1 week 3 days ago

In September 2024, Amandla Thomas-Johnson was a Ph.D. candidate studying in the U.S. on a student visa when he briefly attended a pro-Palestinian protest. In April 2025, Immigration and Customs Enforcement (ICE) sent Google an administrative subpoena requesting his data. The next month, Google gave Thomas-Johnson's information to ICE without giving him the chance to challenge the subpoena, breaking a nearly decade-long promise to notify users before handing their data to law enforcement. 

Google names a handful of exceptions to this promise (such as if Google receives a gag order from a court) that do not apply to Thomas-Johnson's case. While ICE “requested” that Google not notify Thomas-Johnson, the request was not enforceable or mandated by a court. Today, the Electronic Frontier Foundation sent complaints to the California and New York Attorneys General asking them to investigate Google for deceptive trade practices for breaking that promise. You can read about the complaints here. Below is Thomas-Johnson's account of his ordeal. 

Out of touch but not out of reach 

I thought my ordeal with U.S. immigration authorities was over a year ago, when I left the country, crossing into Canada at Niagara Falls.  

By that point, the Trump administration had effectively turned federal power against international students like me. After I attended a pro-Palestine protest at Cornell University—for all of five minutes—the administration’s rhetoric about cracking down on students protesting what we saw as genocide forced me into hiding for three months. Federal agents came to my home looking for me. A friend was detained at an airport in Tampa and interrogated about my whereabouts. 

I’m currently a Ph.D. student. Before that, I was a reporter. I’m a dual British and Trinidad and Tobago citizen. I have not been accused of any crime.

I believed that once I left U.S. territory, I had also left the reach of its authorities. I was wrong. 

The email

Weeks later, in Geneva, Switzerland, I received what looked like a routine email from Google. It informed me that the company had already handed over my account data to the Department of Homeland Security. 

At first, I wasn’t alarmed. I had seen something similar before. An associate of mine, Momodou Taal, had received advance notice from Google and Facebook that his data had been requested, and law enforcement eventually withdrew the subpoenas before the companies turned over his data.

I assumed I would be given the same opportunity. But the language in my email was different. It was final: “Google has received and responded to legal process from a law enforcement authority compelling the release of information related to your Google Account.” 

Google had already disclosed my data without telling me. There was no opportunity to contest it. 

Google’s broken promise

To be clear, this should not have happened this way. Google promises that it will notify users before their data is handed over in response to legal processes, including administrative subpoenas. That notice is meant to provide a chance to challenge the request. In my case, that safeguard was bypassed. My data was handed over without warning—at the request of an administration targeting students engaged in protected political speech. 

Months later, my lawyer at the Electronic Frontier Foundation obtained the subpoena itself. On paper, the request focused largely on subscriber information: IP addresses, physical address, other identifiers, and session times and durations. 

But taken together, these fragments form something far more powerful—a detailed surveillance profile. IP logs can be used to approximate location. Physical addresses show where you sleep. Session times would show when you were communicating with friends or family. Even without message content, the picture that emerges is intimate and invasive.  

State power meets private data

What this experience has made clear is that anyone can be targeted by law enforcement. And with their massive stores of data, technology companies can facilitate those arbitrary investigations. Together, they can combine state power, corporate data, and algorithmic inference in ways that are difficult to see—and even harder to challenge. 

The consequences of what happened to me are not abstract. I left the United States. But I do not feel that I have left its reach. Being investigated by the federal government is intimidating. Questions run through your head. Am I now a marked individual? Will I face heightened scrutiny if I continue my reporting? Can I travel safely to see family in the Caribbean? 

Who, exactly, can I hold accountable?

Update: This post has been updated to include more information about Google’s exceptions to its notification policy, none of which applied to the subpoena targeting Thomas-Johnson.

Guest Author

EFF to State AGs: Investigate Google's Broken Promise to Users Targeted by the Government

1 week 3 days ago
Google's Failure to Warn Users About Law Enforcement Demands for Data Is Deceptive

SAN FRANCISCO – The Electronic Frontier Foundation sent complaints today to the attorneys general of California and New York urging them to investigate Google for deceptive trade practices related to the company’s broken promise to give users prior notice before disclosing their information to law enforcement. 

The letters were sent on behalf of Amandla Thomas-Johnson, whose information was disclosed to U.S. Immigration and Customs Enforcement (ICE) without prior notice from Google. 

For nearly a decade, Google has promised billions of users that it will notify them before disclosing their personal data to law enforcement. Many times, the company has done just that. But through a hidden and systematic practice, Google has likely violated that promise numerous times over the years. That was the case for Thomas-Johnson, a Ph.D. candidate who was targeted by ICE after briefly attending a protest; the lack of notice effectively prevented him from contesting an invalid subpoena for his data. 

"Google should answer the question: How many other times has it broken its promise to users?” said EFF Senior Staff Attorney F. Mario Trujillo. "Advance notice is especially important now, when agencies like ICE are unconstitutionally targeting users for First Amendment-protected activity. State attorneys general should investigate Google for this deception." 

On Google’s Privacy & Terms page, it promises its users that “When we receive a request from a government agency, we send an email to the user account before disclosing information.” This promise ensures that users can protect their own privacy and decide to challenge overbroad or illegal demands on their own behalf. The company lists a handful of exceptions to this policy (such as if Google receives a gag order from a court) that do not apply to Thomas-Johnson's case. While ICE “requested” that Google not notify Thomas-Johnson, the request was not enforceable or mandated by a court. 

But on May 8, 2025, Google complied with an administrative subpoena from ICE seeking Thomas-Johnson’s subscriber information, including his name, address, IP address, and other personal identifiers. Later that same day, the company sent Thomas-Johnson a message telling him it had already complied with the subpoena, which he would have successfully challenged had he been given advance notice. Google received the subpoena in April and had more than a month to alert Thomas-Johnson. 

Communication between EFF and Google later revealed that this is a systematic issue, not an isolated one. When Google does not fulfill a subpoena within a government-provided artificial deadline, the company's outside counsel explained, Google will sometimes comply with the request and provide notice to a user on the same day. The company calls this practice “simultaneous notice.” 

"What this experience has made clear is that anyone can be targeted by law enforcement," said Thomas-Johnson. "And with their massive stores of data, technology companies can facilitate those arbitrary investigations. Who, exactly, can I hold accountable?" 

Google must commit to ending this deception and pay for its past mistakes. The attorneys general of California and New York are empowered to stop deceptive business practices and seek financial restitution stemming from those practices. As EFF writes in its complaints, they should investigate, hold Google to its public promise to give users advance notice of law enforcement demands, and take appropriate action if necessary.

Update: This press release has been updated to include more information about Google’s exceptions to its notification policy, none of which applied to the subpoena targeting Thomas-Johnson. 
 
For the complaints:
https://www.eff.org/document/eff-letter-re-google-notice-california 
https://www.eff.org/document/eff-letter-re-google-notice-new-york 
https://www.eff.org/document/eff-letter-re-google-notice-exhibits
 
For Thomas-Johnson's account of his ordeal: https://www.eff.org/deeplinks/2026/04/google-broke-its-promise-me-now-ice-has-my-data 

For more information on lawless DHS subpoenas: https://www.eff.org/deeplinks/2026/02/open-letter-tech-companies-protect-your-users-lawless-dhs-subpoenas 

Contact: press@eff.org 

Tags: privacy, free speech, anonymity, DHS, subpoena, federal law enforcement, Google
Hudson Hongo

The Dangers of California’s Legislation to Censor 3D Printing

1 week 4 days ago

California’s bill, A.B. 2047, will not only mandate censorware — software which exists to bluntly block your speech as a user — on all 3D printers; it will also criminalize the use of open-source alternatives. Repeating the mistakes of Digital Rights Management (DRM) technologies won’t make anyone safer. What it will do is hurt innovation in the state and risk a slew of new consumer harms, ranging from surveillance to platform lock-in. California must stand with creators and reject this legislation before it’s too late.

3D printing might evoke images of props from blockbuster films, rapid prototyping, medical research, or even affordable repair parts. Yet for a growing number of legislators, the perceived threat of “ghost guns” is a reason to impose restrictions on all 3D printers. Despite 3D printing of guns already being rare and banned under existing law, California may outright criminalize any user having control over their own device. 

This bill is a gift for the biggest 3D printer manufacturers looking to adopt HP’s approach to 2D printing: criminalize altering your printer’s code, lock users into your own ecosystem, and let enshittification run its course. Even worse, algorithmic print blocking will never work for its intended purpose, but it will threaten consumer choice, free expression, and privacy.

A misstep here can have serious repercussions across the whole 3D printing industry, pave the way for more bad bills, and leave California with an expensive and ineffective bureaucratic mess.

What’s in the California Proposal?

Compared to the Washington and New York laws proposed this year, California’s is the most troubling. It criminalizes open source, reduces consumer choice, and creates a bureaucratic burden.

Criminalizing Open Source and User Control

A.B. 2047 goes further than any other legislation on algorithmic print-blocking by making it a misdemeanor for the owners of these devices to disable, deactivate, or otherwise circumvent these mandated algorithms. Not only does this effectively criminalize use of any third-party, open-source 3D printer firmware, but it also enables print-blocking algorithms to parallel anti-consumer behaviors seen with DRM.

Manufacturers will be able to lock users into first-party tools, parts, and “consumables” (analogous to how 2D printer ink works). They will also be able to mandate purchases through first-party stores, imposing a heavy platform tax. Additionally, manufacturers could force regular upgrade cycles through planned obsolescence by ceasing updates to a printer’s print-blocking system, thereby taking devices out of compliance and making them illegal for consumers to resell. In short, a wide range of anti-consumer practices can be enforced, potentially resulting in criminal charges.

Independent of these deliberate harms manufacturers may inflict, DRM has shown that criminalizing code leads to more barriers to repair, more consumer waste, and far more cybersecurity risks by criminalizing research.

Less Consumer Choice

The bill favors incumbent manufacturers over newer competitors and over the interests of consumers.

Less-established manufacturers will need to dedicate considerable time and resources to implementing the ineffective solutions discussed above, navigating state approval, and potentially paying licensing fees to third-party developers of sham print-blocking software. While these burdens may be absorbed by the biggest producers of this equipment, it considerably raises the barrier to entry on a technology that can otherwise be individually built from scratch with common equipment. The result is clear: fewer options for consumers and more leverage for the biggest producers. 

Retailers will feel this pinch, but the second-hand market will feel it most acutely. Resale is an important property right that lets people recoup costs, and it serves as a check on inflated prices. But under this bill, such resale risks misdemeanor penalties. 

The bill locks users into a walled garden; it demands manufacturers ensure 3D printers cannot be used with third-party software tools. By creating barriers to the use of popular and need-specific alternatives, this legislation will limit the utility and accessibility of these devices across a broad spectrum of lawful uses.

Bureaucratic Burden 

A.B. 2047’s title 21.1 §3723.633-637 creates a print-blocking bureaucracy, leaning heavily on the California Department of Justice (DOJ). Initially, the DOJ must outline the technical standards for detecting and blocking firearm parts, and later certify print-blocking algorithms and maintain lists of compliant 3D printers. If a printer or software doesn’t make it through this red tape, it will be illegal to sell in the state.

The bill also requires the department to establish a database of banned blueprints that must be blocked by these algorithms. This database and printer list must be continually maintained as new printer models are released and workarounds are discovered, requiring effort from both the DOJ and printer manufacturers. 

For all the cost and burden of creating and maintaining such a database, those efforts will inevitably be outpaced by rapid iterations and workarounds by people breaking existing firearms laws.

Not just California

Once implemented, this infrastructure will be difficult to rein in, causing unintended consequences. The database meant for firearm parts can easily expand to copyright or political speech. Scans meant to be ephemeral can be collected and surveilled. This is cause for concern for everyone, as these levers of control will extend beyond the borders of the Golden State.

While California is at the forefront of print blocking, the impacts will be felt far outside of its borders. Once printer companies have the legal cover to build out anti-competitive and privacy-invasive tools, they will likely be rolled out globally. After all, it is not cost-effective to maintain two forks of software, two inventories of printers, and two distribution channels. Once California has created the infrastructure to censor prints, what else will it be used for?

As we covered in “Print Blocking Won’t Work,” these print-blocking efforts are not only doomed to fail, but will render all 3D printer users vulnerable to surveillance, either by forcing them into a cloud scanning solution for “on-device” results or by chaining them to first-party software that must connect to the cloud to regularly update its print-blocking system.

This law demands an unfeasible technological solution for something that is already illegal. Not only is this bad legislation with few safeguards, it risks the worst outcomes for grassroots innovation and creativity—both within the state and across the global 3D printing community.

California should reject this legislation before it’s too late, and advocates everywhere should keep an eye out for similar legislation in their states. What happens in California won't just stay in California.

Cliff Braun

EFF 🤝 HOPE: Join Us This August!

1 week 4 days ago

Protecting privacy and free speech online takes more than policy work—it takes community. Conferences like HOPE are where that community comes together to learn, connect, and push these ideals forward. That's why EFF is proud to be at HOPE 26.

Join us at this year's Hackers On Planet Earth, August 14-16 at the New Yorker Hotel in Manhattan! Get your ticket now and support our work: throughout April EFF will receive 10% of all ticket proceeds for HOPE 26. 

Grab your ticket!

See EFF at HOPE 26 in New York

While you're there, be sure to catch talks from EFF's technologists, attorneys, and activists covering a wide range of digital civil liberties topics. You can get a taste of the talks to come by watching last year's EFF presentations at HOPE_16 on YouTube:

How a Handful of Location Data Brokers Actively Tracked Millions, and How to Stop Them
In the past year, a number of investigations have revealed the outsized role of a few select companies in gathering, storing, and selling the location data of millions of devices - and by extension people - worldwide. This talk will elaborate on the technologies, data flows, and industry players which comprise this complicated ecosystem.

Ask EFF
Get an update on current EFF work, including the ongoing case against the "Department" of Government Oversight, educating the public on their digital rights, organizing communities to resist ongoing government surveillance, and more.

Systems of Dehumanization: The Digital Frontlines of the War Against Bodily Autonomy
Daly covers the bad Internet bills that made sex work more dangerous, the ongoing struggle for abortion access in America, and the persecution of trans people across all spectrums of life. These issue-spaces are deeply connected, and the digital threats they face are uniquely dangerous. Come to learn about these threat models, as well as the cross-movement strategies being built for collective liberation against an authoritarian surveillance state. 

Snag a ticket by the end of April to help support EFF's work ensuring that technology works for everyone. We hope to see you there!

Christian Romero

Hot Off the Press: EFF's Updated Guide to Tech at the US-Mexico Border

1 week 4 days ago

When people see Customs & Border Protection's giant, tethered surveillance blimp flying 20 miles outside of Marfa, Texas, lots of them confuse it with an art installation. Elsewhere along the U.S.-Mexico border, surveillance towers get mistaken for cell-phone towers. And that traffic barrel? It's actually a camera. That piece of rusted litter? That's a camera too.

Today we are publishing a major update to our zine, "Surveillance Technology at the U.S.-Mexico Border," the first since the second Trump administration began. To help people identify the machinery of homeland security, we've added more models of surveillance towers, newly deployed military tech, and a gallery of disguised trail cams and automated license plate readers.

You can get this 40-page, full-color guide through EFF’s Shop or download a Creative Commons-licensed version here.

"The Battalion Search and Rescue always carries the Electronic Frontier Foundation’s zine in our desert rig," says James Holeman, who founded the humanitarian group that looks for human remains in remote parts of New Mexico and Arizona. "We’re finding new surveillance all the time, and without a resource like that, we wouldn't know what the hell we're looking at.”

The original version of the zine was distributed nearly exclusively to our allies in the borderlands—journalists, humanitarian aid workers, immigrant advocates—to help them better identify and report on the technology they discover on the ground. We only made a handful available in our online shop, and they went fast.

This time, we've printed enough for our broader EFF membership. Even if you don't live near the border, you can support our work uncovering how the U.S. Department of Homeland Security's technology threatens human rights by picking up a copy.

The zine is the culmination of a dozen trips to the border, where we hunted surveillance towers and other tech installations. We attended multiple border security conventions to collect promotional and technical materials directly from vendors. We filed public records requests, reviewed thousands of pages of docs, and analyzed satellite imagery of the entire 2,000-mile border several times over. Some of the images came from local allies, like geographer Dugan Meyer and Borderlands Relief Collective, who continue to share valuable intelligence on the changing landscape of border surveillance.

The update is available in English, with an updated Spanish version expected later this year. In the meantime, we have reprinted the original Spanish edition.

If you want to know more, a collection of EFF's broader work on border technology is available here. And if you're curious exactly where these technologies are located, you can check our ongoing map.

SUPPORT THIS WORK

Dave Maass