The Dangers of California’s Legislation to Censor 3D Printing


California’s bill, A.B. 2047, will not only mandate censorware — software which exists to bluntly block your speech as a user — on all 3D printers; it will also criminalize the use of open-source alternatives. Repeating the mistakes of Digital Rights Management (DRM) technologies won’t make anyone safer. What it will do is hurt innovation in the state and risk a slew of new consumer harms, ranging from surveillance to platform lock-in. California must stand with creators and reject this legislation before it’s too late.

3D printing might evoke images of props from blockbuster films, rapid prototyping, medical research, or even affordable repair parts. Yet for a growing number of legislators, the perceived threat of “ghost guns” is a reason to impose restrictions on all 3D printers. Even though 3D-printed guns are already rare and banned under existing law, California may outright criminalize users having control over their own devices.

This bill is a gift for the biggest 3D printer manufacturers looking to adopt HP’s approach to 2D printing: criminalize altering your printer’s code, lock users into your own ecosystem, and let enshittification run its course. Even worse, algorithmic print blocking will never work for its intended purpose, but it will threaten consumer choice, free expression, and privacy.

A misstep here can have serious repercussions across the whole 3D printing industry, lead the way for more bad bills, and leave California with an expensive and ineffective bureaucratic mess.

What’s in the California Proposal?

Compared to the Washington and New York laws proposed this year, California’s is the most troubling. It criminalizes open source, reduces consumer choice, and creates a bureaucratic burden.

Criminalizing Open Source and User Control

A.B. 2047 goes further than any other legislation on algorithmic print-blocking by making it a misdemeanor for the owners of these devices to disable, deactivate, or otherwise circumvent these mandated algorithms. Not only does this effectively criminalize use of any third-party, open-source 3D printer firmware, but it also enables print-blocking algorithms to parallel anti-consumer behaviors seen with DRM.

Manufacturers will be able to lock users into first-party tools, parts, and “consumables” (analogous to how 2D printer ink works). They will also be able to mandate purchases through first-party stores, imposing a heavy platform tax. Additionally, manufacturers could force regular upgrade cycles through planned obsolescence by ceasing updates to a printer’s print-blocking system, thereby taking devices out of compliance and making them illegal for consumers to resell. In short, a wide range of anti-consumer practices can be enforced, potentially resulting in criminal charges.

Independent of these deliberate harms manufacturers may inflict, the history of DRM has shown that criminalizing code leads to more barriers to repair, more consumer waste, and far greater cybersecurity risk, because security research itself becomes a crime.

Less Consumer Choice

The bill favors incumbent manufacturers over newer competitors and over the interests of consumers.

Less-established manufacturers will need to dedicate considerable time and resources to implementing the ineffective solutions discussed above, navigating state approval, and potentially paying licensing fees to third-party developers of sham print-blocking software. While the biggest producers of this equipment may be able to absorb them, these burdens considerably raise the barrier to entry for a technology that can otherwise be built from scratch by individuals using common equipment. The result is clear: fewer options for consumers and more leverage for the biggest producers.

Retailers will feel this pinch, but the second-hand market will feel it most acutely. Resale is an important property right that lets people recoup costs, and it serves as a check on inflated prices. But under this bill, such resale risks misdemeanor penalties.

The bill locks users into a walled garden; it demands manufacturers ensure 3D printers cannot be used with third-party software tools. By creating barriers to the use of popular and need-specific alternatives, this legislation will limit the utility and accessibility of these devices across a broad spectrum of lawful uses.

Bureaucratic Burden 

A.B. 2047’s title 21.1 §3723.633-637 creates a print-blocking bureaucracy, leaning heavily on the California Department of Justice (DOJ). Initially, the DOJ must outline the technical standards for detecting and blocking firearm parts, and later certify print-blocking algorithms and maintain lists of compliant 3D printers. If a printer or software doesn’t make it through this red tape, it will be illegal to sell in the state.

The bill also requires the department to establish a database of banned blueprints that must be blocked by these algorithms. This database and printer list must be continually maintained as new printer models are released and workarounds are discovered, requiring effort from both the DOJ and printer manufacturers. 

For all the cost and burden of creating and maintaining such a database, those efforts will inevitably be outpaced by rapid iterations and workarounds by people breaking existing firearms laws.

Not just California

Once implemented, this infrastructure will be difficult to rein in, causing unintended consequences. The database meant for firearm parts can easily expand to copyright or political speech. Scans meant to be ephemeral can be collected and surveilled. This is cause for concern for everyone, as these levers of control will extend beyond the borders of the Golden State.

While California is at the forefront of print blocking, the impacts will be felt far outside of its borders. Once printer companies have the legal cover to build out anti-competitive and privacy-invasive tools, they will likely be rolled out globally. After all, it is not cost-effective to maintain two forks of software, two inventories of printers, and two distribution channels. Once California has created the infrastructure to censor prints, what else will it be used for?

As we covered in “Print Blocking Won’t Work,” these print-blocking efforts are not only doomed to fail, but will render all 3D printer users vulnerable to surveillance: either by forcing them into a cloud scanning solution for “on-device” results, or by chaining them to first-party software that must connect to the cloud to regularly update its print-blocking system.

This law demands an unfeasible technological solution for something that is already illegal. Not only is this bad legislation with few safeguards, it risks the worst outcomes for grassroots innovation and creativity—both within the state and across the global 3D printing community.

California should reject this legislation before it’s too late, and advocates everywhere should keep an eye out for similar legislation in their states. What happens in California won't just stay in California.

Cliff Braun

EFF 🤝 HOPE: Join Us This August!


Protecting privacy and free speech online takes more than policy work—it takes community. Conferences like HOPE are where that community comes together to learn, connect, and push these ideals forward. That's why EFF is proud to be at HOPE 26.

Join us at this year's Hackers On Planet Earth, August 14-16 at the New Yorker Hotel in Manhattan! Get your ticket now and support our work: throughout April EFF will receive 10% of all ticket proceeds for HOPE 26. 

Grab your ticket!

See EFF at HOPE 26 in New York

While you're there, be sure to catch talks from EFF's technologists, attorneys, and activists covering a wide range of digital civil liberties topics. You can get a taste of the talks to come by watching last year's EFF presentations at HOPE_16 on YouTube:

How a Handful of Location Data Brokers Actively Tracked Millions, and How to Stop Them
In the past year, a number of investigations have revealed the outsized role of a few select companies in gathering, storing, and selling the location data of millions of devices - and by extension people - worldwide. This talk will elaborate on the technologies, data flows, and industry players which comprise this complicated ecosystem.

Ask EFF
Get an update on current EFF work, including the ongoing case against the "Department" of Government Oversight, educating the public on their digital rights, organizing communities to resist ongoing government surveillance, and more.

Systems of Dehumanization: The Digital Frontlines of the War Against Bodily Autonomy
Daly covers the bad Internet bills that made sex work more dangerous, the ongoing struggle for abortion access in America, and the persecution of trans people across all spectrums of life. These issue-spaces are deeply connected, and the digital threats they face are uniquely dangerous. Come to learn about these threat models, as well as the cross-movement strategies being built for collective liberation against an authoritarian surveillance state. 

Snag a ticket by the end of April to help support EFF's work ensuring that technology works for everyone. We hope to see you there!

Christian Romero

Hot Off the Press: EFF's Updated Guide to Tech at the US-Mexico Border


When people see Customs & Border Protection's giant, tethered surveillance blimp flying 20 miles outside of Marfa, Texas, lots of them confuse it with an art installation. Elsewhere along the U.S.-Mexico border, surveillance towers get mistaken for cell-phone towers. And that traffic barrel? It's actually a camera. That piece of rusted litter? That's a camera too.

Today we are publishing a major update to our zine, "Surveillance Technology at the U.S.-Mexico Border," the first since the second Trump administration began. To help people identify the machinery of homeland security, we've added more models of surveillance towers, newly deployed military tech, and a gallery of disguised trail cams and automated license plate readers.

You can get this 40-page, full-color guide through EFF's Shop or download a Creative Commons-licensed version here.

“The Battalion Search and Rescue always carries the Electronic Frontier Foundation’s zine in our desert rig,” says James Holeman, who founded the humanitarian group that looks for human remains in remote parts of New Mexico and Arizona. “We’re finding new surveillance all the time, and without a resource like that, we wouldn't know what the hell we're looking at.”

The original version of the zine was distributed nearly exclusively to our allies in the borderlands—journalists, humanitarian aid workers, immigrant advocates—to help them better identify and report on the technology they discover on the ground. We only made a handful available in our online shop, and they went fast.

This time, we've printed enough for our broader EFF membership. Even if you don't live near the border, you can support our work uncovering how the U.S. Department of Homeland Security's technology threatens human rights by picking up a copy.

The zine is the culmination of a dozen trips to the border, where we hunted surveillance towers and other tech installations. We attended multiple border security conventions to collect promotional and technical materials directly from vendors. We filed public records requests, reviewed thousands of pages of docs, and analyzed satellite imagery of the entire 2,000-mile border several times over. Some of the images came from local allies, like geographer Dugan Meyer and Borderlands Relief Collective, who continue to share valuable intelligence on the changing landscape of border surveillance.

The update is available in English, with an updated Spanish version expected later this year. In the meantime, we have reprinted the original Spanish edition.

If you want to know more, a collection of EFF's broader work on border technology is available here. And if you're curious exactly where these technologies are located, you can check our ongoing map.

SUPPORT THIS WORK

Dave Maass

Speaking Freely: Dr. Jean Linis-Dinco


Dr. Jean Linis-Dinco is an activist-researcher working at the intersection of human rights and technology. Born in the Philippines and shaped by firsthand experience with inequality and state violence, Jean has spent her life pushing back against systems that profit from oppression. She refuses to accept a world where tech is just another tool for corporate gain. Instead, she fights for technologies and policies that put people before profit and justice before convenience. Jean earned her PhD in Cybersecurity from the University of New South Wales, Canberra, where she exposed how governments weaponized propaganda and disinformation during the Rohingya crisis in Myanmar. She currently serves as the Digital Rights Advisor for the Manushya Foundation.

David Greene: Welcome. To get started can you just introduce yourself to folks?

Jean Linis-Dinco: I'm not very good at introducing myself and I rarely do so within the context of work because I always believe that people are more than their jobs.

But first, I would like to thank you for this opportunity to share my thoughts. I've learned this kind of introduction from Kumu Vicky Holt Takamine in Hawai’i. She taught me how to introduce myself beyond titles.

So, my name is Jean, my waters are the West Philippine Sea, and I was born and raised in the land of resistance, one of the original eight provinces that revolted against Spain as they are represented by the eight rays of the sun on the Philippine flag. My ancestors fought for the freedom of the Filipino people against Spanish colonial rule, before we became subjugated once again, this time under the United States for another 48 years. The impacts of that history continue to reverberate through the domestic and international policies that ultimately pushed me out of my own country as an overseas Filipino worker.

DG: Can you tell us a bit about Manushya Foundation?

JLD: Absolutely. Manushya Foundation is a women-led organization that works with activists and human rights defenders who are targeted, who face harassment and transnational repression for their work. My work with them is on the policy and advocacy side in relation to their digital rights portfolio. It involves challenging laws and policies that criminalize freedom of expression or freedom of speech online.

It also means confronting the role of private corporations and private platforms. Because that power is rarely transparent. Big tech power is often unaccountable, as we've seen in recent years. Working in a civil society organization like Manushya, you get involved with the work on the ground and take part in grassroots-led advocacy confronting corporate abuse.

In my work, I have met people from all sorts of backgrounds. And across those encounters, I've noticed some troubling trends in some civil society organizations. There are heaps of civil society leaders who are very keen to have a seat at the table with big tech companies. It’s often hidden behind the language of ‘stakeholder engagement’. We refuse to do that at Manushya Foundation. We don’t want to be used as a rubber stamp for decisions that have already been made behind NDAs or decisions where communities most affected by these technologies were never even in the room to begin with.

I think civil society organizations should not allow themselves to be drawn into that orbit. That is very contentious in this era, because I feel like civil society bought the story that big tech could be partners in progress. We walked into their boardrooms, signing NDAs as if proximity to power meant that we were shaping it. And we've seen how in the end we're actually just giving them legitimacy. They turn our critiques and our statements into endorsements. I don't think there is any progressive form of collaboration with big tech companies that is not extractive, because the uncomfortable truth is that not everyone who wants a seat at the table is there to change what is being served.

DG: As someone who participates in multi-stakeholder things all the time, I completely hear that criticism. One of the things I've said is, multi-stakeholder engagement as a member of civil society takes a few forms. One, you're in the room, but you don't have a seat at the table. Two, you have a seat at the table, but you don't have a microphone. And three, they give you a microphone, but they leave the room when you talk. When we as civil society do engage, we have to be very, very intentional about ensuring it’s effective engagement. We've left many things that were “multi-stakeholder” because it was actually just NGO-washing. You know, it was only so they could say that we were sort of invited to the cocktail party afterwards.

I've heard from you before that Manushya has a bit of a regional focus. Would you say it has a feminist focus or is it broader in terms of marginalized communities?

JLD: At its core, Manushya is a decolonial intersectional feminist organization. What that means is that we are fundamentally concerned with systems of power. In our work, we always ask who holds the power? Who is crushed by it? And who has been deliberately kept from it?

Personally, I am critical of lean-in feminism, which was popularized by a certain Meta executive. I do not agree with that kind of feminism, because it tells us women that if we just work harder, speak louder within existing power structures, we will be free. But free to do what, exactly? To participate in the same system that exploits people? The women who can afford to lean in are women who already occupy a certain class position that makes them legible to power. And most of them are white women who already have the capacity or already have a standing in society to be listened to.

I cannot lean in. Because lean-in feminism was never designed for women like me.

And then there is girl boss feminism, which I am also very, very critical of. Because more often than not, the women who call themselves girl bosses or self-made are not actually self-made. Behind every ‘self-made’ woman is a hidden economy of invisible labor. Often, they have maids. And often, those maids are Filipino women, women like my mother. Girl boss feminism is about one woman’s liberation built on another woman's bondage. I think it is absurd to call it feminism when it is basically just class warfare with better branding.

So, yes. It gets very personal.

DG: Why don’t you tell us what freedom of expression and free speech mean to you?

JLD: Well, there is this concept of freedom of speech and freedom of expression, and it is viewed as something abstract because we cannot see speech. It is intangible. We can hear it, but we cannot see it. It's not something that we hold. It is not like food, water or housing. That is precisely the problem. Because at its core freedom of expression must be understood through material conditions.

What that means is that it lives or dies in the structures that govern who gets heard, who gets punished, who gets killed, who is disappeared, whose voices are treated as disposable. I would say freedom of expression must be understood as inseparable from justice because I do not believe anyone can claim to defend freedom of expression while tolerating systems that silence through fear, that silence through poverty, that silence through surveillance. Because a person working two jobs to make ends meet, a person targeted by the state, a person whose community is over-policed, I don't think they stand on equal ground with a media mogul or a political elite.

The definition of free expression must move beyond the question of whether speech is allowed. The real foundation of freedom of expression and freedom of speech is who can speak without consequences and who pays the price for doing so. It demands responsibility and it's not a shield for domination, because when speech is used to dehumanize or to incite violence or to reinforce structures of oppression, the imperialism of domination, then that participates in harm.

A serious commitment to freedom requires us to confront that harm and not hide behind languages of rights while ignoring the realities of power.

DG: How do you see that? What's the example of how that plays out, for instance in the digital rights realm now?

JLD: Well, there is, as you know—one could say it's even more evident in the United States—the “freedom of speech absolutist,” as we’ve seen with Elon Musk. I don’t think he actually believes in freedom of speech at all. Because from what it appears, all he cares about is maintaining the conditions under which people who look like him get to speak.

Speech does not exist in a vacuum. It is always in service of something.

The question is what kind of society are we actually building? I want a society where people can speak truthfully about the conditions and be heard, where dissent is not criminalized and where expression becomes a force for transformation rather than a tool for control. Free speech is a collective condition and not an individual right. It is inseparable from the question of what kind of society we are building. Because you cannot suddenly say that you are for freedom of expression while owning the platform that decides whose speech is amplified and whose is buried by an algorithm designed to serve capital. Building that society requires dismantling the structures that have always decided who gets to speak and who gets disappeared for saying the wrong thing to the wrong people.

DG: It always bothers me when I hear someone like Musk being called a free speech absolutist, because, first of all, he’s certainly not an absolutist. I actually don't know anyone who is an absolutist. But also, I don't even think he cares about free speech that much. I think that's what we see in the US a lot now, people for whom it's not a sincere belief, but they get to speak as part of their privilege. There are also other people who think they deserve the privilege to speak because, societally, they've never been subjected to controls. When their community of people, who historically have been able to speak, suddenly can't, that strikes them as the most horrible infringement on freedom of speech, because it disturbs their view of privilege and who speaks. And when they see marginalized voices get silenced, it doesn't bother them, because that's their norm. That's how I see it.

JLD: I'm here on a fellowship in the UK and my main study is on the American conquest of the Philippines through natural language processing. And it's really interesting. I said during my talk that the United States no longer needs to use Nazi Germany as a metaphor to describe their contemporary politics. You know, American people just need to read history books not written by white men.

DG: Okay, let's dive into the age verification stuff. I think that age verification and age mandates and age regulations trying to age gate the internet are really interesting examples of the interplay between freedom of speech and a broader repression of rights. I met you at Digital Rights in Asia Pacific Assembly (DRAPAC) 2025, and I want to just give you a platform here to share your views on age verification. I was really moved by your statement at DRAPAC and what you all published on your website.

JLD: I wrote that piece at a time when Australia was pushing through that legislation. And now we are seeing a lot of Southeast Asian countries following that route. It always just takes one domino to fall for everyone to follow, doesn’t it?

But what surprised me is how much defeatism there is among some civil society organizations. I feel like they have already accepted the logic of the state. There’s always this preemptive surrender of the ground on which the struggle should be taking place. And I realized the same thing is happening again.

I was on a call recently with a group of civil society organizations and someone floated the idea of supporting identity verification on social media in the Philippines as a way to counter disinformation. She came from a different understanding of the political economy, but the moment I heard it, I was disappointed. The argument is dangerous and it plays with fire because it assumes that anonymity is the problem. It assumes that the solution is to hand the state and the corporations even more power, more information, more control, and give them even more ability to track and discipline people.

I feel like this is the same trend we see with age-gating, because the claim with identity verification in the context of the Philippines is that it can be used responsibly if there are guardrails. That’s gambling with people’s lives. There is no historical precedent of a state declining to expand its monitoring powers once the door to surveillance is open. I don't think any guardrails will ever hold.

Civil society groups who entertain the idea of breaking anonymity to solve misinformation are rehearsing a dangerous illusion because anonymity is not a luxury. And it feels like it is being framed that way. Anonymity is a response to the political conditions where speaking freely can cost you your life. It exists because the risks are there and they are not imagined.

DG: I do think there are some people who look at age-gating from a good place. Would you say you see age verification mandates as just inevitably being tools of oppression for marginalized young people?

JLD: Above everything, it shifts the Overton window toward the broader acceptance of surveillance. In political science, when we say we're shifting the Overton window, we mean the space of political debate in public discourse is being narrowed. And now we are seeing it move towards the same old thing of, ‘if you have nothing to hide, you have nothing to worry about.’ When you shift the Overton window towards the broader acceptance of surveillance, you are doing something very simple and very dangerous: turning intrusive monitoring into a normal routine of everyday life. It starts with policies that redefine surveillance as safety. Then age-gating is established through technical infrastructure that, of course, can be repurposed later.

Any system capable of verifying age is also capable of verifying identity, tracking behavior, matching accounts to real people, and storing data that can be accessed by literally anyone. These policies teach people to internalize the idea that anonymity is suspicious. I think that is the most dangerous part of it: how that cultural shift is getting more and more powerful, because it moves us, the public, towards believing that only those with nothing to hide deserve rights. Then what comes next after that? Surveillance becomes a default condition for digital participation. If you cannot enter a platform without proving who you are, then surveillance becomes a prerequisite for basic communication.

Then, of course, the most powerful shift is the desensitization of younger generations to being monitored. If we raise children in a system where every login requires an identity check, they will grow into adults who assume that constant tracking is normal. This is what shifting the Overton window looks like in practice, because once you accept that premise, you have already surrendered the most important ground. The fight is no longer about whether surveillance should exist, but how much of it you're willing to tolerate. And we know the people who pay the price are not men in suits.

DG: Then who does pay the price?

JLD: It is always the working-class children and working-class families. The homeless youth who rely on social media to find food, to find a place to shower. The homeless youth who rely on social media to find community and get jobs. Then we have queer young people who are also getting locked out of spaces where they could find community. And we're locking them out of those spaces because it's ‘for their safety.’

DG: So even if there was magic tech that could solve the verification part in a completely privacy protective way, you still can't get around the infringement on the rights of young people. That seems to be the goal of the law.

JLD: Yeah, absolutely. Because why do you need to age-gate social media if it's not for control? We always frame things like this as protection under the guise of paternalism. But deep inside, we see how it is a tool to control a young population who are just now getting very politically active. And I feel like, as I'm now a geriatric millennial, people of my age and older generations have betrayed the younger generation by doing this at such a precarious time, where there is a genocide happening, where there are countries being bombed. We are in a time of conflicts started by rich men, amid an ecological collapse, and our concern is children being online? Let’s not rob the children of today of their future. Age-gating punishes the young for crises they did not create, whilst protecting those truly responsible from accountability.

The reality outside of social media will not go away even if kids are shut off from it. We need to confront the truth that the conditions that ruin childhood are not on social media. They are bombs, poverty, divisive politics. They're due to how we're killing public funding and funneling it into private corporations, lining the pockets of billionaires in the name of what? That is the main problem of our society, but we're not addressing it. We're just locking kids out of social media, because it's easier to do that than to address the fact that society needs an overhaul.

DG: And I think what we've seen with Australia is a lot of talk about how kids can evade the protections, whether they're using VPNs or somehow faking the ID, so all age-gating is doing is adding friction to the process. And that tends to have highly discriminatory effects also, right?

JLD: Friction might be a minor obstacle for a wealthy child with supportive parents, but friction keeps a different child off the internet. A wealthy child might have the technical means to buy a workaround to allow them to have access. There was a story in the news about an influencer family who just moved out of the country because of the age-restricted social media ban. This is the reality—people who have the means to move will move. And those who have no means to move, those who are struggling just to put food on the table—will just stay. This is anti-poor. Age gating is anti-poor.

DG: Okay, switching gears just a little bit. Was there any sort of personal experience you've had with freedom of expression that has informed how you think about the issue? Was there any kind of formative experience where you felt censored or witnessed censorship happening to someone else that really informs how you think about it now or made you care about the issue deeply?

JLD: I don't think there's one specific personal experience, per se, that has shaped how I feel about freedom and liberty in general. Growing up in the Philippines, you're forced to care, especially if you're in a working-class neighborhood like where I grew up. At an early age you realize how unfair the world is. And at first, you think it is just unfair that the other children in my classroom have families who can afford a pencil case and we cannot.

It was also very difficult to fit in in the Philippines. I was labeled a troublemaker as a child. And I think some of that is still recognizable in who I am today. I remember my sixth-grade history teacher approached me after reading an essay I wrote about the Philippines. She said that I should tone down my language because it would get me in trouble later in life. And I didn't understand what she meant by that. I didn't listen to her, clearly.

But that instinct stayed with me and I think it followed me through life. It followed me here—you know, the idea that you should say it, but not like that. Speak, but don't disrupt. Critique, but don't offend. And I think this is where my relationship with liberty and freedom or, specifically, freedom of expression kind of took place. It was not one defining moment, but it's in a series of small frictions, as you called it. Because over time, you realize that the pressure to soften your voice never disappears. And I don't think it ever will. And I chose not to then, and I choose not to now. And there’s a lot of consequences that come with that. I don't think I will be invited to a lot of panels or keynotes. But it's a hill I'm willing to die on.

This is also the same pattern we see at a larger scale in the Philippines. You see communities speak out about land or about labor and then suddenly they are surveilled, they're either disappeared or dead. I realized quickly that freedom of expression exists on paper, but in practice it depends on who you are.

DG: Do you think there are situations where it might be appropriate for governments, or even companies, to limit freedom of expression? And if the answer is yes, what might those be?

JLD: Freedom of speech should always demand a responsibility. It has always existed within structures of power that determine whose speech is protected. So when we ask whether speech should be limited, we have to first ask: limited by whom, and in whose interest?

But I don't think the government or corporations can do that. Corporations’ end goal is always profit. And governments have historically used the language of limitation to silence the very people who dare to challenge their authority.

I believe in community-based understanding of how we actually could solve this problem, because, in the end, our relationship with our community is the core of our identity. And through those moments of interaction, we can see that freedom of speech is collective. It is always tied to building a society where people can speak truthfully, and dissent is not criminalized. It’s a matter of making sure that we understand that freedom and liberty are not an individual issue, but something that affects the whole community.

DG: You’re saying this is more about community norms or our broader social compact.

JLD: When I say the community must decide, I am not offering you a utopia. I am offering you a different site of struggle. One that centers the people who have always known, in their bodies, what dehumanizing language does before it becomes dehumanizing violence. We have seen this dynamic in the way hate speech fuels violence back home in the Philippines, against indigenous communities, queer people, Muslims in Mindanao and the urban poor. Because language becomes permission that activates the system of policing and militarization already pointed at the most vulnerable. The main boundaries must be rooted in the politics of liberation, not the politics of control. Speech that punches up, that reveals injustice, that challenges power, that speech must be protected. But speech that punches down, that facilitates state violence, that dehumanizes people, I think that must be confronted, if not challenged or destroyed. We have to stop pretending that those two forms of speech are morally equivalent.

DG: Okay, last question, one that we like to ask everyone. Who's your free speech hero? And why?

JLD: This is actually a really tough question for me because I don't actually think I have one, to be honest. I want to push back on the idea of having a single hero. Because, freedom of speech—any freedom or liberty that we have today—has never been secured by one individual alone. It has been fought for by movements. The eight-hour workday, unions, women's suffrage, despite that it was just white women who were first able to vote, and so on and so forth. It was fought for by movements, by working class people, whose names we often forget. Because a lot of movements in history, the public memory of a movement narrows it down to a single figure, often male. Movement starts from the people, because the movement would not be sustained without the drive of the working people who dedicated free, unpaid labor for it to succeed. Because without them, I don't think there would be any movement to speak of. Without them there's no platform from which any of these figures could actually emerge. 

David Greene

War as a Pretext: Gulf States Are Tightening the Screws on Speech—Again

18 hours 50 minutes ago

War does not only reshape borders. It also reshapes what can be seen, said, and remembered. 

When governments invoke “misinformation” during wartime, they often mean something simpler: speech they do not control. Since the escalation of conflict between the United States, Israel, Iran, and related spillover attacks in the Gulf, several governments have intensified efforts to silence dissent and restrict the flow of information.

Journalism under pressure

For journalists, the space to operate—already constrained in much of the Gulf—is narrowing further. Across the region, several countries (including the UAE, Qatar, and Jordan) have restricted access to conflict areas, warned of legal consequences for publishing footage, and drawn red lines around wartime reporting. These measures weaken independent coverage, elevate official narratives, and make it harder for the public to get an accurate account of events on the ground.

Reporters Without Borders has documented an intensifying crackdown on journalists across Gulf countries and Jordan, including restrictions on reporting, legal threats, and heightened risks for those who deviate from official narratives. This aligns with the broader warning from the UN that repression of civic space and freedom of expression has significantly deepened across the region during the war.

Criminalizing speech, one post at a time

For ordinary internet users, the restrictions are just as severe. Since February, hundreds of people have reportedly been arrested across the region for social media activity linked to the war. In many Gulf states, the legal infrastructure enabling this is already well-established: expansive cybercrime and media laws criminalize vaguely defined offenses such as “spreading rumors,” “undermining public order,” or “insulting the state.” In wartime, these provisions become catch-all tools: flexible enough to apply to nearly any form of dissent.

In Bahrain, authorities have reportedly cracked down on people who protested or shared footage of the conflict online. The Gulf Centre for Human Rights has reported 168 arrests in the country tied to protests and online expression, with defendants potentially facing serious prison terms if convicted.

In the UAE, authorities have arrested nearly 400 people for recording events related to the conflict and for circulating information they described as misleading or fabricated. Police have claimed this material could stir public anxiety and spread rumors, and state-linked reporting has described the crackdown as part of a broader effort to defend the country from digital misinformation.

Saudi Arabia has also intensified restrictions, issuing a statement on March 2 banning the sharing of rumors or videos of unknown origin, and launching a campaign discouraging residents from taking or posting photos. The campaign included a hashtag that reads “photography serves the enemy.” Journalists have been prevented from documenting the aftermath of airstrikes on the country. Kuwait, Qatar, and Jordan have adopted similar restrictions on wartime imagery and reporting.

Qatar’s Interior Ministry has arrested more than 300 people for filming, circulating, or publishing what the ministry deemed to be misleading information. Taken together, these measures show how quickly wartime speech is being folded into existing legal systems designed to punish dissent.

The regional playbook

What’s striking is how consistent these measures are across different countries. As we recently wrote, governments across the broader region have enacted sweeping cybercrime and media laws over the past fifteen years, which they are now putting to use. Across different countries, the same tools are being used: existing laws, fresh bans on sharing wartime imagery, and tighter restrictions on journalists and reporting. The vocabulary changes slightly from place to place, but the logic is the same: national security, public order, rumors, and social stability are justifications for control.

This is not just a series of isolated incidents. It is a regional playbook for silencing critics and narrowing the public record. Gulf states have long relied on censorship and surveillance; the war has simply made those methods easier to justify and harder to challenge.

From “digital hopes” to digital control

As we’ve documented in our ongoing blog series, digital platforms were once seen—at least in part—as spaces that could expand public discourse in the region. But as we’ve also argued, those early “digital hopes” have given way to systems of regulation and control. 

The current crackdown is a continuation of that trajectory, not a temporary departure from it. States are not just reacting to the war; they are leveraging it to consolidate long-standing ambitions to dominate the digital public sphere.

It may be tempting to see these measures as temporary, but emergency powers—like the one enacted in Egypt following the 1981 assassination of Anwar Sadat that lasted for more than three decades—have a way of sticking around. Legal precedents that are set during wartime often become normalized—or reinvoked during times of crisis, as occurred in 2015, when France brought back a 1955 law related to the Algerian War of Independence amidst the Paris attacks.

And the stakes are high. As we’ve seen in Syria and Ukraine, regulations and platform policies can cause wartime human rights documentation to disappear. When journalists are constrained and eyewitness footage is criminalized, accountability is weakened. And when arrests become widespread, people learn to self-censor.

Protecting freedom of expression in times of conflict is a requirement for accountability, not a concession to disorder. When people can document, report, and share information freely, it becomes harder for abuses to be hidden behind official narratives. Even in wartime, the public interest is best served by defending the space to tell the truth, not by silencing speech.

Jillian C. York

We Need You: Our Privacy Cannot Afford a Clean Extension of Section 702

3 days 20 hours ago

We go through this every couple of years: Section 702 of the Foreign Intelligence Surveillance Act (FISA), which authorizes warrantless surveillance that sweeps up Americans’ communications with foreign persons overseas, is up for renewal. As always, Congress can reauthorize it with or without changes, or just let it expire. We know, we know, it’s a pain to have to do this every few years–but it gives us a chance to lift the hood of this behemoth tool of government surveillance and tinker with how it works. That’s why it’s so important right now to urge your Member of Congress not to pass any bill that reauthorizes Section 702 without substantial reforms.

Take action

TELL congress: 702 Needs Reform

Section 702 is rife with problems, loopholes, and compliance issues that need fixing. The National Security Agency (NSA) collects full conversations being conducted by surveillance targets overseas and stores them, allowing the Federal Bureau of Investigation (FBI) to operate in a “finders keepers” mode of surveillance—they reason that it's already collected, so why can’t they look at those conversations? There, the FBI can query and even read the U.S. side of that communication without a warrant. The problem is, people who have been spied on by this program won’t even know and have very few ways of finding out. EFF and other civil liberties advocates have been trying for years to know when data collected through Section 702 is used as evidence against them.  

There’s simply no excuse for any Member of Congress to support a "clean" reauthorization of Section 702. Anyone who votes to do so does not take your privacy seriously. Full stop.  

The intelligence community and its defenders in Congress, as always, seem more interested in defending their rights to read your private communications than in protecting your right to privacy. It’s not really a compromise between safety and privacy if it's always your privacy that gets sacrificed. Now, we’re drawing a line in the sand: Congress cannot pass a clean extension.  

Use this EFF tool to write to your Member of Congress and tell them not to pass a clean reauthorization of Section 702.  

Matthew Guariglia

Yikes, Encryption’s Y2K Moment is Coming Years Early

4 days 12 hours ago

Google moved up its estimated deadline for quantum preparedness in cryptography to 2029—only 33 months from now. That’s earlier than previous deadlines, and it proposed the new post-quantum migration deadline because of two new papers that mark a big jump in the state of the technology. It’s ahead of schedule, but not altogether unexpected. Cryptographers and engineers have been working on this for years, and as the deadline gets closer, it’s not surprising to see more precise timeline estimates come up.

The preparation for the Y2K bug is not a perfect analogy. Like Y2K, if systems are not updated in time, anyone with a powerful enough quantum computer will be able to more easily insert malware into the core systems of a computer and fake authentication to allow impersonation merely by observing network traffic. These are the threats whose mitigation timelines have been moved up.

But unlike Y2K, there’s a second sort of attack that we already need to be prepared for: quantum computers will be able to decrypt years of captured messages, sent over encrypted messaging platforms any time before those platforms upgraded to quantum-resistant encryption. That type of attack has been the main focus of engineering efforts so far, and mitigation is well on its way, since anything sent before the upgrade might eventually be compromised.

Fortunately, not all cryptography is broken by quantum computers. Notably, symmetric encryption is quantum resistant. That means that if you have disk encryption turned on, you shouldn’t have to worry about quantum computers breaking into your phone, as long as your system’s keys are long enough. The problem is how you get the keys to do that encryption, and how you authenticate software on your device and in the cloud.
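The reason symmetric ciphers hold up is that the best known quantum attack on them, Grover’s algorithm, offers only a quadratic speedup, which roughly halves a key’s effective bit strength. Here is a minimal back-of-the-envelope sketch of that rule of thumb; the function names and the 128-bit security floor are illustrative assumptions, not an official standard:

```python
# Rough sketch: Grover's algorithm searches an n-bit keyspace in about
# 2^(n/2) steps, so a symmetric key's effective strength is roughly halved.

def effective_strength_bits(key_bits: int) -> int:
    """Approximate post-quantum security level of a symmetric key."""
    return key_bits // 2

def quantum_resistant(key_bits: int, floor: int = 128) -> bool:
    """True if the halved strength still meets a chosen security floor.
    The 128-bit floor is a common rule of thumb, not an official cutoff."""
    return effective_strength_bits(key_bits) >= floor

for cipher, bits in [("AES-128", 128), ("AES-256", 256)]:
    print(cipher, effective_strength_bits(bits), quantum_resistant(bits))
    # AES-128 drops to ~64-bit effective strength (below the floor);
    # AES-256 retains ~128 bits (at the floor).
```

Under this rule of thumb, AES-128 falls to roughly 64-bit effective strength while AES-256 keeps about 128 bits, which is one reason long symmetric keys are generally considered quantum resistant.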

Engineers: Time to Lock In

For those whose work touches on any sort of cryptographic deployment, you’re hopefully already working on the post-quantum transition. If not, you really should be; there are quite a few relevant posts and updates with more information about what this news means for you. Your key agreement systems should be upgraded soon if they’re not already because of store-now-decrypt-later attacks. Now it’s time to prepare for authentication attacks on forged signatures as well.

In some cases, you may need to wait on others to finish their work first. If you’re using NGINX to host websites on Ubuntu, for example, the security settings you need to upgrade key agreement were just released in version 26.04. Updates are rolling out, so keep checking in and upgrade your systems as soon as you’re able to.

Users: Stay Updated, Check on Your Chats

But if you’re not in any position to be updating software or hardware, there may be some additional steps you can take to make sure you're as protected as possible. You’ll want to get the latest post-quantum protections as soon as they're available, so if you don't already have a habit of applying software updates in a timely manner, now’s a good time to start.

If you want to know if the website you’re using or the encrypted messaging app you’re chatting over will leak its data in a few years to anyone storing traffic now, you can search for its name with the word "quantum." The engineers are usually pretty proud of their work and have announced their post-quantum support (like what we’ve seen from Signal and iMessage). If you can’t find that information, you may want to have extra consideration for what you say over the internet, or switch the tools you're using. Those are the big areas to worry about now, before quantum computers are actually here, because they could result in the mass leakage of old messages.

The new deadline means that some technologies are simply not going to make it in time and will have to be left by the wayside, like trusted execution environments (TEEs), due to the slower speed of hardware deployments. TEEs are how companies do private processing on user data in the cloud, and they’re particularly relevant to AI offerings. 

Even now, though they offer more protection than processing data in the clear, TEEs are not as secure as homomorphic encryption or doing the processing on device. Post-quantum, the security level gets much closer to computation on cleartext, and even with strong user controls, that makes it way too easy to accidentally backdoor your own encrypted chats. If you’re worried about the contents of messages in an encrypted chat being exposed, you’ll probably want to completely avoid using AI features that might leak that content, such as summarization of recent chat history and notifications, and reply composition assistance. 

How’s the Transition Going So Far?

The work to update the world to post-quantum is well on its way. NIST finalized the standards for post-quantum cryptographic algorithms back in 2024. The larger platforms, websites, and hosting providers have already updated their algorithms, so even now, you’re probably already using post-quantum algorithms to access some of the internet. Measurements vary pretty widely, but up to about 4 in 10 websites currently support a post-quantum key exchange.

There’s still some work to be done in figuring out how to make the needed changes—for example, the way you find out a website’s private key to make HTTPS possible is being reworked to make room for larger signatures. Some technologies are just coming to market, like the post-quantum root of trust available now in some Chromebooks. In practice, this means that as you think about replacing your current devices in the next few years, you may want to check if you’re picking up hardware that has post-quantum support, if those specific protections are required for your threat model.

For the areas that still need updating, how much can we expect to actually get ready by the new deadline? It’s likely that not every cryptographically-capable device and deployment will be ready in time, and hardware with hard-coded certificates will probably be the last to update. We saw that happen when SHA-1 was deprecated; Point of Sale systems in particular were late adopters. While governments and large companies with quantum computers may not be interested in stealing money from cash registers, they will be interested in accessing secrets about people’s private lives. That’s why it’s so important that everyone does their part to upgrade, to protect the details of private communications and browsing. 

And there’s a good chance that older devices that won’t receive quantum-resistant updates were vulnerable to some other attack already. Quantum computation is just one type of attack on cryptography that’s notable for the scale of migration required, and how every public-key cryptosystem and authentication scheme has to do the work to prepare. That’s not a difference in kind; it’s a difference in scale, and some systems will inevitably be left behind.

Quantum preparedness hits different industries and services in different ways, but services that handle communications and financial information are particularly susceptible to risk, and need to act quickly to protect the privacy and security of billions of people.

Erica Portnoy

Comparison Shopping Is Not a (Computer) Crime

4 days 16 hours ago

As long as people have had more than one purchasing option, they’ve been comparing those options and looking for bargains. Online shoppers are no exception; in fact, one of the potential benefits of the internet is that it expands our options for everything from car rentals to airline tickets to dish soap. New AI tools can make the process even easier. These tools could provide some welcome relief for consumers facing sky-high prices that many cannot afford.

Unfortunately, Amazon is trying to block these helpful new tools, which can steer shoppers towards competitors. Taking a page from Facebook and Ryanair, it is trying to use computer crime laws to do it.

Amazon’s target is Perplexity, which makes an AI-enabled web browser, called Comet, that allows users to browse the web as they normally would, but can also perform certain actions on the user’s behalf. For example, a user could ask Comet to find the best price on a 24-pack of toilet paper, and if satisfied with the results, have the browser order it. Amazon claims that Perplexity violated the Computer Fraud and Abuse Act (CFAA) by building a tool that helps users access information on Amazon and engage with the site.

Unfortunately, a federal district court agreed. The court’s fundamental mistake: relying on the Ninth Circuit’s misguided decision in Facebook v. Power Ventures, rather than the court’s much better and more applicable reasoning in hiQ Labs.

Perplexity has appealed to the Ninth Circuit. As we explain in an amicus brief filed in support, the district court’s mistake, if affirmed, could lead to myriad unintended consequences. Overbroad readings of the CFAA have undermined research, security, competition, and innovation. For years, we’ve worked to limit its scope to Congress’s original intention: actual hacking that bypasses computer security. It should have nothing to do with Amazon’s claims here, not least because most of Amazon’s website is publicly available.

The court’s approach would be especially dangerous for journalists and academic researchers. Researchers often create a variety of testing accounts. For example, if they’re researching how a service displays housing offers, they may create separate accounts associated with different race, gender, or language settings. These sorts of techniques may be adversarial to the company, but they shouldn’t be illegal. But according to the court’s opinion, if a company disagrees with this sort of research, it doesn’t have to just ban the researchers from using the site; it can render that research criminal merely by sending a letter notifying the researchers that they’re not authorized to use the service in this way.

A broad reading of CFAA in this case would also undermine competition by enabling companies to limit data scraping, effectively cutting off one of the ways websites offer tools to compare prices and features.

The Ninth Circuit should follow Van Buren’s lead and interpret the CFAA narrowly, as Congress intended. Website owners do not need new shields against independent accountability.

Related Cases: Facebook v. Power Ventures
Corynne McSherry

EFF is Leaving X

4 days 17 hours ago

After almost twenty years on the platform, EFF is logging off of X. This isn’t a decision we made lightly, but it might be overdue. The math hasn’t worked out for a while now.

The Numbers Aren’t Working Out

We posted to Twitter (now known as X) five to ten times a day in 2018. Those tweets garnered somewhere between 50 and 100 million impressions per month. By 2024, our 2,500 X posts generated around 2 million impressions each month. Last year, our 1,500 posts earned roughly 13 million impressions for the entire year. To put it bluntly, an X post today receives less than 3% of the views a single tweet delivered seven years ago. 

We Expected More

When Elon Musk acquired Twitter in October 2022, EFF was clear about what needed fixing.

We called for: 

  • Transparent content moderation: Publicly shared policies, clear appeals processes, and renewed commitment to the Santa Clara Principles
  • Real security improvements: Including genuine end-to-end encryption for direct messages
  • Greater user control: Giving users and third-party developers the means to control the user experience through filters and interoperability.

Twitter was never a utopia. We've criticized the platform for about as long as it’s been around. Still, Twitter did deserve recognition from time to time for vociferously fighting for its users’ rights. That changed. Musk fired the entire human rights team and laid off staffers in countries where the company previously fought off censorship demands from repressive regimes. Many users left. Today we're joining them. 

"But You're Still on Facebook and TikTok?" 

Yes. And we understand why that looks contradictory. Let us explain. 

EFF exists to protect people’s digital rights. Not just the people who already value our work, have opted out of surveillance, or have already migrated to the fediverse. The people who need us most are often the ones most embedded in the walled gardens of the mainstream platforms and subjected to their corporate surveillance. 

Young people, people of color, queer folks, activists, and organizers use Instagram, TikTok, and Facebook every day. These platforms host mutual aid networks and serve as hubs for political organizing, cultural expression, and community care. Just deleting the apps isn't always a realistic or accessible option, and neither is pushing every user to the fediverse when there are circumstances like:

  • You own a small business that depends on Instagram for customers.
  • Your abortion fund uses TikTok to spread crucial information.
  • You're isolated and rely on online spaces to connect with your community.

Our presence on Facebook, Instagram, YouTube, and TikTok is not an endorsement. We've spent years exposing how these platforms suppress marginalized voices, enable invasive behavioral advertising, and flag posts about abortion as dangerous. We’ve also taken action in court, in legislatures, and through direct engagement with their staff to push them to change poor policies and practices.

We stay because the people on those platforms deserve access to information, too. We stay because some of our most-read posts are the ones criticizing the very platform we're posting on. We stay because the fewer steps between you and the resources you need to protect yourself, the better. 

We'll Keep Fighting. Just Not on X

When you go online, your rights should go with you. X is no longer where the fight is happening. The platform Musk took over was imperfect but impactful. What exists today is something else: diminished, and increasingly de minimis.

EFF takes on big fights, and we win. We do that by putting our time, skills, and our members’ support where they will effect the most change. Right now, that means Bluesky, Mastodon, LinkedIn, Instagram, TikTok, Facebook, YouTube, and eff.org. We hope you follow us there and keep supporting the work we do. Our work protecting digital rights is needed more than ever before, and we’re here to help you take back control.

Kenyatta Thomas

Banning New Foreign Routers Mistargets Products to Fix Real Problem

5 days 14 hours ago

On March 23, the FCC issued an update to its Covered List, a list of equipment banned from obtaining the regulatory approval necessary for U.S. sale (and thus effectively a ban on the sale of new devices), to include all new routers produced in foreign countries unless they are specifically given an exception by the Department of Defense (DoD) or DHS. The Commission cited “security gaps in foreign-made routers” leading to widespread cyberattacks as justification for the ban, mentioning the high-profile attacks by Chinese advanced persistent threat actors Volt, Flax, and Salt Typhoon. Although the stated intention is to stem the very real threat of domestic residential routers being commandeered to initiate attacks and act as residential proxies, this sweeping move serves as a blunt instrument that will impact many harmless products. In addition to being far too broad, it won’t even affect many vulnerable devices that are most active in these types of attacks: IoT and connected smart home devices.

Previously, the FCC had changed the Covered List to ban hardware by specific vendors, such as telecom equipment produced by companies Huawei and Hytera in 2021. This new blanket ban, in contrast, affects the importation and sale of almost all new consumer routers. It does not affect consumer routers produced in the United States, like Starlink in Texas. While some of the affected routers will be vulnerable to compromises that hijack the devices and use them for cybercrime and attacks, this ban does not distinguish between companies with a track-record of producing vulnerable products and those without. As a result, instead of incentivizing security-minded production, this will only limit the options consumers have to US-based manufacturers not affected by the ban—even those that lack stellar security reputations themselves.

While the sale of vulnerable routers in the U.S. will not stop, the announcement quoted an Executive Branch determination that foreign-produced routers introduce “a supply chain vulnerability that could disrupt the U.S. economy, critical infrastructure, and national defense.” Yet this move does nothing to address the growing number of connected devices involved in the attacks this ban aims to address. As we have previously pointed out, supply chain attacks have resulted in no-name Android TV boxes preloaded with malware, sold by retail giants like Amazon, fueling the massive Kimwolf and BADBOX 2 fraud and residential proxy botnets. The priority should be banning the specific models and manufacturers we know produce dangerous devices that put purchasers at risk, rather than issuing blanket bans that punish reputable brands that do better.

With the FCC’s top commissioner appointed by the President, this ban comes as other parts of the administration impose tariffs and issue dozens of trade-related executive orders aimed at foreign goods. A few larger companies with pockets deep enough to invest in manufacturing plants within the U.S. may see this as an opportune moment, while others not as well poised to begin U.S. operations may attempt to curry enough favor to be added to the DoD or DHS exception lists. At best, this will result in the immediate effect of an ill-targeted policy that does little to improve domestic cybersecurity posture. At worst, it entrenches existing players and deepens problematic quid-pro-quo arrangements.

American consumers deserve better. They deserve the assurance that the devices they use, whether routers or other connected smart home devices, are built to withstand attacks that put themselves and others at risk, no matter where they are manufactured. For this, a nuanced, careful consideration of products (such as was part of the FCC’s 2023-proposed U.S. Cyber Trust Mark) is necessary, rather than blanket bans.

Bill Budington

Another Court Rules Copyright Can’t Stop People From Reading and Speaking the Law

5 days 16 hours ago

Another court has ruled that copyright can’t be used to keep our laws behind a paywall. The U.S. Court of Appeals for the Third Circuit upheld a lower court’s ruling that it is fair use to copy and disseminate building codes that have been incorporated into federal and state law, even though those codes are developed by private parties who claim copyright in them. The court followed the suggestions EFF and others presented in an amicus brief, and joined a growing list of courts that have placed public access to the law over private copyright holders’ desire for control.

UpCodes created a database of building codes—like the National Electrical Code—that includes codes incorporated by reference into law. ASTM, a private organization that coordinated the development of some of those codes, insists that it retains copyright in them even after they have been adopted into law, and therefore has the right to control how the public accesses and shares them. Fortunately, neither the Constitution nor the Copyright Act support that theory. Faced with similar claims, some courts, including the Fifth Circuit Court of Appeals, have held that the codes lose copyright protection when they are incorporated into law. Others, like the D.C. Circuit Court of Appeals in a case EFF defended on behalf of Public.Resource.Org, have held that, whether or not the legal status of the standards changes once they are incorporated into law, making them fully accessible and usable online is a lawful fair use.

In this case, the Third Circuit found that UpCodes’s copying of the codes was a fair use, in a decision closely following the D.C. Circuit’s reasoning. Fair use turns on four factors listed in the Copyright Act, and the court found that all four favored UpCodes to some degree.

On the first factor, the purpose and character of the use, the court found that UpCodes’s use was “transformative” because it had a separate and distinct purpose from ASTM—informing people about the law, rather than just best practices in the building industry. No matter that UpCodes was copying and disseminating entire safety codes verbatim—using the codes for a different purpose was enough. And UpCodes being a commercial venture didn’t change the outcome either, because UpCodes wasn’t charging for access to the codes.

On the second factor, the nature of the copyrighted work, the Third Circuit joined other appeals courts in finding that laws are facts, and stand at “the periphery of copyright’s core protection.” And this included codes that were “indirectly” incorporated—meaning that they were incorporated into other codes that were themselves incorporated into law.

The third factor looks at the amount and substantiality of the material used. The court said that UpCodes could not have accomplished its purpose—providing access to the current binding laws governing building construction—without copying entire codes, so the copying was justified. Importantly, the court noted that UpCodes was justified in copying optional parts of the codes as well as “mandatory” sections because both help people understand what the law is.

Finally, the fourth factor looks at potential harm to the market for the original work, balanced against the public interest in allowing the challenged use. The court rejected an argument frequently raised by copyright holders—that harm can be assumed any time materials are posted to the internet for all to access. Instead, the court held that when a use is transformative, a rightsholder has to bring evidence of harm, and that harm will be balanced against the public benefit. Because “enhanced public access to the law is a clear and significant public benefit,” and ASTM hadn’t shown significant evidence that UpCodes had meaningfully reduced ASTM’s revenues, the fourth factor was at least neutral. It didn’t matter to the court that ASTM offered to provide copies of legally binding standards to the public on request, because “the mere possibility of obtaining a free technical standard does not nullify the public benefits associated with enhanced access to law.”

This is a good result that will expand the public’s access to the laws that bind us—something that’s more important than ever given recent assaults on the rule of law. In the future, we hope that courts will recognize that codes and standards lose copyright when they are incorporated into law, so that people don’t have to spend years and legal fees litigating fair use just to exercise their rights.

Mitch Stoltz

👁 Selling Mass Surveillance | EFFector 38.7

5 days 17 hours ago

Time and time again, we've seen police surveillance suffer from 'mission creep'—technology sold as a way to prevent heinous crimes ends up enforcing traffic violations, tracking protestors, and more. In our latest EFFector newsletter, we're diving into this troubling pattern and sharing all the latest in the fight for privacy and free speech online.

JOIN OUR NEWSLETTER

For over 35 years, EFFector has been your guide to understanding the intersection of technology, civil liberties, and the law. This week's issue covers the urgent need to reform NSA spying; a victory for internet access in the Supreme Court; and how license plate readers are normalizing mass surveillance.

Prefer to listen in? EFFector is now available on all major podcast platforms. This time, we're chatting with EFF Privacy Litigation Director Adam Schwartz about some of the recent technologies we've seen suffer from "mission creep." And don't miss the EFFector news quiz! You can find the episode and subscribe on your podcast platform of choice.

[Embedded podcast player] Privacy info: this embed will serve content from simplecast.com.

Want to help us push back against mass surveillance? Sign up for EFF's EFFector newsletter for updates, ways to take action, and new merch drops. You can also fuel the fight for privacy and free speech online when you support EFF today!

Christian Romero

Digital Hopes, Real Power: How the Arab Spring Fueled a Global Surveillance Boom

6 days 1 hour ago

This is the third installment of a blog series reflecting on the global digital legacy of the 2011 Arab uprisings. You can read the first post here, and the second here.

When people recall the 2011 uprisings across the Middle East and North Africa (MENA), they often picture crowded squares, raised phones, and the feeling that the internet had finally shifted the balance of power toward ordinary people. But the past decade and a half is also a story about how governments, companies, and platforms turned those same tools into the backbone of a powerful state surveillance apparatus.

For activists, journalists, and everyday users, that now means living with a constant threat. The phone in your pocket, the platforms you organize on, and the systems you rely on for safety and connection can be weaponized at the flip of a switch. A global surveillance industry has treated repression by many MENA governments as a growth opportunity, and the tactics refined there now shape digital authoritarianism worldwide. This essay traces how that shift unfolded: security agencies upgraded older systems of repression with new surveillance tools and permanent monitoring infrastructure; cybercrime laws and mercenary spyware markets turned digital control into standard operating procedure; and biometrics, facial recognition, and ‘smart city’ projects laid the groundwork for AI‑driven surveillance that now shapes protests, borders, and everyday life far beyond the region.

Remembering the Arab Spring means seeing the events of 2011 as both a remarkable moment of movement history when people leveraged networked tools in their fight for freedom and the beginning of a long, grinding effort to turn those same tools into mechanisms of state control.

Old‑School Repression, New‑School Tools

Long before Facebook and Twitter, regimes in countries like Egypt and Syria already knew how to crush dissent. They leaned on informant networks, physical surveillance, and wiretaps, backed by emergency laws that let security agencies monitor and detain critics with almost no restraint. Research on the use of surveillance technology in MENA shows that, even before the Arab Spring, states were layering early digital tools like internet monitoring, deep packet inspection, and interception centers on top of that older machinery of control.

At the same time, connectivity was racing ahead. Cheap smartphones and social media suddenly let people share information at scale, coordinate protests, and broadcast abuses in real time. In 2011, EFF described both the excitement around “Facebook revolutions” and the early signs that governments were scrambling to upgrade their capacity to watch and disorganize popular dissent.

After the uprisings, Western critics endlessly debated how much credit to give social media itself. Meanwhile, in the background, security agencies across several MENA states reached a much simpler conclusion: if networked communication could help topple a dictator, they needed to embed themselves deep inside those networks. Analyses of the rise of digital authoritarianism in MENA show how quickly officials pivoted from being surprised by online organizing to building systems to monitor and pre‑empt it.

In the years after 2011, governments across the region poured money into tools that let them systematically watch what people said and did on major platforms. Foreign vendors set up monitoring centers and interception systems that let security agencies block tens of thousands of sites, scrape and analyze social media at scale, monitor activist pages and online communities, and track activists in real time. They built a new, pre‑emptive model of digital control, one that assumes the state should see as much as possible, as early as possible.

As we noted in 2011, exporting permanent surveillance infrastructure to already‑abusive governments doesn’t “modernize” public safety; it locks in an architecture of control that is primed to abuse dissidents, journalists, and marginalized communities.

Domestic Lawfare and Cyber-Mercenaries

After the uprisings, a number of governments also rewrote the rules that govern online life. Cybercrime laws, “fake news” provisions, and overbroad public‑order and ‘morality’ offences gave prosecutors and security agencies legal cover to act with impunity. Governments in Saudi Arabia, Tunisia, Jordan, and Egypt combined counterterrorism, cybercrime, defamation, and protest laws into a legal thicket designed to make online dissent feel dangerous and costly. Morality laws and cybercrime provisions are used to target queer and trans people based on identity and expression.​

At the United Nations, a new global cybercrime convention now risks baking this logic into international law. The convention was adopted by the UN General Assembly in late 2024, despite serious human rights concerns raised by civil society. Echoing our partners, EFF warned at the time that the UN cybercrime draft convention remained too flawed to adopt and urged states to reject the draft language because it legitimized expansive surveillance powers and criminalized legitimate expression, security research, and everyday digital practices around the world. On paper, these instruments gesture toward “public safety” objectives; in practice, they function as pathways for state security agencies to monitor, prosecute, and silence the communities most at risk. For state-targeted communities, that makes being visible online a calculated risk, not a neutral choice.

Criminal codes are only half the story; mercenary tech is the other. As governments worldwide looked for ways to outpace their critics, a parallel market emerged to help them infiltrate and take over devices. Companies like NSO Group marketed Pegasus and similar tools as off‑the‑shelf capabilities for governments that wanted to hack a target’s cellphones or other devices to read messages, turn on microphones, and monitor entire social networks while bypassing the courts. 

In 2019, UN Special Rapporteur David Kaye called for a global moratorium on the sale and transfer of private surveillance tools until real, enforceable safeguards exist. Two years later, forensic work by Amnesty and media partners showed how the same spyware used to hack phones of Palestinian human‑rights defenders was used to surveil journalists, activists, lawyers, and political opponents across dozens of countries.

Regional groups responded by demanding an end to the sale of surveillance technology to autocratic governments and security agencies, arguing that you cannot keep selling “lawful intercept” tools into systems where law itself is an instrument of repression. Commercial spyware is at the center of digital repression, not at its margins. Surveillance vendors are not neutral suppliers. Safeguards remain weak, fragmented, or nonexistent in most of the countries buying these tools, yet vendors continue seeking new contracts and new militarized “use cases.” Put bluntly, the companies that design, market, and maintain these systems profit from (and help entrench) authoritarian power precisely because their tools enable this kind of control.

Biometrics, Facial Recognition, and AI‑Powered Surveillance Cities

On top of this rapidly intensifying interception and spyware stack, governments and companies began layering biometrics and face recognition into everyday systems, creating pathways for bulk data collection, automated analysis, and risk profiling. In parts of MENA, national ID schemes, border and migration controls, and centralized biometric databases have been rolled out in environments with weak or captured data‑protection laws, making it easy to link people’s movements, services, and political activity to a single, persistent identifier.​

Humanitarian programs are not exempt from this pattern. In Jordan, Syrian refugees have been required to submit iris scans and biometric data to access cash assistance and food, turning “consent” into a precondition for survival. When access to aid depends on enrollment in centralized biometric systems, any breach, misuse, or repurposing of that data can have severe, life‑altering consequences for people who have no realistic way to opt out. Investigations into surveillance‑tech firms complicit in abuses in MENA show that vendors profit from supplying biometric and surveillance tools for migration management and internal security, even when those tools are used in discriminatory or abusive ways.​

Like elsewhere, mass surveillance technologies in MENA were first piloted on people who were already criminalized or made vulnerable by poverty. But their use quickly expanded from narrow, security‑framed deployments to routine use in city streets. As hardware sensors, cameras, and data storage got cheaper, “smart city” surveillance systems promised seamless security and services, and it became easier and less politically contentious to keep these systems running everywhere, all the time.​

Unlike targeted hacking tools, these broad, city‑wide surveillance infrastructures erase any practical line between people under investigation and the broad public, normalizing bulk, indiscriminate monitoring of public space and everyday movement. In the Gulf, facial recognition and dense sensor networks are increasingly built into high‑profile “smart city” and mega‑project plans that lean heavily on biometric and AI‑driven monitoring. These are security‑first development projects where biometric and sensor infrastructures are designed from the outset to embed policing, migration control, and commercial tracking into the urban fabric. In this vision of the Gulf’s “smart city” future—often sold as seamless services and digital opportunity—“smart” is the branding, and pervasive monitoring is the operating principle.​​

EFF has consistently opposed government use of face recognition and biometric surveillance, in some instances calling for outright bans. In contexts that treat peaceful dissent as a security threat, embedding biometric surveillance into everyday infrastructure locks in a balance of power that favors militarized policing and state control. That infrastructure is now the starting point for a new set of risks. Surveillance systems built over the last decade are being repackaged as the foundation for a new generation of “AI‑enabled” defense and security products. 

Companies that once focused on video management or perimeter security now advertise “defense applications” for AI‑driven situational awareness and threat detection, using computer‑vision models to scan camera feeds, compare against existing watchlists, and flag “suspicious” people or behaviors in real time. Drone and sensor platforms are being upgraded with embedded AI that tracks and classifies targets autonomously and with “drone‑based AI threat detection and intelligent situational awareness,” turning aerial surveillance into a continuous data feed for security agencies and militaries. In smart‑city and defense expos from the Gulf to Europe and North America, similar systems are marketed as neutral efficiency upgrades or tools to “protect critical infrastructure,” even where they are explicitly designed to scale up border enforcement, protest surveillance, and internal security operations.

As these systems are folded into AI‑driven defense products, the line between “civilian” infrastructure and militarized surveillance disappears, turning streets, borders, and aid sites into continuous input for security operations. That is the landscape that human rights and accountability efforts now have to confront.

Templates of Control, Networks of Resistance

The patterns established in heavily securitized MENA states after the Arab Spring now shape how states monitor and crush more recent uprisings, from Iran’s use of location data and facial recognition to track down protesters to long‑running crackdowns elsewhere in the region. This model of “digital authoritarianism” built on spyware, data‑hungry ID systems, platform control, and emergency‑style security laws has emerged everywhere from Latin America to Eastern Europe to here in the United States. As the new UN Cybercrime Convention moves toward implementation, its broad offences and surveillance powers risk turning this ad hoc toolkit into a formal template for cross‑border data‑sharing, repression, and an all‑purpose global surveillance instrument.

For people on the ground, none of this is theoretical. Human‑rights defenders, journalists, and ordinary users across the region face arrest, long prison sentences, and exile based on their digital traces. In that context, commercial spyware is not a marginal issue but part of the core machinery of repression. Pegasus has been used to hack journalists’ phones through zero‑click exploits and compromise human‑rights defenders and watchdog organizations themselves, including staff at Amnesty’s Pegasus Project partners and Human Rights Watch. These deployments give practical effect to the “cybercrime” and “terrorism” frameworks described earlier: person‑by‑person campaigns against particular communities, contacts, and networks, rather than “neutral,” generalized security measures.

Under these conditions, everyday security becomes a second job. People describe carrying multiple phones, keeping one for relatively “clean” uses and others for riskier conversations, splitting identities across platforms, using coded language, and moving their organizing off mainstream services when possible. Pushing this burden onto users is a political choice: states, platforms, and vendors could build systems that are safe by design; instead, they externalize risk to the people they watch and punish.

Even against that backdrop, civil society organizations have refused to capitulate to security agencies and vendors. Regional coalitions have demanded strict export controls and outright bans on selling intrusive surveillance tech to autocratic governments. Advocates have also pushed companies to do more than box‑ticking “due diligence.” Work with surveillance‑tech firms in the context of migration and border control has repeatedly shown that most are still far from serious human‑rights assessments, let alone willing to turn down these lucrative contracts.

Many of the same governments that have been critical of others on the issue of human rights have hosted or licensed companies that build these tools, in some cases buying similar capabilities for their own security agencies. European authorities, for instance, have investigated FinFisher’s export of spyware “made in Germany” to Turkey and other non‑EU governments. Meanwhile, the NSO Group has at least 22 Pegasus contracts with security and law‑enforcement agencies in 12 EU countries. This is a transnational industry, not a localized problem.

Against near impossible odds, people continue finding pathways to freedom. The global surveillance sector reinforces the same hierarchies and violence that people have found ways to survive for generations. Queer activists and others at the sharpest edges of this system have had to develop their own forms of resistance, including against biometric and data‑driven targeting. Encryption, circumvention tools, and security training are not silver bullets, but they remain essential for anyone trying to organize, document abuses, or simply exist online with a bit less risk. Resources like EFF’s Surveillance Self‑Defense are one piece of that ecosystem, alongside trainers and groups who have been doing this work on the ground for years.​

Defending the Future of Digital Dissent

The Arab Spring is often remembered through images of packed squares and hopeful tweets. But contending with its aftermath means confronting the surveillance architecture built in its shadow: laws that turn online speech into a crime, spyware and biometric systems that turn phones and faces into tracking beacons, and platform practices that routinely sacrifice the people most at risk. None of that is inevitable, and none of it is confined to one part of the world.

Accountability has to reach both governments and the companies that profit from arming them with these tools. That means pushing for far stronger limits on how surveillance tech is built, sold, and deployed; demanding meaningful transparency when these systems are used; and defending the tools people rely on to communicate and organize safely, including robust encryption and secure channels. It also means taking direction from the people and communities who have been navigating and resisting this landscape for years.

Surveillance itself is transnational: tools, playbooks, and data move across borders as easily as money. And so we, too, continue our work, documenting abuses, sharing security knowledge, and collectively organizing against these violent systems.

This is the third installment of a blog series reflecting on the global digital legacy of the 2011 Arab uprisings. Read the rest of the series here.

Sarah Hamid

EU Parliament Blocks Mass-Scanning of Our Chats—What's Next?

6 days 14 hours ago

The EU’s so-called Chat Control plan, which would mandate mass scanning and other encryption-breaking measures, has had some good news lately. EU member states gave up the most controversial idea, a forced requirement to scan encrypted messages. And now, another win for privacy: the EU Parliament has dealt a real blow to voluntary mass-scanning of chats by voting not to prolong an interim derogation from e-Privacy rules in the EU. These rules temporarily allowed service providers to scan private communications.

But no one should celebrate just yet. We said there is more to it, and voluntary scanning is a key part. Unlike in the U.S., where there is no comprehensive federal privacy law, the general and indiscriminate scanning of people’s messages is not legal in the EU without a specific legal basis. The e-Privacy derogation law, which gave (limited) cover for such activities, has now expired. Does that mean mass scanning will stop overnight?  

Not really. 

Companies have continued similar scanning practices during past gaps. Google, Meta, Microsoft, and Snap have already signaled in a joint statement that they will “continue to take voluntary action on our relevant Interpersonal Communication Services.” Whether this indicates continued scanning of our private communication is not entirely clear, but what is clear is that such activity would now risk breaching EU law. Then again, lack of compliance with EU data protection and privacy rules is nothing new for big tech in Europe.

Most importantly, the “Chat Control” proposal for mandatory detection of child sexual abuse material (CSAM) is still alive and being negotiated. It has shifted the focus toward so-called risk mitigation measures, such as problematic age verification and voluntary activities. If platforms are expected to adopt these as part of their compliance, they risk no longer being truly voluntary. While mass scanning may be gone on paper, some broader concerns remain.

So, where does this leave us? The immediate priority is to make sure the expired exception for mass scanning is not revived. At the same time, lawmakers need to pull the teeth from the currently negotiated Chat Control proposal by narrowing risk mitigation measures. This means ensuring that age verification does not become a default requirement and “voluntary activities” are not turned into an expectation to scan our communications.   

As we said before, this is a zombie proposal. It keeps coming back and must not be allowed to return through the back door. 

Christoph Schmon

Triple Header for Privacy’s Defender in New York

1 week 3 days ago

You’re invited on a journey inside the privacy battles that shaped the internet. EFF’s Executive Director Cindy Cohn has tangled with the feds, fought for your data security, and argued before judges to protect our access to science and knowledge on the internet.

Join Cindy at three events in New York discussing her bestselling new book: Privacy's Defender: My Thirty-Year Fight Against Digital Surveillance, on sale now. All proceeds from the book benefit EFF. Find the full event details below, and RSVP to let us know if you can make it.

April 20 - With Women in Security and Privacy (WISP)

Join Women in Security and Privacy (WISP) and EFF for a conversation featuring American University Senior Professorial Lecturer Chelsea Horne and EFF Executive Director Cindy Cohn as they dive into data security, Federal access to data, and your digital rights.


Privacy's Defender with WISP
Kennedys
22 Vanderbilt Avenue, Suite 2400, New York, NY 10017
Monday, April 20, 2026
6:00 pm to 8:00 pm
REGISTER NOW


April 21 - With Julie Samuels at Civic Hall

Join Tech:NYC President and CEO Julie Samuels, in conversation with EFF Executive Director Cindy Cohn, for a discussion about Cindy's work, her new book, and what we're all wondering: Can we have private conversations if we live our lives online?


Privacy's Defender at Civic Hall
Civic Hall
124 E 14th St, New York, NY 10003
Tuesday, April 21, 2026
6:00 pm to 9:00 pm
REGISTER NOW


April 23 - With Anil Dash at Brooklyn Public Library

Join antitech Principal & Cofounder Anil Dash, in conversation with EFF Executive Director Cindy Cohn to discuss Cindy's new book: Privacy's Defender: My Thirty-Year Fight Against Digital Surveillance.


Privacy's Defender at Brooklyn Public Library
Brooklyn Public Library - Central Library, Info Commons Lab
10 Grand Army Plz 1st floor, Brooklyn, NY 11238
Thursday, April 23, 2026
6:00 pm to 7:30 pm
REGISTER NOW


"Privacy’s Defender is a compelling account of a life well lived and an inspiring call to action for the next generation of civil liberties champions."
~Edward Snowden, whistleblower; author of Permanent Record

Can't make it? Look for Cindy at a city (or web connection) near you! Find the latest tour dates on the Privacy’s Defender hub or follow EFF for more.

Part memoir and part legal history for the general reader, Privacy’s Defender is a compelling testament to just how much privacy and free expression matter in our efforts to combat authoritarianism, grow democracy, and strengthen human rights. Thank you for being a part of that fight.

Want to support the cause and get a copy of the new book? New or renewing EFF members can preorder one as their annual gift!

Aaron Jue

The FAA’s “Temporary” Flight Restriction for Drones is a Blatant Attempt to Criminalize Filming ICE

1 week 3 days ago

Legal intern Raj Gambhir was the principal author of this post.

The Trump administration has restricted the First Amendment right to record law enforcement by issuing an unprecedented nationwide flight restriction preventing private drone operators, including professional and citizen journalists, from flying drones within half a mile of any ICE or CBP vehicle.

In January, EFF and media organizations including The New York Times and The Washington Post responded to this blatant infringement of the First Amendment by demanding that the FAA lift this flight restriction. Over two months later, we’re still waiting for the FAA to respond to our letter.

The First Amendment guarantees the right to record law enforcement. As we have seen with the extrajudicial killings of George Floyd, Renée Good, and Alex Pretti, capturing law enforcement on camera can drive accountability and raise awareness of police misconduct.

A 21-Month Long “Temporary” Flight Restriction?

The FAA regularly issues temporary flight restrictions (TFRs) to prevent people from flying into designated airspace. TFRs are usually issued during natural disasters, or to protect major sporting events and government officials like the president, and in most cases last mere hours.

Not so with the restriction numbered FDC 6/4375, which started on January 16, 2026. This TFR lasts for 21 months—until October 29, 2027—and covers the entire nation. It prevents any person from flying any unmanned aircraft (i.e., a drone) within 3000 feet, measured horizontally, of any of the “facilities and mobile assets,” including “ground vehicle convoys and their associated escorts,” of the Departments of Defense, Energy, Justice, and Homeland Security. Violators can be subject to criminal and civil penalties, and risk having their drones seized or destroyed.

In practical terms, this TFR means that anyone flying their drone within 3,000 feet (roughly half a mile) of an ICE or CBP agent’s car (a DHS “mobile asset”) is liable to face criminal charges and have their drone shot down. The practical unfairness of this TFR is underscored by the fact that immigration agents often use unmarked rental cars, drive cars without license plates, or switch the license plates of their cars to carry out their operations. Nor do they provide prior warning of those operations.
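To make the geometry concrete, here is a minimal, purely illustrative sketch (all function and variable names are our own, not anything from the FAA's rule text) of what a 3,000-foot horizontal-distance check looks like, using the standard haversine formula:

```python
import math

TFR_RADIUS_FT = 3000          # horizontal radius stated in the TFR
FEET_PER_METER = 3.28084
EARTH_RADIUS_M = 6_371_000    # mean Earth radius

def horizontal_distance_ft(lat1, lon1, lat2, lon2):
    """Great-circle ("horizontal") distance between two coordinates, in feet."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a)) * FEET_PER_METER

def inside_tfr_radius(drone, vehicle):
    """True if the drone is within 3,000 horizontal feet of the vehicle."""
    return horizontal_distance_ft(*drone, *vehicle) <= TFR_RADIUS_FT
```

At this scale, 3,000 feet is only about 0.008 degrees of latitude. Even with this calculation in hand, an operator who does not know where unmarked government vehicles are has no realistic way to determine whether they are inside the radius before flying.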

The TFR is an Unconstitutional Infringement of Free Speech

While the FAA asserts that the TFR is grounded in its lawful authority, the flight restriction not only violates multiple constitutional rights, but also the agency’s own regulations.

First Amendment violation. As we highlighted in the letter, nearly every federal appeals court has recognized the First Amendment right of Americans to record law enforcement officers performing their official duties. By subjecting drone operators to criminal and civil penalties, along with the potential destruction or seizure of their drone, the TFR punishes—without the required justifications—lawful recording of law enforcement officers, including immigration agents.  

Fifth Amendment violation. The Fifth Amendment guarantees the right to due process, which includes being given fair notice before being deprived of liberty or property by the government. Under the flight restriction, advance notice isn’t even possible. As discussed above, drone operators can’t know whether they are within 3000 horizontal feet of unmarked DHS vehicles. Yet the TFR allows the government to capture or even shoot down a drone if it flies within the TFR radius, and to impose criminal and civil penalties on the operator.

Violations of FAA regulations. In issuing a TFR, the FAA’s own regulations require the agency to “specify[] the hazard or condition requiring” the restriction. Furthermore, the FAA must provide accredited news representatives with a point of contact to obtain permission to fly drones within the restricted area. The FAA has satisfied neither of these requirements in issuing its nationwide ban on drones getting near government vehicles.

EFF Demands Rescission of the TFR

We don’t believe it’s a coincidence that the TFR was put in place in January 2026, at the height of the Minneapolis anti-ICE protests, shortly after the killing of Renée Good and shortly before the shooting of Alex Pretti. After both of those tragedies, civilian recordings played a vital role in contradicting the government’s false account of the events.

By punishing civilians for recording federal law enforcement officers, the TFR helps to shield ICE and other immigration agents from scrutiny and accountability. It also discourages the exercise of a key First Amendment right. EFF has long advocated for the right to record the police, and exercising that right today is more important than ever.

Finally, while recording law enforcement is protected by the First Amendment, be aware that officers may retaliate against you for exercising this right. Please refer to our guidance on safely recording law enforcement activities.

Update: The Reporters Committee for Freedom of the Press (RCFP) has filed a petition for review in the D.C. Circuit (Levine v. FAA).

Sophia Cope

Tech Nonprofits to Feds: Don’t Weaponize Procurement to Undermine AI Trust and Safety

1 week 3 days ago

While the very public fight continues between the Department of Defense and Anthropic over whether the government can punish a company for refusing to allow its technology to be used for mass surveillance, another agency of the U.S. government is quietly working to ensure that this dispute will never happen again. How? By rewriting government procurement rules.

Using procurement (the processes by which governments acquire goods and services) to accomplish policy goals is a time-honored and often appropriate strategy. The government literally expresses its politics and priorities by deciding where and how it spends its money. To that end, governments can and should give our tax dollars to companies and projects that serve the public interest, such as open-source software development, interoperability, or right to repair. And they should withhold those dollars from those that don’t, like shady contractors with inadequate security systems.

New proposed rules for the principal agency in charge of acquiring goods, property, and services for the federal government, the General Services Administration (GSA), are supposed to be primarily an effort to implement one policy priority: promoting “ideologically neutral” American AI innovation. But the new guidelines do far more than that.

As explained in comments filed today with our partners at the Center for Democracy and Technology, the Protect Democracy Project, and the Electronic Privacy Information Center, the GSA’s guidelines include broad provisions that would make AI tools less safe and less useful. If finally adopted, these provisions would become standard components of every federal contract. You can read the full comments here.

The most egregious example is a requirement that contractors and government service providers must license their AI systems to the government for “all lawful purposes.” Given the government’s loose interpretations of the law, ability to find loopholes to surveil you, and willingness to do illegal spying, we need serious and proactive legal restrictions to prevent it from gobbling up all the personal data it can acquire and using even routine bureaucratic data for punitive ends.

Relatedly, the draft rules require that “AI System(s) must not refuse to produce data outputs or conduct analyses based on the Contractor’s or Service Provider’s discretionary policies.” In other words, if a company’s safety guardrails might prevent responding to a government request, the company must disable those guardrails. Given widespread public concerns about AI safety, it seems misguided, at best, to limit the safeguards a company deems necessary.

There are myriad other problems with the draft rules, such as technologically incoherent “anti-Woke” requirements. But, the overarching problem is clear: much of this proposal would not serve the overall public interest in using American tax dollars to promote privacy, safety, and responsible technological innovation. The GSA should start over.

Corynne McSherry

Double Shot of Privacy's Defender in D.C.

1 week 3 days ago

You’re invited on a journey inside the privacy battles that shaped the internet. EFF’s Executive Director Cindy Cohn has tangled with the feds, fought for your data security, and argued before judges to protect our access to science and knowledge on the internet.

Join Cindy at two events in Washington, D.C. on April 13 and 14 discussing her new book: Privacy's Defender: My Thirty-Year Fight Against Digital Surveillance, on sale now. All proceeds from the book benefit EFF. Find the full event details below, and RSVP to let us know if you can make it.

April 13 - With Gigi Sohn at Busboys & Poets

Join Gigi Sohn, longtime public advocate for universal, open, and affordable networks, in conversation with EFF Executive Director Cindy Cohn for a discussion about Cindy's work, her new book, and what we're all wondering: Can we have private conversations if we live our lives online?

Privacy's Defender at Busboys & Poets
Busboys & Poets - 14th & V
2021 14th St NW, Washington, DC 20009
Monday, April 13, 2026
6:30 pm to 8:30 pm

Register Now

April 14 - With Women in Security and Privacy (WISP)

Join Women in Security and Privacy (WISP) and EFF for a conversation featuring American University Senior Professorial Lecturer Chelsea Horne and EFF Executive Director Cindy Cohn as they dive into data security, Federal access to data, and your digital rights. 

Privacy's Defender with WISP
True Reformer Building - Lankford Auditorium
1200 U St NW, Washington, DC 20009
Tuesday, April 14, 2026
6:00 pm to 8:30 pm

REGISTER NOW

"Privacy’s Defender is a compelling account of a life well lived and an inspiring call to action for the next generation of civil liberties champions."

~Edward Snowden, whistleblower; author of Permanent Record

Can't make it? Look for Cindy at a city (or web connection) near you! Find the latest tour dates on the Privacy’s Defender hub or follow EFF for more.

Part memoir and part legal history for the general reader, Privacy’s Defender is a compelling testament to just how much privacy and free expression matter in our efforts to combat authoritarianism, grow democracy, and strengthen human rights. Thank you for being a part of that fight.

Want to support the cause and get a copy of the new book? New or renewing EFF members can order one as their annual gift!

Aaron Jue

Weakening Speech Protections Will Punish All of Us—Not Just Meta

1 week 4 days ago

Recently, a California Superior Court jury found that Meta and YouTube harmed a user through some of the features they offered. And a New Mexico jury concluded that Meta deceived young users into thinking its platforms were safe from predation. 

It’s clear that many people are frustrated by big tech companies and perhaps Meta in particular. We too have been highly critical of them and have pushed for years to end their harmful corporate surveillance. So it’s not surprising that a jury felt like Mark Zuckerberg and his company, along with YouTube, needed to be held accountable. 

While it would be easy to claim that these cases set a legal precedent that should make social media companies fearful, that’s not exactly true. And that’s actually a good thing for the internet and its users. 

These jury trials were just an early step in a long road through the court system. These cases will now go up on appeal, where the courts’ rulings about the First Amendment and immunity under Section 230 will likely get reconsidered. 

As we have argued many times before, the First Amendment protects both user speech and the choices platforms make on how to deliver that speech (in the same way it protects newspapers' right to curate their editorial pages as they see fit). Features on social media sites that are designed to connect users cannot be separated from the users’ speech, which is why courts have repeatedly held that these features are indeed protected. 

So while it may be tempting to celebrate these juries’ decisions as a "win" against big tech, in fact the ramifications of lowering First Amendment and immunity standards on other speakers—ones that members of the public actually like, and do not want to punish—are bad. We can’t create less protective speech rules for Meta and Google alone just because we want them held accountable for something else.

As we have often said, much of the anger against these companies arises from people rightfully feeling that these companies harvest and exploit their data, and monetize their lives for crass economic reasons. We therefore continue to urge Congress to pass a comprehensive national privacy law with a private right of action to address these core concerns.

David Greene

A Baseless Copyright Claim Against a Web Host—and Why It Failed

1 week 4 days ago

Copyright law is supposed to encourage creativity. Too often, it’s used to extract payouts from others.

Higbee & Associates, a law firm known for sending copyright demand letters to website owners, targeted May First Movement Technology, accusing it of infringing a photograph owned by Agence France-Presse (AFP). The claim was baseless. May First didn’t post the photo. It didn’t even own the website where the photo appeared.

May First is a nonprofit membership organization that provides web hosting and technical infrastructure to social justice groups around the world. The allegedly infringing image was posted years ago by one of May First’s members, a human rights group based in Mexico. When May First learned about the copyright complaint, it ensured that the group removed the image.

That should have been the end of it. Instead, the firm demanded payment.

So EFF stepped in as May First’s counsel and explained why AFP and Higbee had no valid claim. After receiving our response, Higbee backed down.

This outcome is a reminder that targets of copyright demands often have strong defenses—especially when someone else posted the material.

Hosting Content Isn’t the Same as Publishing It

Copyright law treats those who create or control content differently from those who simply provide the tools or infrastructure for others to communicate.

In this case, May First provided hosting services but didn’t post the photo. Courts have long recognized that service providers aren’t direct infringers when they merely store material at the direction of users. In those cases, service providers lack “volitional conduct”—the intentional act of copying or distributing the work.

Copyright law also recognizes that intermediaries can’t realistically police everything users upload. That’s why legal protections like the Digital Millennium Copyright Act safe harbors exist. Even outside those safe harbors, courts still shield service providers from liability when they promptly respond to notices.

May First did exactly what the law expects: it notified its member, and the image came down.

A Claim That Should Have Been Withdrawn Much Sooner

The troubling part of this story isn’t just that a demand was sent. It’s that Higbee and AFP continued to demand money and threaten litigation after May First explained that it was merely a hosting provider and had the image removed.

In other words, the claim was built on shaky legal ground from the start. Once May First explained its role, Higbee should have withdrawn its demand. Individuals and small nonprofits shouldn’t need lawyers just to stop aggressive copyright shakedowns.

Statutory Damages Fuel Copyright Abuse

This isn’t an isolated case—it’s a predictable result of copyright law’s statutory damages regime.

Statutory damages can reach $150,000 per work, regardless of actual harm. That enormous leverage incentivizes firms like Higbee to send mass demand letters seeking quick settlements. Even meritless claims can generate revenue when recipients are too afraid, confused, or resource-constrained to fight back.

This hits community organizations, independent publishers, and small service providers that don’t have in-house legal teams especially hard. Faced with the threat of ruinous statutory damages, many just pay what is demanded.

That’s not how copyright law should work.

Know Your Rights

If you receive a copyright demand based on material someone else posted, don’t assume you’re liable.

You may have defenses based on:

  • Your role as a hosting or service provider
  • Lack of volitional conduct
  • Prompt removal of the material after notice
  • The statute of limitations
  • The copyright owner’s failure to timely register the work
  • The absence of actual damages

Every situation is different, but the key point is this: a demand letter is not the same as a valid legal claim.

Standing Up to Copyright Trolls

May First stood its ground, and Higbee abandoned its demand after we explained the law.

But the bigger problem remains. Copyright’s statutory damages framework enables aggressive enforcement tactics that target the wrong parties and chill lawful online activity.

Until lawmakers fix these structural incentives, organizations and individuals will keep facing pressure to pay up—even when they’ve done nothing wrong.

If you get one of these demand letters, remember: you may have more rights than it suggests.

Betty Gedlu
EFF's Deeplinks Blog: Noteworthy news from around the internet