We Need You: Our Privacy Cannot Afford a Clean Extension of Section 702

4 hours 51 minutes ago

We go through this every couple of years: Section 702 of the Foreign Intelligence Surveillance Act (FISA), which authorizes the warrantless collection of Americans’ communications with foreign persons overseas, is up for renewal. As always, Congress can reauthorize it with or without changes, or just let it expire. We know, we know, it’s a pain to have to do this every few years, but it gives us a chance to lift the hood of this behemoth tool of government surveillance and tinker with how it works. That’s why it’s so important right now to urge your Member of Congress not to pass any bill that reauthorizes Section 702 without substantial reforms.

Take action

TELL CONGRESS: 702 Needs Reform

Section 702 is rife with problems, loopholes, and compliance issues that need fixing. The National Security Agency (NSA) collects full conversations being conducted by surveillance targets overseas and stores them, allowing the Federal Bureau of Investigation (FBI) to operate in a “finders keepers” mode of surveillance—they reason that it’s already collected, so why can’t they look at those conversations? From there, the FBI can query and even read the U.S. side of those communications without a warrant. The problem is, people who have been spied on by this program won’t even know and have very few ways of finding out. EFF and other civil liberties advocates have been trying for years to learn when data collected through Section 702 is used as evidence against people in court.

There’s simply no excuse for any Member of Congress to support a "clean" reauthorization of Section 702. Anyone who votes to do so does not take your privacy seriously. Full stop.  

The intelligence community and its defenders in Congress, as always, seem more interested in defending their rights to read your private communications than in protecting your right to privacy. It’s not really a compromise between safety and privacy if it's always your privacy that gets sacrificed. Now, we’re drawing a line in the sand: Congress cannot pass a clean extension.  

Use this EFF tool to write to your Member of Congress and tell them not to pass a clean reauthorization of Section 702.  

Take action

TELL CONGRESS: 702 Needs Reform

Matthew Guariglia

Yikes, Encryption’s Y2K Moment is Coming Years Early

21 hours 10 minutes ago

Google moved up its estimated deadline for quantum preparedness in cryptography to 2029—only 33 months from now. That’s earlier than previous deadlines, and the company proposed the new post-quantum migration deadline because of two new papers that represent a big jump in the state of the technology. It’s ahead of schedule, but not altogether unexpected. Cryptographers and engineers have been working on this for years, and as the deadline gets closer, it’s not surprising to see more precise timeline estimates emerge.

The preparation for the Y2K bug is not a perfect analogy, but it is a useful one. Like Y2K, if systems are not updated in time, anyone with a powerful enough quantum computer will be able to more easily insert malware into the core systems of a computer, and to fake authentication and impersonate others merely by observing network traffic. These are the threats whose mitigation timelines have been moved up.

But unlike Y2K, there’s a second sort of attack that we already need to be prepared for: quantum computers will be able to decrypt years of captured messages that were sent over encrypted messaging platforms before those platforms updated to quantum-proof encryption. That type of attack has been the main focus of engineering efforts so far, and mitigation is well under way, since anything sent before the upgrade might eventually be compromised.

Fortunately, not all cryptography is broken by quantum computers. Notably, symmetric encryption is quantum resistant. That means that if you have disk encryption turned on, you shouldn’t have to worry about quantum computers breaking into your phone, as long as your system’s keys are long enough: Grover’s algorithm effectively halves a symmetric key’s strength, so a 256-bit key still offers a comfortable 128 bits of security. The problem is how you get the keys to do that encryption, and how you authenticate software on your device and in the cloud.

Engineers: Time to Lock In

If your work touches on any sort of cryptographic deployment, you’re hopefully already working on the post-quantum transition. If not, you really should be; there are quite a few relevant posts and updates with more information about what this news means for you. Your key agreement systems should be upgraded soon, if they’re not already, because of store-now-decrypt-later attacks. Now it’s time to prepare for attacks on authentication, such as forged signatures, as well.

In some cases, you may need to wait on others to finish their work first. If you’re using NGINX to host websites on Ubuntu, for example, the security settings you need to upgrade key agreement were just released in version 26.04. Updates are rolling out, so keep checking in and upgrade your systems as soon as you’re able to.
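
To make that concrete, here’s a minimal sketch of what the server-side change can look like. This is illustrative, not a drop-in recipe: it assumes NGINX built against a TLS library that supports the hybrid post-quantum group X25519MLKEM768 (such as OpenSSL 3.5 or later), and the exact group names available depend on your build and distribution:

    # nginx.conf fragment (illustrative sketch)
    # Offer the hybrid post-quantum key agreement group first,
    # with a classical fallback for clients that can't use it yet.
    ssl_protocols TLSv1.3;
    ssl_ecdh_curve X25519MLKEM768:X25519;

Clients that support the hybrid group get post-quantum key agreement; everyone else falls back to classical X25519, so nothing breaks during the rollout.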

Users: Stay Updated, Check on Your Chats

But if you’re not in any position to be updating software or hardware, there may be some additional steps you can take to make sure you're as protected as possible. You’ll want to get the latest post-quantum protections as soon as they're available, so if you don't already have a habit of applying software updates in a timely manner, now’s a good time to start.

If you want to know whether the website you’re using or the encrypted messaging app you’re chatting over will leak its data in a few years to anyone storing traffic now, you can search for its name with the word "quantum." The engineers are usually pretty proud of their work and have announced their post-quantum support (like what we’ve seen from Signal and iMessage). If you can’t find that information, you may want to be more careful about what you say over the internet, or switch the tools you’re using. Those are the big areas to worry about now, before quantum computers are actually here, because they could result in the mass leakage of old messages.
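
If you’d rather test a website directly than search for announcements, one rough check (assuming you have OpenSSL 3.5 or later installed, with ML-KEM support) is to offer the server only the hybrid post-quantum group and see whether the handshake succeeds; example.com below is a placeholder for the site you want to check:

    # If the handshake completes, the server supports post-quantum
    # key agreement; if it errors out, it likely does not yet.
    openssl s_client -connect example.com:443 \
        -groups X25519MLKEM768 -brief </dev/null

A successful connection only tells you about the key agreement used in transit; it says nothing about how the service stores or processes your data afterward.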

The new deadline means that some technologies are simply not going to make it in time and will have to be left by the wayside, like trusted execution environments (TEEs), due to the slower speed of hardware deployments. TEEs are how companies do private processing on user data in the cloud, and they’re particularly relevant to AI offerings. 

Even now, though they offer more protection than processing data in the clear, TEEs are not as secure as homomorphic encryption or doing the processing on device. Post-quantum, the security level gets much closer to computation on cleartext, and even with strong user controls, that makes it way too easy to accidentally backdoor your own encrypted chats. If you’re worried about the contents of messages in an encrypted chat being exposed, you’ll probably want to completely avoid using AI features that might leak that content, such as summarization of recent chat history and notifications, and reply composition assistance. 

How’s the Transition Going So Far?

The work to update the world to post-quantum is well under way. NIST finalized the standards for post-quantum cryptographic algorithms back in 2024. The larger platforms, websites, and hosting providers have already updated their algorithms, so even now, you’re probably already using post-quantum algorithms to access some of the internet. Measurements vary pretty widely, but up to about 4 in 10 websites currently support a post-quantum key exchange.

There’s still some work to be done in figuring out how to make the needed changes—for example, the way you find out a website’s public key to make HTTPS possible is being reworked to make room for larger signatures. Some technologies are just coming to market, like the post-quantum root of trust available now in some Chromebooks. In practice, this means that as you think about replacing your current devices in the next few years, you may want to check if you’re picking up hardware that has post-quantum support, if those specific protections are required for your threat model.

For the areas that still need updating, how much can we expect to actually get ready by the new deadline? It’s likely that not every cryptographically capable device and deployment will be ready in time, and hardware with hard-coded certificates will probably be the last to update. We saw that happen when SHA-1 was deprecated; point-of-sale systems in particular were late adopters. While governments and large companies with quantum computers may not be interested in stealing money from cash registers, they will be interested in accessing secrets about people’s private lives. That’s why it’s so important that everyone does their part to upgrade, to protect the details of private communications and browsing.

And there’s a good chance that older devices that won’t receive quantum-resistant updates were already vulnerable to some other attack. Quantum computation is just one type of attack on cryptography, notable for the scale of migration required and for how every public-key cryptosystem and authentication scheme has to do the work to prepare. That’s not a difference in kind; it’s a difference in scale, and some systems will inevitably be left behind.

Quantum preparedness hits different industries and services in different ways, but services that handle communications and financial information are particularly susceptible to risk, and need to act quickly to protect the privacy and security of billions of people.

Erica Portnoy

Comparison Shopping Is Not a (Computer) Crime

1 day 1 hour ago

As long as people have had more than one purchasing option, they’ve been comparing those options and looking for bargains. Online shoppers are no exception; in fact, one of the potential benefits of the internet is that it expands our options for everything from car rentals to airline tickets to dish soap. New AI tools can make the process even easier. These tools could provide some welcome relief for consumers facing sky-high prices that many cannot afford.

Unfortunately, Amazon is trying to block these helpful new tools, which can steer shoppers towards competitors. Taking a page from Facebook and Ryanair, Amazon is trying to use computer crime laws to do it.

Amazon’s target is Perplexity, which makes an AI-enabled web browser, called Comet, that allows users to browse the web as they normally would, but can also perform certain actions on the user’s behalf. For example, a user could ask Comet to find the best price on a 24-pack of toilet paper, and if satisfied with the results, have the browser order it. Amazon claims that Perplexity violated the Computer Fraud and Abuse Act (CFAA) by building a tool that helps users access information on Amazon and engage with the site.

Unfortunately, a federal district court agreed. The court’s fundamental mistake: relying on the Ninth Circuit’s misguided decision in Facebook v. Power Ventures, rather than that court’s much better and more applicable reasoning in hiQ Labs v. LinkedIn.

Perplexity has appealed to the Ninth Circuit. As we explain in an amicus brief filed in support, the district court’s mistake, if affirmed, could lead to myriad unintended consequences. Overbroad readings of the CFAA have undermined research, security, competition, and innovation. For years, we’ve worked to limit its scope to Congress’s original intention: actual hacking that bypasses computer security. It should have nothing to do with Amazon’s claims here, not least because most of Amazon’s website is publicly available.

The court’s approach would be especially dangerous for journalists and academic researchers. Researchers often create a variety of testing accounts. For example, if they’re researching how a service displays housing offers, they may create separate accounts associated with different race, gender, or language settings. These sorts of techniques may be adversarial to the company, but they shouldn’t be illegal. Yet according to the court’s opinion, a company that disagrees with this sort of research doesn’t have to stop at banning the researchers from its site; it can render that research criminal just by sending a letter notifying the researchers that they’re not authorized to use the service in this way.

A broad reading of the CFAA in this case would also undermine competition by enabling companies to block data scraping, cutting off one of the key ways tools that compare prices and features gather their data.

The Ninth Circuit should follow Van Buren’s lead and interpret the CFAA narrowly, as Congress intended. Website owners do not need new shields against independent accountability.

Related Cases: Facebook v. Power Ventures
Corynne McSherry

EFF is Leaving X

1 day 2 hours ago

After almost twenty years on the platform, EFF is logging off of X. This isn’t a decision we made lightly, but it might be overdue. The math hasn’t worked out for a while now.

The Numbers Aren’t Working Out

We posted to Twitter (now known as X) five to ten times a day in 2018. Those tweets garnered somewhere between 50 and 100 million impressions per month. By 2024, our 2,500 X posts generated around 2 million impressions each month. Last year, our 1,500 posts earned roughly 13 million impressions for the entire year. To put it bluntly, an X post today receives less than 3% of the views a single tweet delivered seven years ago. 

We Expected More

When Elon Musk acquired Twitter in October 2022, EFF was clear about what needed fixing.

We called for: 

  • Transparent content moderation: Publicly shared policies, clear appeals processes, and renewed commitment to the Santa Clara Principles
  • Real security improvements: Including genuine end-to-end encryption for direct messages
  • Greater user control: Giving users and third-party developers the means to control the user experience through filters and interoperability.

Twitter was never a utopia. We've criticized the platform for about as long as it’s been around. Still, Twitter did deserve recognition from time to time for vociferously fighting for its users’ rights. That changed. Musk fired the entire human rights team and laid off staffers in countries where the company previously fought off censorship demands from repressive regimes. Many users left. Today we're joining them. 

"But You're Still on Facebook and TikTok?" 

Yes. And we understand why that looks contradictory. Let us explain. 

EFF exists to protect people’s digital rights. Not just the people who already value our work, have opted out of surveillance, or have already migrated to the fediverse. The people who need us most are often the ones most embedded in the walled gardens of the mainstream platforms and subjected to their corporate surveillance. 

Young people, people of color, queer folks, activists, and organizers use Instagram, TikTok, and Facebook every day. These platforms host mutual aid networks and serve as hubs for political organizing, cultural expression, and community care. Just deleting the apps isn't always a realistic or accessible option, and neither is pushing every user to the fediverse when there are circumstances like:

  • You own a small business that depends on Instagram for customers.
  • Your abortion fund uses TikTok to spread crucial information.
  • You're isolated and rely on online spaces to connect with your community.

Our presence on Facebook, Instagram, YouTube, and TikTok is not an endorsement. We've spent years exposing how these platforms suppress marginalized voices, enable invasive behavioral advertising, and flag posts about abortion as dangerous. We’ve also taken action in court, in legislatures, and through direct engagement with their staff to push them to change poor policies and practices.

We stay because the people on those platforms deserve access to information, too. We stay because some of our most-read posts are the ones criticizing the very platform we're posting on. We stay because the fewer steps between you and the resources you need to protect yourself, the better. 

We'll Keep Fighting. Just Not on X

When you go online, your rights should go with you. X is no longer where the fight is happening. The platform Musk took over was imperfect but impactful. What exists today is something else: diminished, and increasingly de minimis.

EFF takes on big fights, and we win. We do that by putting our time, skills, and our members’ support where they will effect the most change. Right now, that means Bluesky, Mastodon, LinkedIn, Instagram, TikTok, Facebook, YouTube, and eff.org. We hope you follow us there and keep supporting the work we do. Our work protecting digital rights is needed more than ever before, and we’re here to help you take back control.

Kenyatta Thomas

Banning New Foreign Routers Mistargets Products to Fix Real Problem

1 day 23 hours ago

On March 23, the FCC issued an update to its Covered List, a list of equipment banned from obtaining the regulatory approval necessary for U.S. sale (and thus effectively a ban on the sale of new devices), to include all new routers produced in foreign countries unless they are specifically given an exception by the Department of Defense (DoD) or the Department of Homeland Security (DHS). The Commission cited “security gaps in foreign-made routers” leading to widespread cyberattacks as justification for the ban, mentioning the high-profile attacks by Chinese advanced persistent threat actors Volt, Flax, and Salt Typhoon. Although the stated intention is to stem the very real threat of domestic residential routers being commandeered to initiate attacks and act as residential proxies, this sweeping move is a blunt instrument that will impact many harmless products. In addition to being far too broad, it won’t even affect many of the vulnerable devices that are most active in these types of attacks: IoT and connected smart home devices.

Previously, the FCC had changed the Covered List to ban hardware from specific vendors, such as telecom equipment produced by Huawei and Hytera in 2021. This new blanket ban, in contrast, affects the importation and sale of almost all new consumer routers. It does not affect consumer routers produced in the United States, like Starlink routers made in Texas. While some of the affected routers will be vulnerable to compromises that hijack the devices and use them for cybercrime and attacks, this ban does not distinguish between companies with a track record of producing vulnerable products and those without. As a result, instead of incentivizing security-minded production, it will only limit consumers’ options to US-based manufacturers not affected by the ban—even those that lack stellar security reputations themselves.

While the sale of vulnerable routers in the U.S. will not stop, the announcement quoted an Executive Branch determination that foreign-produced routers introduce “a supply chain vulnerability that could disrupt the U.S. economy, critical infrastructure, and national defense.” Yet this move does nothing to address the growing number of connected devices involved in the attacks the ban aims to address. As we have previously pointed out, supply chain attacks have resulted in no-name Android TV boxes preloaded with malware, sold by retail giants like Amazon, fueling the massive Kimwolf and BADBOX 2 fraud and residential proxy botnets. The priority should be banning the specific models and manufacturers we know produce dangerous devices that put purchasers at risk, rather than issuing blanket bans that punish reputable brands that do better.

With the FCC’s top commissioner appointed by the President, this ban comes as other parts of the administration impose tariffs and issue dozens of trade-related executive orders aimed at foreign goods. A few larger companies with pockets deep enough to invest in manufacturing plants within the U.S. may see this as an opportune moment, while others not as well poised to begin U.S. operations may attempt to curry enough favor to be added to the DoD or DHS exception lists. At best, this will result in the immediate effect of an ill-targeted policy that does little to improve domestic cybersecurity posture. At worst, it entrenches existing players and deepens problematic quid-pro-quo arrangements.

American consumers deserve better. They deserve the assurance that the devices they use, whether routers or other connected smart home devices, are built to withstand attacks that put themselves and others at risk, no matter where they are manufactured. That requires a nuanced, careful consideration of individual products (such as the U.S. Cyber Trust Mark the FCC proposed in 2023), not blanket bans.

Bill Budington

Another Court Rules Copyright Can’t Stop People From Reading and Speaking the Law

2 days ago

Another court has ruled that copyright can’t be used to keep our laws behind a paywall. The U.S. Court of Appeals for the Third Circuit upheld a lower court’s ruling that it is fair use to copy and disseminate building codes that have been incorporated into federal and state law, even though those codes are developed by private parties who claim copyright in them. The court followed the suggestions EFF and others presented in an amicus brief, and joined a growing list of courts that have placed public access to the law over private copyright holders’ desire for control.

UpCodes created a database of building codes—like the National Electrical Code—that includes codes incorporated by reference into law. ASTM, a private organization that coordinated the development of some of those codes, insists that it retains copyright in them even after they have been adopted into law, and therefore has the right to control how the public accesses and shares them. Fortunately, neither the Constitution nor the Copyright Act support that theory. Faced with similar claims, some courts, including the Fifth Circuit Court of Appeals, have held that the codes lose copyright protection when they are incorporated into law. Others, like the D.C. Circuit Court of Appeals in a case EFF defended on behalf of Public.Resource.Org, have held that, whether or not the legal status of the standards changes once they are incorporated into law, making them fully accessible and usable online is a lawful fair use.

In this case, the Third Circuit found that UpCodes’s copying of the codes was a fair use, in a decision closely following the D.C. Circuit’s reasoning. Fair use turns on four factors listed in the Copyright Act, and the court found that all four favored UpCodes to some degree.

On the first factor, the purpose and character of the use, the court found that UpCodes’s use was “transformative” because it had a separate and distinct purpose from ASTM—informing people about the law, rather than just best practices in the building industry. No matter that UpCodes was copying and disseminating entire safety codes verbatim—using the codes for a different purpose was enough. And UpCodes being a commercial venture didn’t change the outcome either, because UpCodes wasn’t charging for access to the codes.

On the second factor, the nature of the copyrighted work, the Third Circuit joined other appeals courts in finding that laws are facts, and stand at “the periphery of copyright’s core protection.” And this included codes that were “indirectly” incorporated—meaning that they were incorporated into other codes that were themselves incorporated into law.

The third factor looks at the amount and substantiality of the material used. The court said that UpCodes could not have accomplished its purpose—providing access to the current binding laws governing building construction—without copying entire codes, so the copying was justified. Importantly, the court noted that UpCodes was justified in copying optional parts of the codes as well as “mandatory” sections because both help people understand what the law is.

Finally, the fourth factor looks at potential harm to the market for the original work, balanced against the public interest in allowing the challenged use. The court rejected an argument frequently raised by copyright holders—that harm can be assumed any time materials are posted to the internet for all to access. Instead, the court held that when a use is transformative, a rightsholder has to bring evidence of harm, and that harm will be balanced against the public benefit. Because “enhanced public access to the law is a clear and significant public benefit,” and ASTM hadn’t shown significant evidence that UpCodes had meaningfully reduced ASTM’s revenues, the fourth factor was at least neutral. It didn’t matter to the court that ASTM offered to provide copies of legally binding standards to the public on request, because “the mere possibility of obtaining a free technical standard does not nullify the public benefits associated with enhanced access to law.”

This is a good result that will expand the public’s access to the laws that bind us—something that’s more important than ever given recent assaults on the rule of law. In the future, we hope that courts will recognize that codes and standards lose copyright when they are incorporated into law, so that people don’t have to spend years and legal fees litigating fair use just to exercise their rights.

Mitch Stoltz

👁 Selling Mass Surveillance | EFFector 38.7

2 days 2 hours ago

Time and time again, we've seen police surveillance suffer from 'mission creep'—technology sold as a way to prevent heinous crimes ends up enforcing traffic violations, tracking protestors, and more. In our latest EFFector newsletter, we're diving into this troubling pattern and sharing all the latest in the fight for privacy and free speech online.

JOIN OUR NEWSLETTER

For over 35 years, EFFector has been your guide to understanding the intersection of technology, civil liberties, and the law. This week's issue covers the urgent need to reform NSA spying; a victory for internet access in the Supreme Court; and how license plate readers are normalizing mass surveillance.

Prefer to listen in? EFFector is now available on all major podcast platforms. This time, we're chatting with EFF Privacy Litigation Director Adam Schwartz about some of the recent technologies we've seen suffer from "mission creep." And don't miss the EFFector news quiz! You can find the episode and subscribe on your podcast platform of choice.

[Embedded audio player: https://player.simplecast.com/2ff7f80b-1fbe-4013-97b6-43873a6785ac] Privacy info: this embed will serve content from simplecast.com.

Want to help us push back against mass surveillance? Sign up for EFF's EFFector newsletter for updates, ways to take action, and new merch drops. You can also fuel the fight for privacy and free speech online when you support EFF today!

Christian Romero

Digital Hopes, Real Power: How the Arab Spring Fueled a Global Surveillance Boom

2 days 10 hours ago

This is the third installment of a blog series reflecting on the global digital legacy of the 2011 Arab uprisings. You can read the first post here, and the second here.

When people recall the 2011 uprisings across the Middle East and North Africa (MENA), they often picture crowded squares, raised phones, and the feeling that the internet had finally shifted the balance of power toward ordinary people. But the past decade and a half is also a story about how governments, companies, and platforms turned those same tools into the backbone of a powerful state surveillance apparatus.

For activists, journalists, everyday users, that means now living with a constant threat. The phone in your pocket, the platforms you organize on, and the systems you rely on for safety and connection can be weaponized at the flip of a switch. A global surveillance industry has treated repression by many MENA governments as a growth opportunity, and the tactics refined there now shape digital authoritarianism worldwide. This essay traces how that shift unfolded: security agencies upgraded older systems of repression with new surveillance tools and permanent monitoring infrastructure; cybercrime laws and mercenary spyware markets turned digital control into standard operating procedure; and biometrics, facial recognition, and ‘smart city’ projects laid the groundwork for AI‑driven surveillance that now shapes protests, borders, and everyday life far beyond the region. 

Remembering the Arab Spring means seeing the events of 2011 as both a remarkable moment of movement history when people leveraged networked tools in their fight for freedom and the beginning of a long, grinding effort to turn those same tools into mechanisms of state control.

Old‑School Repression, New‑School Tools

Long before Facebook and Twitter, regimes in countries like Egypt and Syria already knew how to crush dissent. They leaned on informant networks, physical surveillance, and wiretaps, backed by emergency laws that let security agencies monitor and detain critics with almost no restraint. Research on the use of surveillance technology in MENA shows that, even before the Arab Spring, states were layering early digital tools like internet monitoring, deep packet inspection, and interception centers on top of that older machinery of control.

At the same time, connectivity was racing ahead. Cheap smartphones and social media suddenly let people share information at scale, coordinate protests, and broadcast abuses in real time. In 2011, EFF described both the excitement around “Facebook revolutions” and the early signs that governments were scrambling to upgrade their capacity to watch and disrupt popular dissent.

After the uprisings, Western critics endlessly debated how much credit to give social media itself, while in the background, security agencies across several MENA states reached a much simpler conclusion: if networked communication could help topple a dictator, then they needed to embed themselves deep inside those networks. Analyses of the rise of digital authoritarianism in MENA show how quickly officials pivoted from being surprised by online organizing to building systems to monitor and pre‑empt it.

In the years after 2011, governments across the region poured money into tools that let them systematically watch what people said and did on major platforms. Foreign vendors set up monitoring centers and interception systems that let security agencies block tens of thousands of sites, scrape and analyze social media at scale, monitor activist pages and online communities, and track activists in real time. They built a new, pre‑emptive model of digital control, one that assumes the state should see as much as possible, as early as possible.

As we noted in 2011, exporting permanent surveillance infrastructure to already‑abusive governments doesn’t “modernize” public safety; it locks in an architecture of control that is primed to abuse dissidents, journalists, and marginalized communities.

Domestic Lawfare and Cyber-Mercenaries

After the uprisings, a number of governments also rewrote the rules that govern online life. Cybercrime laws, “fake news” provisions, and overbroad public‑order and “morality” offenses gave prosecutors and security agencies legal cover to act with impunity. Governments in Saudi Arabia, Tunisia, Jordan, and Egypt combined counterterrorism, cybercrime, defamation, and protest laws into a legal thicket designed to make online dissent feel dangerous and costly. Morality laws and cybercrime provisions are used to target queer and trans people based on identity and expression.

At the United Nations, a new global cybercrime convention now risks baking this logic into international law. The convention was adopted by the UN General Assembly in late 2024, despite serious human rights concerns raised by civil society. Echoing our partners, EFF warned at the time that the UN cybercrime draft convention remained too flawed to adopt and urged states to reject the draft language because it legitimized expansive surveillance powers and criminalized legitimate expression, security research, and everyday digital practices around the world. While on paper, these instruments gesture to “public safety” objectives, in practice they function as pathways for state security agencies to monitor, prosecute, and silence the communities most at risk. For state-targeted communities, that makes being visible online a calculated risk, not a neutral choice.​​

Criminal codes are only half the story; mercenary tech is the other. As governments worldwide looked for ways to outpace their critics, a parallel market emerged to help them infiltrate and take over devices. Companies like NSO Group marketed Pegasus and similar tools as off‑the‑shelf capabilities for governments that wanted to hack a target’s cellphones or other devices to read messages, turn on microphones, and monitor entire social networks while bypassing the courts. 

In 2019, UN Special Rapporteur David Kaye called for a global moratorium on the sale and transfer of private surveillance tools until real, enforceable safeguards exist. Two years later, forensic work by Amnesty and media partners showed how the same spyware used to hack the phones of Palestinian human‑rights defenders was used to surveil journalists, activists, lawyers, and political opponents across dozens of countries.

Regional groups responded by demanding an end to the sale of surveillance technology to autocratic governments and security agencies, arguing that you cannot keep selling “lawful intercept” tools into systems where law itself is an instrument of repression. Commercial spyware is at the center of digital repression, not at its margins. Surveillance vendors are not neutral suppliers. Safeguards remain weak, fragmented, or nonexistent in most of the countries buying these tools, yet vendors continue seeking new contracts and new militarized “use cases.” Put bluntly, the companies that design, market, and maintain these systems profit from (and help entrench) authoritarian power precisely because their products enable this kind of control.

Biometrics, Facial Recognition, and AI‑Powered Surveillance Cities

On top of this rapidly intensifying interception and spyware stack, governments and companies began layering biometrics and face recognition into everyday systems, creating pathways for bulk data collection, automated analysis, and risk profiling. In parts of MENA, national ID schemes, border and migration controls, and centralized biometric databases have been rolled out in environments with weak or captured data‑protection laws, making it easy to link people’s movements, services, and political activity to a single, persistent identifier.​

Humanitarian programs are not exempt from this pattern. In Jordan, Syrian refugees have been required to submit iris scans and biometric data to access cash assistance and food, turning “consent” into a precondition for survival. When access to aid depends on enrollment in centralized biometric systems, any breach, misuse, or repurposing of that data can have severe, life‑altering consequences for people who have no realistic way to opt out. Investigations into surveillance‑tech firms complicit in abuses in MENA show that vendors profit from supplying biometric and surveillance tools for migration management and internal security, even when those tools are used in discriminatory or abusive ways.

Like elsewhere, mass surveillance technologies in MENA were first piloted on people who were already criminalized or made vulnerable by poverty. But their use quickly expanded from narrow, security‑framed deployments to routine use in city streets. As hardware sensors, cameras, and data storage got cheaper, “smart city” surveillance systems promised seamless security and services, and it became easier and less politically contentious to keep these systems running everywhere, all the time.​

Unlike targeted hacking tools, these broad, city‑wide surveillance infrastructures erase any practical line between people under investigation and the broad public, normalizing bulk, indiscriminate monitoring of public space and everyday movement. In the Gulf, facial recognition and dense sensor networks are increasingly built into high‑profile “smart city” and mega‑project plans that lean heavily on biometric and AI‑driven monitoring. These are security‑first development projects where biometric and sensor infrastructures are designed from the outset to embed policing, migration control, and commercial tracking into the urban fabric. In this vision of the Gulf’s “smart city” future—often sold as seamless services and digital opportunity—“smart” is the branding, and pervasive monitoring is the operating principle.​​

EFF has consistently opposed government use of face recognition and biometric surveillance, in some instances calling for outright bans. In contexts that treat peaceful dissent as a security threat, embedding biometric surveillance into everyday infrastructure locks in a balance of power that favors militarized policing and state control. That infrastructure is now the starting point for a new set of risks. Surveillance systems built over the last decade are being repackaged as the foundation for a new generation of “AI‑enabled” defense and security products. 

Companies that once focused on video management or perimeter security now advertise “defense applications” for AI‑driven situational awareness and threat detection, using computer‑vision models to scan camera feeds, compare against existing watchlists, and flag “suspicious” people or behaviors in real time. Drone and sensor platforms are being upgraded with embedded AI that tracks and classifies targets autonomously and with “drone‑based AI threat detection and intelligent situational awareness,” turning aerial surveillance into a continuous data feed for security agencies and militaries. In smart‑city and defense expos from the Gulf to Europe and North America, similar systems are marketed as neutral efficiency upgrades or tools to “protect critical infrastructure,” even where they are explicitly designed to scale up border enforcement, protest surveillance, and internal security operations.

As these systems are folded into AI‑driven defense products, the line between “civilian” infrastructure and militarized surveillance disappears, turning streets, borders, and aid sites into continuous input for security operations. That is the landscape that human rights and accountability efforts now have to confront.

Templates of Control, Networks of Resistance

The patterns established in heavily securitized MENA states after the Arab Spring now shape how states monitor and crush more recent uprisings, from Iran’s use of location data and facial recognition to track down protesters to long‑running crackdowns elsewhere in the region. This model of “digital authoritarianism” built on spyware, data‑hungry ID systems, platform control, and emergency‑style security laws has emerged everywhere from Latin America to Eastern Europe to here in the United States. As the new UN Cybercrime Convention moves toward implementation, its broad offenses and surveillance powers risk turning this ad hoc toolkit into a formal template for cross‑border data‑sharing, repression, and an all‑purpose global surveillance instrument.

For people on the ground, none of this is theoretical. Human‑rights defenders, journalists, and ordinary users across the region face arrest, long prison sentences, and exile based on their digital traces. In that context, commercial spyware is not a marginal issue but part of the core machinery of repression. Pegasus has been used to hack journalists’ phones through zero‑click exploits and compromise human‑rights defenders and watchdog organizations themselves, including staff at Amnesty’s Pegasus Project partners and Human Rights Watch. These deployments give practical effect to the “cybercrime” and “terrorism” frameworks described earlier: person‑by‑person campaigns against particular communities, contacts, and networks, rather than “neutral,” generalized security measures.

Under these conditions, everyday security becomes a second job. People describe carrying multiple phones, keeping one for relatively “clean” uses and others for riskier conversations, splitting identities across platforms, using coded language, and moving their organizing off mainstream services when possible. Pushing this burden onto users is a political choice: states, platforms, and vendors could build systems that are safe by design; instead, they externalize risk to the people they watch and punish.

Even against that backdrop, civil society organizations have refused to capitulate to security agencies and vendors. Regional coalitions have demanded strict export controls and outright bans on selling intrusive surveillance tech to autocratic governments. Advocates have also pushed companies to do more than box‑ticking “due diligence.” Work with surveillance‑tech firms in the context of migration and border control has repeatedly shown that most are still far from serious human‑rights assessments, let alone willing to turn down these lucrative contracts.

Many of the same governments that have been critical of others on the issue of human rights have hosted or licensed companies that build these tools, in some cases buying similar capabilities for their own security agencies. European authorities, for instance, have investigated FinFisher’s export of spyware “made in Germany” to Turkey and other non‑EU governments. Meanwhile, the NSO Group has at least 22 Pegasus contracts with security and law‑enforcement agencies in 12 EU countries. This is a transnational industry, not a localized problem.

Against near impossible odds, people continue finding pathways to freedom. The global surveillance sector reinforces the same hierarchies and violence that people have found ways to survive for generations. Queer activists and others at the sharpest edges of this system have had to develop their own forms of resistance, including against biometric and data‑driven targeting. Encryption, circumvention tools, and security training are not silver bullets, but they remain essential for anyone trying to organize, document abuses, or simply exist online with a bit less risk. Resources like EFF’s Surveillance Self‑Defense are one piece of that ecosystem, alongside trainers and groups who have been doing this work on the ground for years.​

Defending the Future of Digital Dissent

The Arab Spring is often remembered through images of packed squares and hopeful tweets. But contending with its aftermath means confronting the surveillance architecture built in its shadow: laws that turn online speech into a crime, spyware and biometric systems that turn phones and faces into tracking beacons, and platform practices that routinely sacrifice the people most at risk. None of that is inevitable, and none of it is confined to one part of the world.

Accountability has to reach both governments and the companies that profit from arming them with these tools. That means pushing for far stronger limits on how surveillance tech is built, sold, and deployed; demanding meaningful transparency when these systems are used; and defending the tools people rely on to communicate and organize safely, including robust encryption and secure channels. It also means taking direction from the people and communities who have been navigating and resisting this landscape for years.

Surveillance itself is transnational: tools, playbooks, and data move across borders as easily as money. And so we, too, continue our work, documenting abuses, sharing security knowledge, and collectively organizing against these violent systems.

This is the third installment of a blog series reflecting on the global digital legacy of the 2011 Arab uprisings. Read the rest of the series here.

Sarah Hamid

EU Parliament Blocks Mass-Scanning of Our Chats—What's Next?

2 days 23 hours ago

The EU’s so-called Chat Control plan, which would mandate mass scanning and other encryption-breaking measures, has had some good news lately. EU member states gave up the most controversial idea, the forced scanning of encrypted messages. And now, another win for privacy: the EU Parliament has dealt a real blow to voluntary mass scanning of chats by voting not to prolong an interim derogation from e-Privacy rules in the EU. These rules temporarily allowed service providers to scan private communications.

But no one should celebrate just yet. We said there is more to it, and voluntary scanning is a key part. Unlike in the U.S., where there is no comprehensive federal privacy law, the general and indiscriminate scanning of people’s messages is not legal in the EU without a specific legal basis. The e-Privacy derogation law, which gave (limited) cover for such activities, has now expired. Does that mean mass scanning will stop overnight?  

Not really. 

Companies have continued similar scanning practices during past gaps. Google, Meta, Microsoft, and Snap have already signaled in a joint statement that they will “continue to take voluntary action on our relevant Interpersonal Communication Services.” Whether this indicates continued scanning of our private communication is not entirely clear, but what is clear is that such activity would now risk breaching EU law. Then again, lack of compliance with EU data protection and privacy rules is nothing new for big tech in Europe.

Most importantly, the “Chat Control” proposal for mandatory detection of child sexual abuse material (CSAM) is still alive and being negotiated. It has shifted the focus toward so-called risk mitigation measures, such as problematic age verification and voluntary activities. If platforms are expected to adopt these as part of their compliance, they risk no longer being truly voluntary. While mass scanning may be gone on paper, some broader concerns remain.

So, where does this leave us? The immediate priority is to make sure the expired exception for mass scanning is not revived. At the same time, lawmakers need to pull the teeth from the currently negotiated Chat Control proposal by narrowing risk mitigation measures. This means ensuring that age verification does not become a default requirement and “voluntary activities” are not turned into an expectation to scan our communications.   

As we said before, this is a zombie proposal. It keeps coming back and must not be allowed to return through the back door. 

Christoph Schmon

Triple Header for Privacy’s Defender in New York

6 days 19 hours ago

You’re invited on a journey inside the privacy battles that shaped the internet. EFF’s Executive Director Cindy Cohn has tangled with the feds, fought for your data security, and argued before judges to protect our access to science and knowledge on the internet.

Join Cindy at three events in New York discussing her bestselling new book: Privacy's Defender: My Thirty-Year Fight Against Digital Surveillance, on sale now. All proceeds from the book benefit EFF. Find the full event details below, and RSVP to let us know if you can make it.

April 20 - With Women in Security and Privacy (WISP)

Join Women in Security and Privacy (WISP) and EFF for a conversation featuring American University Senior Professorial Lecturer Chelsea Horne and EFF Executive Director Cindy Cohn as they dive into data security, Federal access to data, and your digital rights.


Privacy's Defender with WISP
Kennedys
22 Vanderbilt Avenue, Suite 2400, New York, NY 10017
Monday, April 20, 2026
6:00 pm to 8:00 pm
REGISTER NOW


April 21 - With Julie Samuels at Civic Hall

Join Tech:NYC President and CEO Julie Samuels, in conversation with EFF Executive Director Cindy Cohn for a discussion about Cindy's work, her new book, and what we're all wondering: Can we have private conversations if we live our lives online?


Privacy's Defender at Civic Hall
Civic Hall
124 E 14th St, New York, NY 10003
Tuesday, April 21, 2026
6:00 pm to 9:00 pm
REGISTER NOW


April 23 - With Anil Dash at Brooklyn Public Library

Join antitech Principal & Cofounder Anil Dash, in conversation with EFF Executive Director Cindy Cohn to discuss Cindy's new book: Privacy's Defender: My Thirty-Year Fight Against Digital Surveillance.


Privacy's Defender at Brooklyn Public Library
Brooklyn Public Library - Central Library, Info Commons Lab
10 Grand Army Plz 1st floor, Brooklyn, NY 11238
Thursday, April 23, 2026
6:00 pm to 7:30 pm
REGISTER NOW


"Privacy’s Defender is a compelling account of a life well lived and an inspiring call to action for the next generation of civil liberties champions."
~Edward Snowden, whistleblower; author of Permanent Record

Can't make it? Look for Cindy at a city (or web connection) near you! Find the latest tour dates on the Privacy’s Defender hub or follow EFF for more.

Part memoir and part legal history for the general reader, Privacy’s Defender is a compelling testament to just how much privacy and free expression matter in our efforts to combat authoritarianism, grow democracy, and strengthen human rights. Thank you for being a part of that fight.

Want to support the cause and get a copy of the new book? New or renewing EFF members can preorder one as their annual gift!

Aaron Jue

The FAA’s “Temporary” Flight Restriction for Drones is a Blatant Attempt to Criminalize Filming ICE

6 days 20 hours ago

Legal intern Raj Gambhir was the principal author of this post.

The Trump administration has restricted the First Amendment right to record law enforcement by issuing an unprecedented nationwide flight restriction preventing private drone operators, including professional and citizen journalists, from flying drones within roughly half a mile of any Immigration and Customs Enforcement (ICE) or Customs and Border Protection (CBP) vehicle.

In January, EFF and media organizations including The New York Times and The Washington Post responded to this blatant infringement of the First Amendment by demanding that the FAA lift this flight restriction. Over two months later, we’re still waiting for the FAA to respond to our letter.

The First Amendment guarantees the right to record law enforcement. As we have seen with the extrajudicial killings of George Floyd, Renée Good, and Alex Pretti, capturing law enforcement on camera can drive accountability and raise awareness of police misconduct.

A 21-Month Long “Temporary” Flight Restriction?

The FAA regularly issues temporary flight restrictions (TFRs) to prevent people from flying into designated airspace. TFRs are usually issued during natural disasters, or to protect major sporting events and government officials like the president, and in most cases last mere hours.

Not so with the restriction numbered FDC 6/4375, which started on January 16, 2026. This TFR lasts for 21 months—until October 29, 2027—and covers the entire nation. It prevents any person from flying any unmanned aircraft (i.e., a drone) within 3,000 feet, measured horizontally, of any of the “facilities and mobile assets,” including “ground vehicle convoys and their associated escorts,” of the Departments of Defense, Energy, Justice, and Homeland Security. Violators can be subject to criminal and civil penalties, and risk having their drones seized or destroyed.

In practical terms, this TFR means that anyone flying their drone within roughly half a mile of an ICE or CBP agent’s car (a DHS “mobile asset”) is liable to face criminal charges and have their drone shot down. The practical unfairness of this TFR is underscored by the fact that immigration agents often use unmarked rental cars, use cars without license plates, or switch the license plates of their cars to carry out their operations. Nor do they provide prior warning of those operations.

The TFR is an Unconstitutional Infringement of Free Speech

While the FAA asserts that the TFR is grounded in its lawful authority, the flight restriction not only violates multiple constitutional rights, but also the agency’s own regulations.

First Amendment violation. As we highlighted in the letter, nearly every federal appeals court has recognized the First Amendment right of Americans to record law enforcement officers performing their official duties. By subjecting drone operators to criminal and civil penalties, along with the potential destruction or seizure of their drone, the TFR punishes—without the required justifications—lawful recording of law enforcement officers, including immigration agents.  

Fifth Amendment violation. The Fifth Amendment guarantees the right to due process, which includes being given fair notice before being deprived of liberty or property by the government. Under the flight restriction, advance notice isn’t even possible. As discussed above, drone operators can’t know whether they are within 3,000 horizontal feet of unmarked DHS vehicles. Yet the TFR allows the government to capture or even shoot down a drone if it flies within the TFR radius, and to impose criminal and civil penalties on the operator.

Violations of FAA regulations. In issuing a TFR, the FAA’s own regulations require the agency to “specify[] the hazard or condition requiring” the restriction. Furthermore, the FAA must provide accredited news representatives with a point of contact to obtain permission to fly drones within the restricted area. The FAA has satisfied neither of these requirements in issuing its nationwide ban on drones getting near government vehicles.

EFF Demands Rescission of the TFR

We don’t believe it’s a coincidence that the TFR was put in place in January 2026, at the height of the Minneapolis anti-ICE protests, shortly after the killing of Renée Good and shortly before the shooting of Alex Pretti. After both of those tragedies, civilian recordings played a vital role in contradicting the government’s false account of the events.

By punishing civilians for recording federal law enforcement officers, the TFR helps to shield ICE and other immigration agents from scrutiny and accountability. It also discourages the exercise of a key First Amendment right. EFF has long advocated for the right to record the police, and exercising that right today is more important than ever.

Finally, while recording law enforcement is protected by the First Amendment, be aware that officers may retaliate against you for exercising this right. Please refer to our guidance on safely recording law enforcement activities.

Update: The Reporters Committee for Freedom of the Press (RCFP) has filed a petition for review in the D.C. Circuit (Levine v. FAA).

Sophia Cope

Tech Nonprofits to Feds: Don’t Weaponize Procurement to Undermine AI Trust and Safety

1 week ago

While the very public fight continues between the Department of Defense and Anthropic over whether the government can punish a company for refusing to allow its technology to be used for mass surveillance, another agency of the U.S. government is quietly working to ensure that this dispute will never happen again. How? By rewriting government procurement rules.

Using procurement — meaning, the processes by which governments acquire goods and services — to accomplish policy goals is a time-honored and often appropriate strategy. The government literally expresses its politics and priorities by deciding where and how it spends its money. To that end, governments can and should give our tax dollars to companies and projects that serve the public interest, such as open-source software development, interoperability, or right to repair. And they should withhold those dollars from those that don’t, like shady contractors with inadequate security systems.

New proposed rules for the principal agency in charge of acquiring goods, property, and services for the federal government, the General Services Administration (GSA), are supposed to be primarily an effort to implement one policy priority: promoting “ideologically neutral” American AI innovation. But the new guidelines do far more than that.

As explained in comments filed today with our partners at the Center for Democracy and Technology, the Protect Democracy Project, and the Electronic Privacy Information Center, the GSA’s guidelines include broad provisions that would make AI tools less safe and less useful. If finally adopted, these provisions would become standard components of every federal contract. You can read the full comments here.

The most egregious example is a requirement that contractors and government service providers must license their AI systems to the government for “all lawful purposes.” Given the government’s loose interpretations of the law, ability to find loopholes to surveil you, and willingness to do illegal spying, we need serious and proactive legal restrictions to prevent it from gobbling up all the personal data it can acquire and using even routine bureaucratic data for punitive ends.

Relatedly, the draft rules require that “AI System(s) must not refuse to produce data outputs or conduct analyses based on the Contractor’s or Service Provider’s discretionary policies.” In other words, if a company’s safety guardrails might prevent responding to a government request, the company must disable those guardrails. Given widespread public concerns about AI safety, it seems misguided, at best, to limit the safeguards a company deems necessary.

There are myriad other problems with the draft rules, such as technologically incoherent “anti-Woke” requirements. But, the overarching problem is clear: much of this proposal would not serve the overall public interest in using American tax dollars to promote privacy, safety, and responsible technological innovation. The GSA should start over.

Corynne McSherry

Double Shot of Privacy's Defender in D.C.

1 week ago

You’re invited on a journey inside the privacy battles that shaped the internet. EFF’s Executive Director Cindy Cohn has tangled with the feds, fought for your data security, and argued before judges to protect our access to science and knowledge on the internet.

Join Cindy at two events in Washington, D.C. on April 13 and 14 discussing her new book: Privacy's Defender: My Thirty-Year Fight Against Digital Surveillance, on sale now. All proceeds from the book benefit EFF. Find the full event details below, and RSVP to let us know if you can make it.

April 13 - With Gigi Sohn at Busboys & Poets

Join American Association of Public Broadband (AAPB) Executive Director Gigi Sohn in conversation with EFF Executive Director Cindy Cohn for a discussion about Cindy's work, her new book, and what we're all wondering: Can we have private conversations if we live our lives online?

Privacy's Defender at Busboys & Poets
Busboys & Poets - 14th & V
2021 14th St NW, Washington, DC 20009
Monday, April 13, 2026
6:30 pm to 8:30 pm

Register Now

April 14 - With Women in Security and Privacy (WISP)

Join Women in Security and Privacy (WISP) and EFF for a conversation featuring American University Senior Professorial Lecturer Chelsea Horne and EFF Executive Director Cindy Cohn as they dive into data security, federal access to data, and your digital rights.

Privacy's Defender with WISP
True Reformer Building - Lankford Auditorium
1200 U St NW, Washington, DC 20009
Tuesday, April 14, 2026
6:00 pm to 8:30 pm

Register Now

"Privacy’s Defender is a compelling account of a life well lived and an inspiring call to action for the next generation of civil liberties champions."

~Edward Snowden, whistleblower; author of Permanent Record

Can't make it? Look for Cindy at a city (or web connection) near you! Find the latest tour dates on the Privacy’s Defender hub or follow EFF for more.

Part memoir and part legal history for the general reader, Privacy’s Defender is a compelling testament to just how much privacy and free expression matter in our efforts to combat authoritarianism, grow democracy, and strengthen human rights. Thank you for being a part of that fight.

Want to support the cause and get a copy of the new book? New or renewing EFF members can order one as their annual gift!

Aaron Jue

Weakening Speech Protections Will Punish All of Us—Not Just Meta

1 week ago

Recently, a California Superior Court jury found that Meta and YouTube harmed a user through some of the features they offered. And a New Mexico jury concluded that Meta deceived young users into thinking its platforms were safe from predation. 

It’s clear that many people are frustrated by big tech companies and perhaps Meta in particular. We too have been highly critical of them and have pushed for years to end their harmful corporate surveillance. So it’s not surprising that a jury felt like Mark Zuckerberg and his company, along with YouTube, needed to be held accountable. 

While it would be easy to claim that these cases set a legal precedent that should make social media companies fearful, that’s not exactly true. And that’s actually a good thing for the internet and its users. 

These jury trials were just an early step in a long road through the court system. These cases will now go up on appeal, where the courts’ rulings about the First Amendment and immunity under Section 230 will likely get reconsidered. 

As we have argued many times before, the First Amendment protects both user speech and the choices platforms make on how to deliver that speech (in the same way it protects newspapers' right to curate their editorial pages as they see fit). Features on social media sites that are designed to connect users cannot be separated from the users’ speech, which is why courts have repeatedly held that these features are indeed protected. 

So while it may be tempting to celebrate these juries’ decisions as a "win" against big tech, in fact the ramifications of lowering First Amendment and immunity standards on other speakers—ones that members of the public actually like, and do not want to punish—are bad. We can’t create less protective speech rules for Meta and Google alone just because we want them held accountable for something else.

As we have often said, much of the anger against these companies arises from people rightfully feeling that these companies harvest and exploit their data, and monetize their lives for crass economic reasons. We therefore continue to urge Congress to pass a comprehensive national privacy law with a private right of action to address these core concerns.

David Greene

A Baseless Copyright Claim Against a Web Host—and Why It Failed

1 week ago

Copyright law is supposed to encourage creativity. Too often, it’s used to extract payouts from others.

Higbee & Associates, a law firm known for sending copyright demand letters to website owners, targeted May First Movement Technology, accusing it of infringing a photograph owned by Agence France-Presse (AFP). The claim was baseless. May First didn’t post the photo. It didn’t even own the website where the photo appeared.

May First is a nonprofit membership organization that provides web hosting and technical infrastructure to social justice groups around the world. The allegedly infringing image was posted years ago by one of May First’s members, a human rights group based in Mexico. When May First learned about the copyright complaint, it ensured that the group removed the image.

That should have been the end of it. Instead, the firm demanded payment.

So EFF stepped in as May First’s counsel and explained why AFP and Higbee had no valid claim. After receiving our response, Higbee backed down.

This outcome is a reminder that targets of copyright demands often have strong defenses—especially when someone else posted the material.

Hosting Content Isn’t the Same as Publishing It

Copyright law treats those who create or control content differently from those who simply provide the tools or infrastructure for others to communicate.

In this case, May First provided hosting services but didn’t post the photo. Courts have long recognized that service providers aren’t direct infringers when they merely store material at the direction of users. In those cases, service providers lack “volitional conduct”—the intentional act of copying or distributing the work.

Copyright law also recognizes that intermediaries can’t realistically police everything users upload. That’s why legal protections like the Digital Millennium Copyright Act safe harbors exist. Even outside those safe harbors, courts still shield service providers from liability when they promptly respond to notices.

May First did exactly what the law expects: it notified its member, and the image came down.

A Claim That Should Have Been Withdrawn Much Sooner

The troubling part of this story isn’t just that a demand was sent. It’s that Higbee and AFP continued to demand money and threaten litigation after May First explained that it was merely a hosting provider and had the image removed.

In other words, the claim was built on shaky legal ground from the start. Once May First explained its role, Higbee should have withdrawn its demand. Individuals and small nonprofits shouldn’t need lawyers just to stop aggressive copyright shakedowns.

Statutory Damages Fuel Copyright Abuse

This isn’t an isolated case—it’s a predictable result of copyright law’s statutory damages regime.

Statutory damages can reach $150,000 per work, regardless of actual harm. That enormous leverage incentivizes firms like Higbee to send mass demand letters seeking quick settlements. Even meritless claims can generate revenue when recipients are too afraid, confused, or resource-constrained to fight back.

This hits community organizations, independent publishers, and small service providers that don’t have in-house legal teams especially hard. Faced with the threat of ruinous statutory damages, many just pay what is demanded.

That’s not how copyright law should work.

Know Your Rights

If you receive a copyright demand based on material someone else posted, don’t assume you’re liable.

You may have defenses based on:

  • Your role as a hosting or service provider
  • Lack of volitional conduct
  • Prompt removal of the material after notice
  • The statute of limitations
  • The copyright owner’s failure to timely register the work
  • The absence of actual damages

Every situation is different, but the key point is this: a demand letter is not the same as a valid legal claim.

Standing Up to Copyright Trolls

May First stood its ground, and Higbee abandoned its demand after we explained the law.

But the bigger problem remains. Copyright’s statutory damages framework enables aggressive enforcement tactics that target the wrong parties and chill lawful online activity.

Until lawmakers fix these structural incentives, organizations and individuals will keep facing pressure to pay up—even when they’ve done nothing wrong.

If you get one of these demand letters, remember: you may have more rights than it suggests.

Betty Gedlu

Print Blocking Won't Work - Permission to Print Part 2

1 week 1 day ago

This is the second post in a series on 3D print blocking. For the first entry, check out Print Blocking is Anti-Consumer - Permission to Print Part 1.

Legislators across the U.S. are proposing laws to force “print blockers” on 3D printers sold in their states. This mandated censorware is doomed to fail for its intended purpose, but will still manage to hurt the professional and hobbyist communities relying on these tools.

3D printers are commonly used to repair belongings, decorate homes, print figurines, and so much more. It’s not just hobbyists; 3D printers are also used professionally for parts prototyping and fixturing, small-batch manufacturing, and workspace organization. In rare cases, they’ve also been used to print parts needed for firearm assembly.

Many states have already banned the unlicensed manufacture of firearms using computer-controlled machine tools, known as Computer Numerical Control (CNC) machines, or 3D printers. Recently proposed laws seek to impose technical limitations on 3D printers (and in some cases, CNC machines) in the hope of enforcing this prohibition.

This is a terrible idea; these mandates will be onerous to implement and will lock printer users into vendor software, impose one-time and ongoing costs on both printer vendors and users, and lay the foundation for a 3D-print censorship platform to be used in other jurisdictions. We dive more into these issues in the first part of this series.

On a pragmatic level, however, these state mandates are just wishful thinking. Below, we dive into how 3D printing works, why these laws won’t deter the printing of firearms, and how regular lawful use will be caught in the proposed dragnet.

How 3D Printers Work

To understand the impact of this proposed legislation, we need to know a bit about how 3D printers work. The most common printers work similarly to a computer-controlled hot glue gun on a motion platform; they follow basic commands to maintain temperature, extrude (push) plastic through a nozzle, and move a platform. These motions together build up layers to make a final “print.” Modern 3D printers often offer more features like Wi-Fi connectivity or camera monitoring, but fundamentally they are very simple machines.

The basic instructions used by most 3D printers are called Geometric Code, or G-Code; these commands specify very basic motions such as “move from position A to position B while extruding plastic.” The list of commands that will eventually produce a part is transferred to the printer as a text file thousands to millions of lines long. The printer dutifully follows these instructions with no overall idea of what it is printing.

While it is possible to write G-Code by hand for either a CNC machine or a 3D printer, the vast majority is generated by computer aided manufacturing (CAM) software, often called a “slicer” in 3D printing since it divides a 3D model into many 2D slices and then generates motion instructions.
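
To make that division of labor concrete, here is a minimal, hypothetical sketch (ours, not taken from any real slicer) of the kind of G-Code a slicer might emit for a single rectangular perimeter; a real file contains hundreds of thousands of such lines:

    # Hypothetical sketch of slicer output for one rectangular perimeter layer.
    # "G1" is a real G-Code command (a linear move), but the feed rates (F) and
    # extrusion values (E) here are illustrative, not tuned for any real printer.

    def rectangle_layer(width_mm, depth_mm, z_mm, extrude_per_mm=0.05):
        """Emit G-Code tracing a rectangle at height z_mm, starting from the origin."""
        corners = [(0, 0), (width_mm, 0), (width_mm, depth_mm), (0, depth_mm), (0, 0)]
        lines = [f"G1 Z{z_mm:.2f} F300 ; lift nozzle to layer height"]
        extruded = 0.0  # cumulative filament pushed through the nozzle, in mm
        for (x0, y0), (x1, y1) in zip(corners, corners[1:]):
            edge_len = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
            extruded += edge_len * extrude_per_mm
            lines.append(f"G1 X{x1:.2f} Y{y1:.2f} E{extruded:.4f} F1500 ; extrude along edge")
        return lines

    print("\n".join(rectangle_layer(20, 10, 0.2)))

Each emitted line is just a coordinate, a speed, and an extrusion amount; nothing in the file says “rectangle,” let alone “firearm component.”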

This same general process applies to CNC machines which use G-Code instructions to guide a metal removal tool. CNC machines have been included in previous prohibitions on firearm manufacturing and file distribution and are also targeted in some of these bills.

There are other types of 3D printers, such as those that print concrete, resin, metal, chocolate, and other materials using slightly different methods. All of these would be subject to the proposed requirements, however unlikely it is that anyone could do harm with a gun made of chocolate.

[Image: Simple rectangular 3D model for a test fit]

[Image: Part of a 173,490-line G-Code file produced by a slicer for the simple rectangular model]

How is Firearm Detection Supposed to Work?

Under these proposed laws, manufacturers of consumer 3D printers must ensure their printers only work with their software, and must implement firearm detection algorithms on either the printer itself or in slicer software. These algorithms must detect firearm files using a maintained database of existing models. Vendors must then verify that printers are on the allow-list maintained by the state before they can offer them for sale.

Printer owners will be guilty of a crime if they circumvent these intrusive scanning procedures or load alternative software, which they might do simply because their printer manufacturer has ended support. Owners of existing noncompliant 3D printers in regulated states will be unable to legally resell them on the secondary market.

What Will Actually Happen?

While the proposed laws allow for scanning to happen on either the printer itself or in the slicer software, the reality is more complicated. 

The computers inside many 3D printers have very limited computational and storage ability; it will be impossible for the printer’s computer to render the G-Code into a 3D model to compare with the database of prohibited files. Thus the only way to achieve this through the machine would be to upload all printer files to a cloud comparison tool, creating new delays, errors, and unacceptable invasions of privacy.

Many vendors will instead choose to permanently link their printers to a specific slicer that implements firearm detection. This requires cryptographic signing of G-Code to ensure only authorized prints are completed, and will lock 3D printer owners into the slicer chosen by their printer vendor.
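
As a rough illustration of how that lock-in could work, consider the following sketch. It is our own assumption of a plausible design, not any vendor’s actual scheme: the vendor’s slicer signs each G-Code file with a private key, and the printer’s firmware refuses anything that fails verification against the vendor’s baked-in public key.

    # Hypothetical sketch of vendor G-Code signing; not any real vendor's scheme.
    # Uses the third-party 'cryptography' package (pip install cryptography).
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    vendor_key = Ed25519PrivateKey.generate()   # lives only inside the vendor's slicer
    printer_pubkey = vendor_key.public_key()    # baked into the printer's firmware

    def printer_accepts(gcode: bytes, signature: bytes) -> bool:
        """Firmware check: run only files signed by the vendor's own slicer."""
        try:
            printer_pubkey.verify(signature, gcode)
            return True
        except InvalidSignature:
            return False

    gcode = b"G1 X10.00 Y0.00 E0.5000 F1500\n"
    print(printer_accepts(gcode, vendor_key.sign(gcode)))   # True: vendor slicer output
    other_sig = Ed25519PrivateKey.generate().sign(gcode)    # a third-party slicer's key
    print(printer_accepts(gcode, other_sig))                # False: everything else

Note that nothing in this check inspects what the file would print; it only proves which software produced the file, which is exactly the mechanism that shuts out open source slicers.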

Regardless of the specifics of their implementation, these algorithms will interfere with 3D printers’ ability to print other parts without actually stopping the manufacture of guns. It takes very little skill for a user to make slight design tweaks to either a model or its G-Code to evade detection. One can also design incomplete or heavily adorned models which can be made functional with some post-print alterations. While this would be pioneered by skilled users—like the ones who designed today’s 3D printed guns—once the design and instructions are out there, anyone able to print a gun today will be able to follow suit.
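
The bills don’t specify how matching against the database must work, but if it relies on exact fingerprints of known files (one natural reading of a “maintained database of existing models”), evasion is trivial. Here is a toy sketch, with invented model data, of why a one-thousandth-of-a-millimeter tweak defeats exact matching:

    # Toy sketch of exact-match blocking; the model bytes are invented for illustration.
    import hashlib

    catalogued_model = b"vertex 10.000 4.000 0.000\nvertex 12.500 4.000 0.000\n"
    tweaked_model    = b"vertex 10.000 4.000 0.000\nvertex 12.501 4.000 0.000\n"  # nudged 0.001 mm

    blocklist = {hashlib.sha256(catalogued_model).hexdigest()}

    def print_is_blocked(model_bytes: bytes) -> bool:
        """Block a print only if its exact file hash appears in the database."""
        return hashlib.sha256(model_bytes).hexdigest() in blocklist

    print(print_is_blocked(catalogued_model))  # True: the catalogued file is caught
    print(print_is_blocked(tweaked_model))     # False: a functionally identical part slips through

Looser, geometry-based matching could catch that tweak, but only by also flagging lookalike objects, which is the false-positive problem described below.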

Firearm part identification features also impose costs on 3D printer manufacturers, and hence their end consumers. Manufacturers must develop or license these costly algorithms and continuously maintain and update both the algorithm and the database of firearm models. Older printers that cannot comply will not be able to be resold in states where they are banned, creating additional e-waste.

While those wishing to create guns will still be able to do so, people printing other functional parts will likely be caught up in these algorithms, particularly for things like film props, kids’ toys, or decorative models, which often closely resemble real firearms or firearm components.

What Are The Impacts of These Changes?

Technological restrictions on manufacturing tools’ abilities are harmful for many reasons. EFF is particularly concerned with this regulation locking a 3D printer to proprietary vendor software. Vendors will be able to use this mandate to support only in-house materials, locking users into future purchases. Vendor slicer software is often based on out-of-date open source software, and forcing users onto it deprives them of new features, or even of the use of their printer altogether if the vendor goes out of business. At worst, some of these bills will make it a misdemeanor to fix those problems and gain full control of your printer.

File-scanning frameworks required by this regulation will lay the foundation for future intrusions on privacy and freedom. This requirement could be co-opted to scan prints for copyright violations and abused in ways similar to DMCA takedowns, or to suppress models considered obscene under a patchwork of definitions. What if you were unable to print a repair part because the vendor asserted the model was in violation of their trademark? What if your print was considered obscene?

Regardless of your position on current firearm prohibitions, we should all fight back against this effort to force technological restrictions on 3D printers, and legislators must similarly abandon the idea. These laws impose real costs and potential harms on lawful users, lay the groundwork for future censorship, and simply won’t deter firearm printing.

Cliff Braun

Print Blocking is Anti-Consumer - Permission to Print Part 1

1 week 1 day ago

This is the first post in a series on 3D print blocking. For the next entry, check out Print Blocking Won't Work - Permission to Print Part 2.

When legislators give companies an excuse to write untouchable code, it’s a disaster for everyone. This time, 3D printers are being targeted across a growing number of states. Even if you’ve never used one, you’ve benefited from the open commons these devices have created—which is now under threat.

This isn’t the first time we’ve gone to bat for 3D printing. These devices come in many forms and can construct nearly any shape with a variety of materials. This has made them absolutely crucial for anything from life-saving medical equipment, to little Iron Man helmets for cats, to everyday repairs. For decades these devices have been a proven engine for innovation, while democratizing a sliver of manufacturing for hobbyists, artists, and researchers around the world.

For us all to continue benefiting from this grassroots creativity, we need to guard against the type of corporate centralization that has undermined so much of the promise of the digital era. Unfortunately, some state legislators are looking to repeat old mistakes by demanding printer vendors install an enshittification switch.

In the U.S., three states have recently proposed that commercial 3D-printer manufacturers must ensure their printers only work with their software, and are responsible for checking each print for forbidden shapes—for now, any shape vendors consider too gun-like. The 2D equivalent of these “print-blocking” algorithms would be demanding HP prevent you from printing any harmful messages or recipes. Worse still, some bills would introduce criminal penalties for anyone who bypasses this censorware, or for anyone simply reselling their old printer without these restrictions.

If this sounds like Digital Rights Management (DRM) to you, you’ve been paying attention. This is exactly the sort of regulation that creates a headache and privacy risk for law-abiding users, is a gift for would-be monopolists, and can be totally bypassed by the lawbreakers actually being targeted by the proposals.

Ghosting Innovation

“Print blocking” is currently coming for an unpopular target: ghost guns. These are privately made firearms (PMFs) that are typically harder to trace and can bypass other gun regulations. Contrary to what the proposed regulations suggest, these guns are often not printed at home, but purchased online as mass-produced build-it-yourself kits and accessories.

Scaling production with consumer 3D printers is expensive, error-prone, and relatively slow. Successfully making a working firearm with just a printer still requires some technical know-how, even as 3D printers improve beyond some of these limitations. That said, many have concerns about unlicensed firearm production and sales, which is exactly why these practices are already illegal in many states, including all of the states proposing print blocking.

Mandating algorithmic print-blocking software on 3D printers and CNC machines is just wishful thinking. People illegally printing ghost guns and accessories today will have no qualms with undetectably breaking another law to bypass censoring algorithms. That’s if they even need to—the cat and mouse game of detecting gun-like prints might be doomed from the start, as we dive into in this companion post.

Meanwhile, the overwhelming majority of 3D-printer users do not print guns. Punishing innovators, researchers, and hobbyists because of a handful of outlaws is bad enough, but this proposal does it by also subjecting everyone to the anticompetitive and anticonsumer whims of device manufacturers.

Can’t make the DRM thing work

We’ve been railing against Digital Rights Management (DRM) since the DMCA made it a federal crime to bypass code restricting your use of copyrighted content. DRM has since been weaponized by manufacturers to gain greater leverage over their customers and to enforce anti-competitive practices.

The same enshittification playbook applies to algorithmic print blockers. 

Restricting devices to manufacturer-provided software is an old tactic from the DRM playbook, and is one that puts you in a precarious spot where you need to bend to the whims of the manufacturer.  Only Windows 11 supported? You need a new PC. Tools are cloud-based? You need a solid connection. The company shutters? You now own an expensive paperweight—which used to make paperweights.

It also means useful open source alternatives which fit your needs better than the main vendor’s tools are off the table. The 3D-printer community got a taste of this recently, as manufacturer Bambu Lab pushed out restrictive firmware updates complicating the use of open source software like OrcaSlicer. The community blowback forced some accommodations to keep these alternatives viable. Under the worst of these laws, such accommodations, and other workarounds, would be outlawed with criminal penalties.

People are right to be worried about vendor lock-in, beyond needing the right tool for the job. Making you reliant on their service allows companies to gradually sour the deal. Sometimes this happens visibly, with rising subscription fees, new paywalls, or planned obsolescence. It can also be more covert, like collecting and selling more of your data, or cutting costs by neglecting security and bug fixes.

With expensive hardware on the line, they can get away with any degradation that still costs you less than paying through the nose to switch brands.

Indirectly, this sort of print-blocking mandate is a gift to incumbent businesses making these printers. It raises the upfront and ongoing costs associated with smaller companies selling a 3D printer, including those producing new or specialized machines. The result is fewer and more generic options from a shrinking number of major incumbents for any customer not interested in building their own 3D printer.

Reaching the Melting Point

It’s already clear these bills will be bad for anyone who currently uses a 3D printer, and having alternative software criminalized is particularly devastating for open source contributors. These impacts on manufacturers and consumers add up to a major blow to the entire ecosystem of innovation we have all benefited from for decades.

But this is just the beginning. 

Once the infrastructure for print blocking is in place, it can be broadened. This isn’t a block on a very specific and static design, like the way some copiers block reproductions of currency. Banning a category of design based on its function is a moving target, requiring a constantly expanding blacklist. Nothing in this legislation restricts those updates to firearm-related designs. Rather, if we let proposals like this pass, we open the door for other powerful interests to expand the database of forbidden shapes.

Intellectual property is a clear expansion risk. This could look like Nintendo blocking a Pikachu toy, John Deere blocking a replacement part, or even patent trolls forcing the hand of hardware companies. Repressive regimes, here or abroad, could likewise block the printing of “extreme” and “obscene” symbols, or tools of resistance like popular anti-ICE community whistles.

Finally, even the most sympathetic targets of algorithmic censorship will result in false positives—blocking 3D-printer users’ lawful expression. This is something proven again and again in online moderation. Whether by mistake or by design, a platform that has you locked in has little incentive to offer remedies to this censorship. And these new incentives for companies to surveil each print can also impose a substantial chilling effect on what the user chooses to create.

While 3D printers aren’t in most households, this form of regulation would set a dangerous precedent. A government mandate for on-device censors maintained by corporate algorithms is bad. It won’t work. It consolidates corporate power. It criminalizes and blocks the grassroots innovation and empowerment that has defined the 3D-printer community. We need to roundly reject these onerous restraints on creation.

Rory Mir

Google and Amazon: Acknowledged Risks, and Ignored Responsibilities

1 week 1 day ago

In late 2024, we urged Google and Amazon to honor their human rights commitments, to be more transparent with the public, and to take meaningful action to address the risks posed by Project Nimbus, their cloud computing contract that includes Israel’s Ministry of Defense and the Israeli Security Agency. Since then, a stream of additional reporting has reinforced that our concerns were well-founded. Yet despite mounting evidence of serious risk, both companies have refused to take action. 

Amazon has completely ignored our original and follow-up letters. Google, meanwhile, has repeatedly promised to respond to our questions. Yet more than a year and a half later, we have seen no meaningful action by either company. Neither approach is acceptable given the human rights commitments these companies have made.

Additionally, Microsoft required a public leak before it felt compelled to investigate and confirm that its client, the Israeli government, was indeed misusing its services in ways that violated Microsoft’s public commitments to human rights. This should have given both Google and Amazon an additional reason to take a close look and let the public know what they find, but nothing of the sort materialized.

Google: Known Risks, No Meaningful Action

Google’s own internal assessments warned of the risks associated with Project Nimbus even before the contract was signed. Major news outlets have reported that Google provides the Israeli government with advanced cloud and AI services under Project Nimbus, including large-scale data storage, image and video analysis, and AI model development tools. These capabilities are exceptionally powerful, highly adaptable, and well suited for surveillance and military applications.

Despite those warnings, and the multiple reports since then about human rights abuses by the very portions of the Israeli government that use Google’s and Amazon’s services, the companies continue to operate business as usual. They seem to have taken the position that they need not change course, or even publicly explain themselves, unless the media or other external organizations present definitive proof that their tools have been used in specific violations of international human rights or humanitarian law. While that conclusive public evidence has not yet emerged for all the companies, the risks are obvious, and the companies are aware of them. Instead of conducting robust, transparent human rights due diligence, Amazon and Google keep choosing to look the other way.

Google’s own internal assessments undermine its public posture. According to reporting, Google’s lawyers and policy staff warned that Google Cloud services could be linked to the facilitation of human rights abuses. In the same report, Google employees also raised concerns that the company’s cloud and AI tools could be used for surveillance or other militarized purposes, which seems very likely given the Israeli government’s long-standing reliance on advanced data-driven systems to control and monitor Palestinians.

Google has publicly claimed that Project Nimbus is “not directed at highly sensitive, classified, or military workloads” and is governed by its standard Acceptable Use Policies. Yet reporting has revealed conflicting representations about the contract’s terms, including indications that the Israeli government may be permitted to use any services offered in Google’s cloud catalog for any purpose. Google has declined to publicly resolve these contradictions, and its lack of transparency is problematic. The gap between what Google says publicly and what it knows internally should alarm anyone who hopes to take the company’s human rights commitments seriously.

Google’s and Amazon’s AI Principles Require Proactive Action

Even after being revised last year, Google’s AI Principles continue to commit the company to responsible development and deployment of its technologies, including implementing appropriate human oversight, due diligence, and safeguards to mitigate harmful outcomes and align with widely accepted principles of international law and human rights. While the updated principles no longer explicitly commit Google to avoiding entire categories of harmful use, they still require the company to assess foreseeable risks, employ rigorous monitoring and mitigation measures, and act responsibly throughout the full lifecycle of AI development and deployment.

Amazon has similarly committed to responsible AI practices through its Responsible AI framework for AWS services. The company states that it aims to integrate responsible AI considerations across the full lifecycle of AI design, development, and operation, emphasizing safeguards such as fairness, explainability, privacy and security, safety, transparency, and governance. Amazon also says its AI services are designed with mechanisms for monitoring and risk mitigation to help prevent harmful outputs or misuse and to enable responsible deployment across a range of use cases.

Here, the risks are neither speculative nor remote. They are foreseeable, well-documented, and exacerbated by the context in which Project Nimbus operates: an ongoing military campaign marked by widespread civilian harm and credible allegations of grave human rights violations, including genocide. In such circumstances, waiting for definitive proof is not responsible risk management; it is willful blindness.

Modern cloud and AI systems are designed to be flexible, customizable, and deployable at scale, often beyond the vendor’s direct visibility. That reality is precisely why human rights due diligence must be proactive. Waiting for a leaked document or whistleblower account demonstrating direct misuse, as occurred in Microsoft’s case, means waiting until harm has already been done.

Microsoft’s Experience Should Have Been Warning Enough

As noted above, the recent revelations about Microsoft’s technologies being misused in violation of Microsoft’s commitments by the Israeli military illustrate the dangers of this wait-and-see approach. Google and Amazon should not need a similar incident to recognize what is at stake. The demonstrated misuse of comparable technologies, combined with Google’s and Amazon’s own knowledge of the risks associated with Project Nimbus, should already be sufficient to trigger action.

The appropriate response is to act responsibly and proactively.

Google and Amazon should immediately:

  • Conduct and publish an independent human rights impact assessment of Project Nimbus.
  • Disclose how they evaluate, monitor, and enforce compliance with their AI Principles in high-risk government contracts, including and especially in Project Nimbus.
  • Commit to suspending or restricting services where there is a credible risk of serious human rights harm, even if definitive proof of misuse has not yet emerged.

Waiting Is a Choice, and Not One That Protects Human Rights

Google and Amazon publicly emphasize their commitment to responsible AI and respect for human rights. Those commitments are meaningless if they apply only once harm is undeniable and irreversible. In conflict settings, especially where secrecy and information asymmetry are the norm, companies must act on credible risk, not perfect evidence.

Google and Amazon have the knowledge, the leverage, and the responsibility to act now. Choosing not to is still a choice, and one that carries real consequences for people whose lives are already at risk.

Betty Gedlu

EFF’s Submission to the UN OHCHR on Protection of Human Rights Defenders in the Digital Age

1 week 1 day ago

Governments around the world are adopting new laws and policies aimed at addressing online harms, including laws intended to curb cybercrime and disinformation, and ostensibly protect user safety. While these efforts are often framed as necessary responses to legitimate concerns, they are increasingly being used in ways that restrict fundamental rights.

In a recent submission to the United Nations Office of the High Commissioner for Human Rights, we highlighted how these evolving regulatory approaches are affecting human rights defenders (HRDs) and the broader digital environment in which they operate.

Threats to Human Rights Defenders

Across multiple regions, cybercrime and national security laws are being applied to prosecute lawful expression, restrict access to information, and expand state surveillance. In some cases, these measures are implemented without adequate judicial oversight or clear safeguards, raising concerns about their compatibility with international human rights standards.

Regulatory developments in one jurisdiction are also influencing approaches elsewhere. The UK’s Online Safety Act, for example, has contributed to the global diffusion of “duty of care” frameworks. In other contexts, similar models have been adopted with fewer protections, including provisions that criminalize broadly defined categories of speech or require user identification, increasing risks for those engaged in the defense of human rights.

At the same time, disruptions to internet access—including shutdowns, throttling, and geo-blocking—continue to affect the ability of HRDs to communicate, document abuses, and access support networks. These measures can have significant implications not only for freedom of expression, but also for personal safety, particularly in situations of conflict or political unrest.

The expanded use of digital surveillance technologies further compounds these risks. Spyware and biometric monitoring systems have been deployed against activists and journalists, in some cases across national borders. These practices result in intimidation, detention, and other forms of retaliation.

The practices of social media platforms can also put human rights defenders—and their speech—at risk. Content moderation systems that rely on broadly defined policies, automated enforcement, and limited transparency can result in the removal or suppression of speech, including documentation of human rights violations. Inconsistent enforcement across languages and regions, as well as insufficient avenues for redress, disproportionately affects HRDs and marginalized communities.

Putting Human Rights First

These trends underscore the importance of ensuring that regulatory and corporate responses to online harms are grounded in human rights principles. This includes adopting clear and narrowly tailored legal frameworks, ensuring independent oversight, and providing effective safeguards for privacy, expression, and association.

It also requires meaningful engagement with civil society. Human rights defenders bring essential expertise on the local and contextual impacts of digital policies, and their participation is critical to developing effective and rights-respecting approaches.

As digital technologies continue to shape civic space, protecting the individuals and communities who rely on them to advance human rights remains an urgent priority.

You can read our full submission here.

Jillian C. York

Speaking Freely: Jacob Mchangama

1 week 1 day ago

Interviewer: Jillian York

Jacob Mchangama is a Danish lawyer, human-rights advocate, and public commentator. He is the founder and director of Justitia, a Copenhagen-based think tank focusing on human rights, freedom of speech, and the rule of law. His new book with Jeff Kosseff, The Future of Free Speech: Reversing the Global Decline of Democracy's Most Essential Freedom, comes out on April 7th.

Jillian York: Welcome, Jacob. I'm just going to kick off with a question that I ask everyone, which is: what does free speech mean to you?

Jacob Mchangama: I like to use the definition that Spinoza, the famous Dutch renegade philosopher, used. He said something along these lines, and I'm paraphrasing here: free speech is the right of everyone to think what they want and say what they think, or the freedom to think what they want and say what they think. I think that's a pretty neat definition; even though it may not be fully exhaustive from sort of a legal perspective, I like that.

JY: Excellent. I really like that. I'd like to know what personally shaped your views and also what brought you to doing this work for a living. 

JM: I was born in Copenhagen, Denmark, which is a very liberal, progressive, secular country. And for most of my youth and sort of young adulthood, I did not think much about free speech. It was like breathing the air. It was essentially a value that had already been won. This was up until sort of the mid-naughties. I think everyone was sort of surfing the wave of optimism about freedom and democracy at that time. 

And then Denmark became sort of the epicenter of a global battle of values over religion, the relationship between free speech and religion with the whole cartoon affair. And that's really what I think made me think deep and hard about that, that suddenly people were willing to respond to cartoonists using crayons with AK-47s and killings, but also that a lot of people within Denmark suddenly said, “Well, maybe free speech doesn't include the right to offend, and maybe you're punching down on a vulnerable minority,” which I found to be quite an unpersuasive argument for restricting free speech. 

But what's also interesting was that you saw sort of how positions on free speech shifted. So initially, people on the left were quite apprehensive about free speech because they perceived it to be about an attack on minorities, in this case, Muslim immigrants in Denmark. Then the center right government came into power in Denmark, and then the narrative quickly became, well, we need to restrict certain rights of hate preachers and others in order to defend freedom and democracy. And then suddenly, people on the right who had been free speech absolutists during the cartoon affair were willing to compromise on it, and people on the left who had been sort of, well, “maybe free speech has been taken too far” were suddenly adamant that this was going way too far, and unfortunately, that is very much with us to this day. It's difficult to find a principled, consistent constituency for free speech. 

JY: That's a great way of putting it. I feel like, with obvious differences from country to country, it feels like that kind of polarization is true everywhere, including the bit about flipping sides. I guess my next question, then, is: what do you feel like most people get wrong about free speech?

JM: I think there's a tendency—and I'm talking especially in the West, in the traditional free and open democracies—I think there's a huge tendency to take all the benefits of free speech for granted and focus myopically on the harms, real and perceived, of speech. I mean, just the fact that you and I can sit here, you know, I don't know where you are in the world, but you and I can have a direct, live, uncensored conversation…that is something that you know was unimaginable not that long ago, and we just take that for granted. We take it for granted that we can have access to all the information in the world that would previously have required someone to spend years in libraries, traveling the world, finding rare manuscripts.

We take it for granted, but this is the difference between us and say dissidents in Iran or Russia or Venezuela. We take it for granted that we can go online and vent against our governments and say things, and we can also vent things on social issues that might be deeply offensive to other people, but generally we don't face the risk of being imprisoned or tortured. But that's just not the case in many other countries. 

So, I think those benefits, and also, I would say, when you look at the historical angle, every persecuted or discriminated-against group that has sought and achieved a higher degree of equal dignity, equal protection under the law, has relied on speech. First they relied on speech, then they could rely on free speech at some point, but initially they didn't have free speech, right? So whether it's abolitionists or the civil rights movement in the United States; you know, my good friend Jonathan Rauch, who was sort of at the forefront of securing same-sex marriage in the United States, knows that was a fight that very much relied on speech. And women's rights…fierce women who would protest outside the White House and burn figures of the President in effigy would go to prison. Women didn't have political power. They didn't have guns. They didn't have economic power; they had speech, and that's what you need to petition the government, to shine a light on abuse, to rally other allies, and so on. And I think, unfortunately, we've unlearned those hugely important precedents for why we have free speech today.

JY: I’m definitely going to come back to that. But first I want to ask you about the new book you have coming out with Jeff Kosseff, The Future of Free Speech: Reversing the Global Decline of Democracy's Most Essential Freedom. I'm very excited, I’ve pre-ordered it. 

So, in light of that, I’ve got a two part question: First, what are some of the trends that concern you the most about what’s going on today? And then, what do you think we need to do to ensure that there is a future for free speech?

JM: So first of all, I was thrilled to be able to write it with Jeff, because Jeff is such an authority on First Amendment and Section 230 issues. But from a personal perspective, you could say that this book sort of continues where my previous book on the history of free speech finishes.

And so, based on the idea that we are living through a free speech recession that has become particularly acute in this digital age, where we see what I term as various waves of elite panic that lead to attempts to impose sort of top-down controls on online speech in particular—and this is not only in the countries where you'd expect it, like China and Russia and Iran, but increasingly also in open democracies that used to be the heartland of free speech—there's a tendency, I think, in democracies, to view free speech no longer as sort of a competitive advantage against authoritarian states, or a right that would undermine authoritarians, but as sort of a Trojan horse which allows the enemies of democracies, both at home and abroad, to weaponize free speech against democracy. And so that's why the overwhelming framing of legislative initiatives around free speech is often "this is a danger." This is something we need to do something about. We need to do something about disinformation. We need to do something about hate speech. We need to do something about extremism. We need, you know, child safety laws. We need age verification. And you know, you know the list all too well.

JY: I do, absolutely.

JM: Where I think free speech advocates often fall short is that we're very good at sort of talking about the slippery slope and John Stuart Mill and all these things, and that's important, but very often we don't have compelling proposals to sell to people who are not sort of civil libertarians at heart, and who are generally in favor of free speech, but who are frightened about particular developments and particular manifestations of speech that they think have become so dangerous to, you know, freedom, democracy, whatever interest, that they're willing to compromise free speech.

And so we try to point to some concrete examples of—giving life to the old cliché—fighting bad speech with better speech. So some of those examples are counter speech. There are some great examples. One of them is from Brazil, where there was a black weather woman who was the first black weather woman to be on a prominent TV channel, and she was met with brutal racism. So, you know, what should have been a happy moment for her became quite devastating. And so there was this NGO that printed billboards of these very nasty racist comments, blurred the identity of the users who had said them, but then put them up in the neighborhoods where those people lived. So that was a very powerful way to confront Brazilians with the fact that, you know, racism is alive, it's right here in your neighborhood. And you know they used the N word and everything, and nothing was censored in terms of this racism, which was put right in front of everyone. And it actually led to a lot of people deleting their comments and some apologizing, and led to, I think, a fruitful debate in Brazilian society.

Then you have other types of counter speech. One of them is a Swedish journalist called Mina Dennert. She started the “I am here” movement. So it's a counter speech movement, which I think spans 150,000 volunteers across 15 countries. And they use counter speech online, typically on Meta platforms, I think, where they essentially gather together and push back against hate speech, not necessarily to convince the speaker that they're wrong, but to give support to those who are the victims, but also to essentially convince what is often termed the movable middle, to show them that there are people who disagree with racist hate speech, and there's actually empirical data to suggest that these can be effective strategies. You can also use humor. 

Daryl Davis is a very extreme example. He's a black jazz musician who has made it his life mission to befriend members of the KKK. And he has convinced around 200 members of the KKK to essentially leave it, and he does that just by having a conversation. Because if your worldview is that blacks are inferior and should not enjoy equal rights, and you have a conversation with someone in a way where it becomes impossible for you to uphold that worldview, because the person in front of you is clearly someone who's intelligent, articulate, who can counter all your preconceived notions, then it becomes very difficult to uphold that worldview, right? And you can imagine that those members who leave the KKK then become agents of change within their former communities.

So there are various counter speech strategies that have shown promise, and at the Future of Free Speech [think tank] that I direct, we've developed these toolkits, and we do trainings around the world; I think we've translated them into nine or ten languages. So it's not a panacea, obviously, for everything that's going on, but it's something quite practical, I think. And the good thing about it is also that it doesn't depend on an official definition of hate speech. If you're concerned about a particular type of speech, you can use counter speech to counter it. But you're not engaging in censorship, and we don't have to agree on what the definition of hate speech is. In that way, it’s hopefully an empowering tool.

And another example: we talk about how Taiwan has been quite an inspiring case for using crowd-sourced fact checking, a sort of bottom-up approach to fighting disinformation from China, but also around Covid. So, zero lockdowns and no centralized censorship, and they did better than a lot of Western democracies that used more illiberal methods. And the crowd-sourced fact checking pioneered in Taiwan is what inspired Birdwatch on Twitter, prior to its being taken over by Elon Musk, which is now Community Notes on X. For all the things you might dislike about X, I actually think that feature is quite promising.

JY: Definitely.  I absolutely agree with that, and I'm really glad you mentioned your previous book, which I loved, and the idea of a free speech recession. 

You’ve done so much of this work all over the world, and have learned from people in different places and tried to understand the challenges they’re facing in terms of free speech. We actually started this project, Speaking Freely, primarily to share those different perspectives and to bring them to our readership, the majority of which comes from the U.S. What I’d like to ask you, then, is what do you feel that we in the “West” or in more open societies have to learn from free speech activists in the rest of the world?

JM: Just…the bravery of, say, Iranians who now face complete—and this was even before the attacks by the US and Israel—complete internet bans, but who have also relied on social media platforms and digital creativity to circumvent official propaganda and censorship. I think those types of societies provide sort of a real-time experiment, right? You know, okay, we have social media, and it's messy, and sometimes it's ugly, and sometimes some of these tech companies do things that we disapprove of. But, you know, the cure in terms of further government control, for instance, let's say, getting rid of section 230, adding age verification laws, trying to create exceptions to the First Amendment in cyberspace…we have societies where that is happening, albeit, of course, at a very extreme scale. But would you really trade the freedoms, however messy they are, for that kind of society?

And then, I also worry a lot about the state of affairs in Europe, where I'm from, where it's not unusual, if you're in Germany, to have the police show up at your door if you've insulted a powerful politician. For the book, I interviewed an Israeli Jewish woman who lives in Berlin. She's on the far left and very opposed to Israel's policies, and she's been arrested four times for protesting with a plaque that says, “as an Israeli Jew, stop the genocide in Gaza.” And again, you can agree or disagree about whether there's a genocide, but that's just political speech. Yet the optics of a Jew—an Israeli Jewish woman—being arrested by German police in Berlin in the name of fighting antisemitism is, I think, absurd, right?

JY: I’m laughing only because I think I’ve said that exact sentence in an interview with the German press.

JM: But this is the reality right now. And I think it's also a good example of the fact that there have been people on the left in Europe who have said, well, we need to do something about the far right, and therefore it's okay to crack down, you know, use hate speech laws and so on. And then October 7 happened, and suddenly you see a lot of minorities and people on the left who are becoming the targets of laws against hate speech or glorification of terrorism and so on and so forth. And I think that's a powerful case for why you want a pretty hard-nosed principle of consistent protection of free speech, also online. And, given the priorities of the current administration in the United States, I think that if the First Amendment and Section 230 were not in place in the United States, the kind of laws that you have in Europe would be very easy for the current administration to mold and use against its targets. I mean, it's already going after its enemies, real and perceived, but it often loses in court exactly because of constitutional protections, including the First Amendment. But if those protections weren't there, it would be much more successful, I think, in going after speech that it doesn't like.

JY: That’s such a fantastic answer, and I’m in total agreement. I was actually living in Berlin until quite recently and saw quite a bit of that firsthand. It’s really troubling. 

I want to shift course for a moment. We hopefully have some young people reading this as well, and I think right now in this moment where age verification proposals are happening everywhere—which we at EFF are really concerned about—it’s important to speak to them as well. What advice would you give to young readers who are coming of age around the topic of free speech and who are interested in doing this sort of work?

JM: I think young people are obviously immersed in the digital age, and some of them may never have opened a physical book. I don't know. Maybe it's a Boomer prejudice when I say that, but I don't think it's a stretch to imagine that the vast majority of speech and expression that they're confronted with comes through devices of some sort. I think it's crucial to understand that, you know, the system of free speech was developed before that, and so not to think about free speech only through the lens of the digital age. What came before it is really important to give you some perspective.

So that’s one thing, but I also have two kids, aged 13 and 16, so I’ve thought a lot and fought a lot about some of these issues. I understand where some of the age verification concerns come from. I have parental controls on my children's phones and devices, and try to control it as best as possible, because I do think there can be harms if you spend too much time. But on the other hand, I would also say—and this goes back to the harms and benefits—sometimes there's this analogy that people want to make that social media is like tobacco, which I think is such a poor comparison, because, you know, no one in the world would disagree that tobacco is extremely harmful, right? It's cancerous and all kinds of other things. There are no benefits to tobacco, but social media access, I think, is very different. For instance, I moved to the United States with my family three years ago. My children had no problem speaking English, doing well in school because of YouTube. They could speak almost with the accent, they were immersed into cultural idioms, and they could learn stuff. And also in terms of connections, they have friends back home, it would be very difficult for them to stay in touch the same way that they can now and have connections, if it wasn't due to technology. And so I think that social media for minors also has benefits that make it very, very different from the tobacco analogy. 

Plus, I also think, and here I'm pointing my finger at Jonathan Haidt, that some of the evidence being pushed for these kinds of bans seems not to reflect scientific consensus, and that there are a lot of subject matter experts who actually think that the case is much more muddled than the message that he has pushed in his best-selling book, which is now going the rounds.

It amazed me, though. First of all, let me say I've admired Jonathan Haidt for a long time. I loved his previous work, but I just feel like his crusade on social media for minors and age verification is…in a certain sense, he's gone down some of the roads that he warned against in some of his previous books, in terms of motivated reasoning and confirmation bias and so on. But I saw Jonathan Haidt praise Indonesia's Minister of Digital Affairs for their age verification bill that is supposed to come into effect now. Indonesia is a country that right now, I think, has a bill in place that will give further powers to the government to ban LGBT content, and what’s the justification? Protecting children. It is a country where someone uploaded a TikTok video in which they said an Islamic prayer before eating pork…two years in prison, right? It's a country that sits in the lower half of Freedom House's Freedom on the Net rankings. So it's amazing to me that a good liberal Democrat like Jonathan Haidt would essentially lend his legitimacy to a country like Indonesia, when no serious person can be in doubt that these kinds of laws will be used and abused by a country like Indonesia to crack down on religious, political, and sexual minorities, and on dissent in general.

JY: Absolutely. And that actually fits really well with something I've been thinking a lot about too. I know you've written a lot about the Brussels effect, and I'm trying to look at the ways in which a similar effect—not necessarily coming from Brussels, of course—is shaping internet regulation in different directions.

Now, in terms of laws influencing other laws, age verification is, I think, one of the big ones. I mean, we're seeing these laws modeled after things the UK or Australia or the U.S. has proposed, then made so much worse, and then sometimes echoing back here as well. And I think Indonesia is such a great example of that.

JM: Yeah. I mean, Australia sort of opened the Pandora’s box, and everyone is rushing in now. I think the consequences are likely to be grave, and I think it fits into another issue that I find even more concerning, which is the rehabilitation of the concept of digital sovereignty. If you went back 10 years and talked about digital sovereignty, you would say, “Well, this is something that they do in China or Russia,” but now digital sovereignty is shouted from the rooftops in Brussels and in democracies. 

And you know, I could maybe understand if digital sovereignty meant, yes, we're going to protect our critical infrastructure, or we don't want to be overly reliant on American tech platforms, given the Trump administration's hostility towards Europe. But digital sovereignty now essentially means a concept of sovereignty which asserts that governments, and institutions like the European Union, have the power to determine what types of information and ideas their citizens should be confronted with. Now look up Article 19 of the Universal Declaration of Human Rights. What does it say? Everyone has the right to free expression, which includes, and I'm paraphrasing here, the right to share and impart ideas across frontiers, regardless of media, right? You know this. So now we're reverting to an idea of free expression which says that if information comes from a foreign government, or purports to undermine democratic values in a society, then the government has a right to censor it or to require that an intermediary take mitigating steps against it. I mean, I think that is really a recipe for disaster.

JY: I’m so glad you talked about that. I don’t even think everyone talking about digital sovereignty is working with the same definition. 

JM: No, no, digital sovereignty can mean a lot of things. But there’s no doubt that it’s now being stretched to cover pure information and ideas, rather than critical infrastructure or industrial policy, where it may have a more benign role to play.

JY: Absolutely. Well, we’ve covered a lot of territory, so I’m going to ask you my favorite question, the one we ask everyone: Who is your free speech hero?

JM: I think my free speech hero would be Frederick Douglass. To me, he’s just someone who epitomizes not only being a principled defender of free speech, but someone who practiced free speech. He wrote three autobiographies, I think, and one of them has a foreword by the great abolitionist William Lloyd Garrison, who describes watching and listening to Frederick Douglass give one of his first public speeches in Nantucket in 1841. Garrison describes the impact that Douglass had on the crowd, and he says something along the lines of: “I think I never hated slavery so much as in that very moment.” So you can almost feel the impact of Douglass’s speech, and that’s the gold standard, right, for what speech can do and why it should be free.

JY: Such a great answer. Thank you.

JM: Thank you.




Jillian C. York