Site Blocking Laws Will Always Be a Bad Idea: 2025 in Review


This year, we fought back against the return of a terrible idea that hasn’t improved with age: site blocking laws. 

More than a decade ago, Congress tried to pass SOPA and PIPA—two sweeping bills that would have allowed the government and copyright holders to quickly shut down entire websites based on allegations of piracy. The backlash was massive. Internet users, free speech advocates, and tech companies flooded lawmakers with protests, culminating in an “Internet Blackout” on January 18, 2012. Turns out, Americans don’t like government-run internet blacklists. The bills were ultimately shelved.  

But we’ve never believed they were gone for good. The major media and entertainment companies that backed site blocking in the US in 2012 turned to pushing for site-blocking laws in other countries. Rightsholders continued to ask US courts for site-blocking orders, often winning them without a new law. And sure enough, the Motion Picture Association (MPA) and its allies have asked Congress to try again. 

There were no fewer than three Congressional drafts of site-blocking legislation. Representative Zoe Lofgren kicked off the year with the Foreign Anti-Digital Piracy Act (FADPA). Fellow House member Darrell Issa also claimed to be working on a bill that would make it offensively easy for a studio to block your access to a website based solely on the belief that infringement is happening. Not to be left out, the Senate Judiciary Committee produced the terribly named Block BEARD Act.

None of these three attempts to fundamentally alter the way you experience the internet moved far beyond their press releases. But their number tells us that there is, once again, an appetite among major media conglomerates and politicians to resurrect SOPA/PIPA from the dead.

None of these proposals fixes the flaws of SOPA/PIPA, and none ever could. Site blocking is a flawed idea and a disaster for free expression that no amount of rewriting will fix. There is no way to create a fast lane for removing your access to a website that is not a major threat to the open web. Just as we opposed SOPA/PIPA over ten years ago, we oppose these efforts.  

Site blocking bills seek to build a new infrastructure of censorship into the heart of the internet. They would enable court orders directed to the organizations that make the internet work, like internet service providers, domain name resolvers, and reverse proxy services, compelling them to help block US internet users from visiting websites accused of copyright infringement. The technical means haven’t changed much since 2012: they involve blocking the Internet Protocol (IP) addresses or domain names of websites. These methods are blunt—sledgehammers rather than scalpels. Today, many websites are hosted on cloud infrastructure or use shared IP addresses. Blocking one target can mean blocking thousands of unrelated sites. That kind of digital collateral damage has already happened in Austria, Italy, South Korea, France, and in the US, to name just a few.
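As a minimal illustration of that bluntness, consider a shared-hosting table. All domains and addresses below are hypothetical (drawn from documentation-reserved ranges), but the mechanism is the one described above: blocking the one address tied to an accused site also takes down every unrelated site behind it.

```python
# Hypothetical shared-hosting table: many unrelated sites behind one IP
# address, as is common with cloud hosts, CDNs, and reverse proxies.
hosting = {
    "198.51.100.7": ["accused-piracy-site.example", "local-news.example",
                     "small-business.example", "community-forum.example"],
    "203.0.113.9": ["unrelated-blog.example"],
}

def collateral_damage(blocked_ip, hosting_table):
    """Every site on a blocked address goes dark, accused or not."""
    return hosting_table.get(blocked_ip, [])

# Blocking the address of one accused site silences three bystanders.
dark = collateral_damage("198.51.100.7", hosting)
```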

Given this downside, one would think the benefits of copyright enforcement from these bills ought to be significant. But site blocking is trivially easy to evade. Determined site owners can create the same content on a new domain within hours. Users who want to see blocked content can fire up a VPN or change a single DNS setting to get back online.  
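A toy model (again with hypothetical names and addresses) of why DNS-level orders are so easy to evade: the order binds a particular resolver, not the internet, so pointing a device at any other resolver restores access.

```python
# A court order reaches one resolver's blocklist, nothing more.
BLOCKED_AT_ISP_RESOLVER = {"blocked-site.example"}

def resolve(domain, resolver_blocklist):
    """Toy resolver: a complying resolver refuses to answer for blocked names."""
    if domain in resolver_blocklist:
        return None                 # NXDOMAIN-style refusal
    return "192.0.2.44"             # hypothetical address (TEST-NET-1)

isp_answer = resolve("blocked-site.example", BLOCKED_AT_ISP_RESOLVER)  # blocked
alt_answer = resolve("blocked-site.example", set())  # one DNS setting change later
```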

The limits that lawmakers have proposed to put on these laws are an illusion. While ostensibly aimed at “foreign” websites, they sweep in any website that doesn’t conspicuously display a US origin, putting anonymity at risk. And despite the rhetoric of MPA and others that new laws would be used only by responsible companies against the largest criminal syndicates, laws don’t work that way. Massive new censorship powers invite abuse by opportunists large and small, and the costs to the economy, security, and free expression are widely borne. 

It’s time for Big Media and its friends in Congress to drop this flawed idea. But as long as they keep bringing it up, we’ll keep on rallying internet users of all stripes to fight it. 

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

Mitch Stoltz

EFF's Investigations Expose Flock Safety's Surveillance Abuses: 2025 in Review


Throughout 2025, EFF conducted groundbreaking investigations into Flock Safety's automated license plate reader (ALPR) network, revealing a system designed to enable mass surveillance and susceptible to grave abuses. Our research sparked state and federal investigations, drove landmark litigation, and exposed dangerous expansion into always-listening voice detection technology. We documented how Flock's surveillance infrastructure allowed law enforcement to track protesters exercising their First Amendment rights, target Romani people with discriminatory searches, and surveil women seeking reproductive healthcare.

Flock Enables Surveillance of Protesters

When we obtained datasets representing more than 12 million searches logged by more than 3,900 agencies between December 2024 and October 2025, the patterns were unmistakable. Agencies logged hundreds of searches related to political demonstrations—the 50501 protests in February, Hands Off protests in April, and No Kings protests in June and October. Nineteen agencies conducted dozens of searches specifically tied to No Kings protests alone. Sometimes searches explicitly referenced protest activity; other times, agencies used vague terminology to obscure surveillance of constitutionally protected speech.
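A simplified sketch of the kind of analysis described above. The field names and log entries here are invented, but the approach mirrors how such patterns surface: scanning the free-text "reason" field of ALPR audit logs for protest-related terms.

```python
# Invented audit-log entries, illustrative only.
logs = [
    {"agency": "Dept A", "reason": "No Kings protest vehicle"},
    {"agency": "Dept B", "reason": "stolen vehicle"},
    {"agency": "Dept C", "reason": "hands off demonstration"},
    {"agency": "Dept A", "reason": "50501 rally"},
]

PROTEST_TERMS = ("protest", "no kings", "hands off", "50501",
                 "demonstration", "rally")

def protest_related(entries):
    """Return entries whose search reason matches a protest-related term."""
    return [e for e in entries
            if any(t in e["reason"].lower() for t in PROTEST_TERMS)]

hits = protest_related(logs)  # 3 of these 4 searches reference protest activity
```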

The surveillance extended beyond mass demonstrations. Three agencies used Flock's system to target activists from Direct Action Everywhere, an animal-rights organization using civil disobedience to expose factory farm conditions. Delaware State Police queried the Flock network nine times in March 2025 related to Direct Action Everywhere actions—showing how ALPR surveillance targets groups engaged in activism challenging powerful industries.

Biased Policing and Discriminatory Searches

Our November analysis revealed deeply troubling patterns: more than 80 law enforcement agencies used language perpetuating harmful stereotypes against Romani people when searching the nationwide Flock Safety ALPR network. Between June 2024 and October 2025, police performed hundreds of searches using terms such as "roma" and racial slurs—often without mentioning any suspected crime.

Audit logs revealed searches including "roma traveler," "possible g*psy," and "g*psy ruse." Grand Prairie Police Department in Texas searched for the slur six times while using Flock's "Convoy" feature, which identifies vehicles traveling together—essentially targeting an entire traveling community without specifying any crime. According to a 2020 Harvard University survey, four out of 10 Romani Americans reported being subjected to racial profiling by police. Flock's system makes such discrimination faster and easier to execute at scale.
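Flock's actual "Convoy" implementation isn't public, but the underlying idea can be sketched with invented reads: flag pairs of plates that repeatedly pass the same camera within a short time window, regardless of any suspected crime. Every detail below (plates, cameras, thresholds) is an assumption for illustration.

```python
from collections import Counter
from itertools import combinations

# Invented reads: (plate, camera_id, minute-of-day).
reads = [
    ("AAA111", "cam1", 600), ("BBB222", "cam1", 601),
    ("AAA111", "cam2", 640), ("BBB222", "cam2", 642),
    ("CCC333", "cam2", 900),
]

def convoys(records, window=5, min_cositings=2):
    """Flag plate pairs seen at the same camera within `window` minutes,
    at least `min_cositings` times: 'vehicles traveling together.'"""
    pairs = Counter()
    for (p1, c1, t1), (p2, c2, t2) in combinations(records, 2):
        if p1 != p2 and c1 == c2 and abs(t1 - t2) <= window:
            pairs[tuple(sorted((p1, p2)))] += 1
    return [pair for pair, n in pairs.items() if n >= min_cositings]

# AAA111 and BBB222 co-appear at cam1 and cam2, so they get flagged as a
# "convoy" -- no crime required, just traveling together.
flagged = convoys(reads)
```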

Weaponizing Surveillance Against Reproductive Rights

In October, we obtained documents showing that Texas deputies queried Flock Safety's surveillance data in what police characterized as a missing person investigation, but was actually an abortion case. Deputies initiated a "death investigation" of a "non-viable fetus," logged evidence of a woman's self-managed abortion, and consulted prosecutors about possible charges.

A Johnson County official ran two searches with the note "had an abortion, search for female." The second search probed 6,809 networks, accessing 83,345 cameras across nearly the entire country. This case revealed Flock's fundamental danger: a single query accesses more than 83,000 cameras spanning almost the entire nation, with minimal oversight and maximum potential for abuse—particularly when weaponized against people seeking reproductive healthcare.

Feature Updates Miss the Point

In June, EFF explained why Flock Safety's announced feature updates cannot make ALPRs safe. The company promised privacy-enhancing features like geofencing and retention limits in response to public pressure. But these tweaks don't address the core problem: Flock's business model depends on building a nationwide, interconnected surveillance network that creates risks no software update can eliminate. Our 2025 investigations proved that abuses stem from the architecture itself, not just how individual agencies use the technology.

Accountability and Community Action

EFF's work sparked significant accountability measures. U.S. Rep. Raja Krishnamoorthi and Rep. Robert Garcia launched a formal investigation into Flock's role in "enabling invasive surveillance practices that threaten the privacy, safety, and civil liberties of women, immigrants, and other vulnerable Americans."

Illinois Secretary of State Alexi Giannoulias launched an audit after EFF research showed Flock allowed U.S. Customs and Border Protection to access Illinois data in violation of state privacy laws. In November, EFF partnered with the ACLU of Northern California to file a lawsuit against San Jose and its police department, challenging warrantless searches of millions of ALPR records. Between June 5, 2024 and June 17, 2025, SJPD and other California law enforcement agencies searched San Jose's database 3,965,519 times—a staggering figure illustrating the vast scope of warrantless surveillance enabled by Flock's infrastructure.
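A quick back-of-the-envelope check shows the scale of that San Jose figure:

```python
from datetime import date

# Figures from the lawsuit described above.
searches = 3_965_519
days = (date(2025, 6, 17) - date(2024, 6, 5)).days  # 377 days
per_day = searches / days  # over 10,500 warrantless searches every single day
```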

Our investigations also fueled municipal resistance to Flock Safety. Communities from Austin to Evanston to Eugene successfully canceled or refused to renew their Flock contracts after organizing campaigns centered on our research documenting discriminatory policing, immigration enforcement, threats to reproductive rights, and chilling effects on protest. These victories demonstrate that communities—armed with evidence of Flock's harms—can challenge and reject surveillance infrastructure that threatens civil liberties.

Dangerous New Capabilities: Always-Listening Microphones

In October 2025, Flock announced plans to expand its gunshot detection microphones to listen for "human distress" including screaming. This dangerous expansion transforms audio sensors into powerful surveillance tools monitoring human voices on city streets. High-powered microphones above densely populated areas raise serious questions about wiretapping laws, false alerts, and potential for dangerous police responses to non-emergencies. After EFF exposed this feature, Flock quietly amended its marketing materials to remove explicit references to "screaming"—replacing them with vaguer language about "distress" detection—while continuing to develop and deploy the technology.

Looking Forward

Flock Safety's surveillance infrastructure is not a neutral public safety tool. It's a system that enables and amplifies racist policing, threatens reproductive rights, and chills constitutionally protected speech. Our 2025 investigations proved it beyond doubt. As we head into 2026, EFF will continue exposing these abuses, supporting communities fighting back, and litigating for the constitutional protections that surveillance technology has stripped away.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

Sarah Hamid

Fighting Renewed Attempts to Make ISPs Copyright Cops: 2025 in Review


You might not know it, given the many headlines focused on new questions about copyright and Generative AI, but the year’s biggest copyright case concerned an old-for-the-internet question: do ISPs have to be copyright cops? After years of litigation, that question is now squarely before the Supreme Court. And if the Supreme Court doesn’t reverse a lower court’s ruling, ISPs could be forced to terminate people’s internet access based on nothing more than mere accusations of copyright infringement. This would threaten innocent users who rely on broadband for essential aspects of daily life.

The Stakes: Turning ISPs into Copyright Police

This issue turns on what courts call “secondary liability,” which is the legal idea that someone can be held responsible not for what they did directly, but for what someone else did using their product or service. The case began when music companies sued Cox Communications, arguing that the ISP should be held liable for copyright infringement committed by some of its subscribers. The Court of Appeals for the Fourth Circuit agreed, adopting a “material contribution” standard for contributory copyright liability (a rule for when service providers can be held liable for the actions of users). Under that standard, providing a service that could be used for infringement is enough to create liability when a customer infringes.

The Fourth Circuit’s rule would have devastating consequences for the public. Given copyright law’s draconian penalties, ISPs would be under enormous pressure to terminate accounts whenever they receive an infringement notice, whether or not the actual accountholder has infringed anything, cutting off entire households, schools, libraries, or businesses that share an internet connection. The consequences would include:

  • Public libraries, which provide internet access to millions of Americans who lack it at home, could lose essential service.
  • Universities, hospitals, and local governments could see internet access for whole communities disrupted.
  • Households—especially in low-income and communities of color, which disproportionately share broadband connections with other people—would face collective punishment for the alleged actions of a single user.

And with more than a third of Americans having only one or no broadband provider, many users would have no way to reconnect.

EFF—along with the American Library Association, the Association of Research Libraries, and Re:Create—filed an amicus brief urging the Court to reverse the Fourth Circuit’s decision, taking guidance from patent law. In the Patent Act, where Congress has explicitly defined secondary liability, there’s a different test: contributory infringement exists only where a product is incapable of substantial non-infringing use. Internet access, of course, is overwhelmingly used for lawful purposes, making it the very definition of a “staple article of commerce” that couldn’t give rise to liability under the patent framework.

The Supreme Court held a hearing in the case on December 1, and a majority of the justices seemed troubled by the implications of the Fourth Circuit’s ruling. One exchange was particularly telling: asked what should happen when the notices of infringement target a university account upon which thousands of people rely, Sony’s counsel suggested the university could resolve the issue by essentially slowing internet speeds so infringement might be less appealing. It’s hard to imagine the university community would agree that research, teaching, artmaking, library services, and the myriad other activities that rely on internet access should be throttled because of the actions of a few students. Hopefully the Supreme Court won’t either.

We expect a ruling in the case in the next few months. Fingers crossed that the Court rejects the Fourth Circuit’s draconian rule.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

Corynne McSherry

Operations Security (OPSEC) Trainings: 2025 in Review


It's no secret that digital surveillance and other tech-enabled oppressions are acute dangers for liberation movement workers. The rising tides of tech-fueled authoritarianism and hyper-surveillance are universal themes across the various threat models we consider. EFF's Surveillance Self-Defense project is a vital antidote to these threats, but it's not all we do to help others address these concerns. Our team often receives questions, requests for security trainings and presentations on our research, and asks for general OPSEC advising (operations security: the process of applying digital privacy and information security strategies to an existing workflow or process). This year stood out for the sheer number and urgency of requests we fielded.

Combining efforts across our Public Interest Technology and Activism teams, we consulted with an estimated 66 groups and organizations, with at least 2000 participants attending those sessions. These engagements typically look like OPSEC advising and training, usually merging aspects of threat modeling, cybersecurity 101, secure communications practices, doxxing self-defense, and more. The groups we work with are often focused on issue-spaces that are particularly embattled at the current moment, such as abortion access, advocacy for transgender rights, and climate justice. 

Our ability to offer realistic and community-focused OPSEC advice for these liberation movement workers is something we take great pride in. These groups are often under-resourced and unable to afford typical infosec consulting. Even if they could, traditional information security firms are designed to protect corporate infrastructure, not grassroots activism. Offering this assistance also allows us to stress-test the advice given in the aforementioned Surveillance Self-Defense project with real-world experience and update it when necessary. What we learn from these sessions also informs our blog posts, such as this piece on strategies for overcoming tech-enabled violence for transgender people, and this one surveying the landscape of digital threats in the abortion access movement post-Roe.

There is still much to be done. Maintaining effective privacy and security within one's work is an ongoing process. We are grateful to be included in the OPSEC process planning for so many other human-rights defenders and activists, and we look forward to continuing this work in the coming years. 

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

Daly Barnett

EFF in the Press: 2025 in Review


EFF’s attorneys, activists, and technologists don’t just do the hard, endless work of defending our digital civil liberties — they also spend a lot of time and effort explaining that work to the public via media interviews. 

EFF had thousands of media mentions in 2025, from the smallest hyperlocal outlets to international news behemoths. Our work on street-level surveillance — the technology that police use to spy on our communities — generated a great deal of press attention, particularly regarding automated license plate readers (ALPRs). But we also got a lot of ink and airtime for our three lawsuits against the federal government: one challenging the U.S. Office of Personnel Management's illegal data sharing, a second challenging the State Department's unconstitutional "catch and revoke" program, and the third demanding that the departments of State and Justice reveal what pressure they put on app stores to remove ICE-tracking apps.

Other hot media topics included how travelers can protect themselves against searches of their devices, how protesters can protect themselves from surveillance, and the misguided age-verification laws proliferating across the nation and around the world, which are an attack on privacy and free expression.

On national television, Matthew Guariglia spoke with NBC Nightly News to discuss how more and more police agencies are using private doorbell cameras to surveil neighborhoods. Tori Noble spoke with ABC’s Good Morning America about the dangers of digital price tags, as well as with ABC News Live Prime about privacy concerns over OpenAI’s new web browser.

[Embedded YouTube video: https://www.youtube.com/embed/UrFD-JVHmp4 (content served from youtube.com)]
[Embedded YouTube video: https://www.youtube.com/embed/1hEgPLRmgxo (content served from youtube.com)]

In a sampling of mainstream national media, EFF was cited 33 times by the Washington Post, 16 times by CNN, 13 times by USA Today, 12 times by the Associated Press, 11 times by NBC News, 11 times by the New York Times, 10 times by Reuters, and eight times by National Public Radio. Among tech and legal media, EFF was cited 74 times by Privacy Daily, 35 times by The Verge, 32 times by 404 Media, 32 times by The Register, 26 times by Ars Technica, 25 times by WIRED, 21 times by Law360, 21 times by TechCrunch, 20 times by Gizmodo, and 14 times by Bloomberg Law.

Abroad, EFF was cited in coverage by media outlets in nations including Australia, Bangladesh, Belgium, Canada, Colombia, El Salvador, France, Germany, India, Ireland, New Zealand, Palestine, the Philippines, Slovakia, South Africa, Spain, Trinidad and Tobago, the United Arab Emirates, and the United Kingdom. 

EFF staffers spoke to the masses in their own words via op-eds such as: 

And we ruled the airwaves on podcasts including: 

We're grateful to all the intrepid journalists who keep doing the hard work of reporting accurately on tech and privacy policy, and we encourage them to keep reaching out to us at press@eff.org.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

Josh Richman

Drone as First Responder Programs: 2025 in Review


Drone as first responder (DFR) adoption really took off in 2025. Though the concept has been around since 2018, this year saw more normalization of the technology, its integration into more real-time crime center structures, and the implementation of automated deployment of drones.

A DFR program features a fleet of camera-equipped drones, which can range from just a couple to dozens or more. These are deployed from a launch pad in response to 911 calls and other calls for service, sometimes operated by a drone pilot or, increasingly, autonomously directed to the call location. The appeal is the promise of increased “situational awareness” for officers headed to a call. This video offers a short explanation of DFR, and for a list of all of the cities we know use drones, including DFR programs, check out EFF’s Atlas of Surveillance.

Major Moves from the FAA and Forthcoming Federal Issues

In order to deploy a drone beyond the operator’s line of sight, operators need to receive a waiver from the Federal Aviation Administration (FAA), and all DFR programs require one. Police departments and technology vendors have complained that the process takes too long, and in May, the FAA finalized reworked requirements, leading to a flood of waiver requests. An FAA spokesperson reported that in the first two months of the new waiver process, it had approved 410 such waivers, already accounting for almost a third of the approximately 1,400 DFR waivers that had ever been granted.

The federal government made other major moves on the drone front this year. A month after the new waivers went into effect, President Trump issued an Executive Order with aspirations for advancing the country’s drone industry. And at the end of the year, DJI, one of the largest drone manufacturers in the world and one of the biggest purveyors of law enforcement drones, will be banned from launching new products in the U.S. unless the federal government conducts a security audit mandated by the National Defense Authorization Act. However, at the moment, it doesn’t seem like that audit will happen, and if it doesn’t, it won’t be surprising to see other drone manufacturers leveraging the ban to boost their own products.

Automated Drone Deployment and Tech Integrations

Early iterations of drone use required a human operator, but this year, police drone companies began releasing automated flying machines that don’t require much human intervention at all. New models can rely on AI and automated directions to launch and direct a drone. 


This was the year we saw DFR integrated with other tools, as tech companies teamed up to offer even more powerful surveillance. Flock Safety added automated license plate readers (ALPR) to their drones. Axon and Skydio built on the partnership they launched in 2024. Drone manufacturer Brinc teamed up with Motorola Solutions on a DFR program. Drone company Paladin teamed up with a company called SkyeBrowse to add 3-D mapping of the environment to their list of features.

DFR also is increasingly part of the police plans for real-time crime centers, meaning that the footage being captured by these flying cameras is being integrated into other streams and analyzed in ways that we’re still learning about. 

Transparency Around DFR Deployments

Transparency around adoption, use, and oversight is always crucial, particularly when it comes to police surveillance, and EFF has been tracking the growth of DFR programs across the country. We encourage you to use your local public records laws to investigate them further. Examples of the kinds of requests and the responsive documents people have already received — including flight logs, policies, and other information — can be found on MuckRock.

The Problem with Drones

Flying cameras are bad enough. They can see and record footage from a special vantage point, capturing video of your home, your backyard, and your movements, all of which should require clear policies around retention, audits, and use, including when the cameras shouldn’t be recording. We’re also seeing that add-on analytics and physical attachments (so-called “payloads”), like thermal cameras and even tear gas, can make drones even more powerful, and that police technology companies are encouraging DFR as part of broader surveillance packages.

It's important that next year we all advocate for, and enforce, standards in adopting and using these DFRs. Check the Atlas to see if they are used where you live and learn more about drones and other surveillance tools on EFF’s Street-Level Surveillance Hub.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

Beryl Lipton

EFFector Audio Speaks Up for Our Rights: 2025 Year in Review


This year, you may have heard EFF sounding off about our civil liberties on NPR, BBC Radio, or any number of podcasts. But we also started sharing our voices directly with listeners in 2025. In June, we revamped EFFector, our long-running electronic newsletter, and launched a new audio edition to accompany it.

Providing a recap of the week's most important digital rights news, EFFector's audio companion features exclusive interviews where EFF's lawyers, activists, and technologists can dig deeper into the biggest stories in privacy, free speech, and innovation. Here are just some of the best interviews from EFFector Audio in 2025.

Unpacking a Social Media Spying Scheme

Earlier this year, the Trump administration launched a sprawling surveillance program to spy on the social media activity of millions of noncitizens—and punish those who express views it doesn't like. This fall, EFF's Lisa Femia came onto EFFector Audio to explain how this scheme works, its impact on free speech, and, importantly, why EFF is suing to stop it.

"We think all of this is coming together as a way to chill people's speech and make it so they do not feel comfortable expressing core political viewpoints protected by the First Amendment," Femia said.


Challenging the Mass Surveillance of Drivers

But Lisa was hardly the only guest talking about surveillance. In November, EFF's Andrew Crocker spoke to EFFector about Automated License Plate Readers (ALPRs), a particularly invasive and widespread form of surveillance. ALPR camera networks take pictures of every passing vehicle and upload the location information of millions of drivers into central databases. Police can then search these databases—typically without any judicial approval—to instantly reconstruct driver movements over weeks, months, or even years at a time.
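The records below are invented, but they sketch why pooled plate reads are so revealing: sorting a single plate's sightings by timestamp reconstructs a travel history, no warrant required.

```python
from datetime import datetime

# Invented ALPR records: (plate, timestamp, camera location).
reads = [
    ("7ABC123", "2025-03-02 08:55", "clinic parking lot"),
    ("7ABC123", "2025-03-01 09:01", "office garage"),
    ("7ABC123", "2025-03-01 18:40", "place of worship"),
]

def timeline(plate, records):
    """Reconstruct one driver's movements by sorting their reads in time."""
    hits = [(datetime.strptime(ts, "%Y-%m-%d %H:%M"), loc)
            for p, ts, loc in records if p == plate]
    return [loc for _, loc in sorted(hits)]

# A single query turns scattered camera hits into an intimate itinerary:
# where this driver works, worships, and seeks healthcare, in order.
route = timeline("7ABC123", reads)
```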

"It really is going to be a very detailed picture of your habits over the course of a long period of time," said Crocker, explaining how ALPR location data can reveal where you work, worship, and many other intimate details about your life. Crocker also talked about a new lawsuit, filed by two nonprofits represented by EFF and the ACLU of Northern California, challenging the city of San Jose's use of ALPR searches without a warrant.

Similarly, EFF's Mario Trujillo joined EFFector in early November to discuss the legal issues and mass surveillance risks around face recognition in consumer devices.

Simple Tips to Take Control of Your Privacy

Online privacy isn’t dead. But tech giants have tried to make protecting it as annoying as possible. To help users take back control, we celebrated Opt Out October, sharing daily privacy tips all month long on our blog. In addition to laying down some privacy basics, EFF's Thorin Klosowski talked to EFFector about how small steps to protect your data can build up into big differences.

"This is a way to kind of break it down into small tasks that you can do every day and accomplish a lot," said Klosowski. "By the end of it, you will have taken back a considerable amount of your privacy."

User privacy was the focus of a number of EFFector interviews. In July, EFF's Lena Cohen spoke about what lawmakers, tech companies, and individuals can do to fight online tracking. That same month, Matthew Guariglia talked about precautions consumers can take before bringing surveillance devices like smart doorbells into their homes.

Digging Into the Next Wave of Internet Censorship

One of the most troubling trends of 2025 was the proliferation of age verification laws, which require online services to check, estimate, or verify users’ ages. Though these mandates claim to protect children, they ultimately create harmful censorship and surveillance regimes that put everyone—adults and young people alike—at risk.

This summer, EFF's Rin Alajaji came onto EFFector Audio to explain how these laws work and why we need to speak out against them.

"Every person listening here can push back against these laws that expand censorship," she said. "We like to say that if you care about internet freedom, this fight is yours."

This was just one of several interviews about free speech online. This year, EFFector also hosted Paige Collings to talk about the chaotic rollout of the UK's Online Safety Act and Lisa Femia (again!) to discuss the abortion censorship crisis on social media.

You can hear all these episodes and future installments of EFFector's audio companion on YouTube or the Internet Archive. Or check out our revamped EFFector newsletter by subscribing at eff.org/effector!

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

Hudson Hongo

Procurement Power—When Cities Realized They Can Just Say No: 2025 in Review

2 days 11 hours ago

In 2025, elected officials across the country began treating surveillance technology purchases differently: not as inevitable administrative procurements handled by police departments, but as political decisions subject to council oversight and constituent pressure. This shift proved to be the most effective anti-surveillance strategy of the year.

Since February, at least 23 jurisdictions ended, canceled, or rejected Flock Safety ALPR programs (including Austin, Oak Park, Evanston, Hays County, San Marcos, Eugene, Springfield, and Denver) by recognizing surveillance procurement as political power, not administrative routine.

Legacy Practices & Obfuscation

For decades, cities have been caught in what researchers call "legacy procurement practices": administrative norms that prioritize "efficiency" and "cost thresholds" over democratic review. 

Vendors exploit this inertia through the "pilot loophole." As Taraaz and the Collaborative Research Center for Resilience (CRCR) note in a recent report, "no-cost offers" and free trials allow police departments to bypass formal procurement channels entirely. By the time the bill comes due, the surveillance is already normalized in the community, turning a purchase decision into a "continuation of service" that is politically difficult to stop.

This bureaucracy obscures the power that surveillance vendors have over municipal procurement decisions. As Arti Walker-Peddakotla details, this is a deliberate strategy: vendors secure "acquiescence" by hiding the political nature of surveillance behind administrative veils, framing tools as "force multipliers" and burying contracts in consent agendas. For local electeds, the pressure to "outsource" government decision-making makes vendor marketing compelling. Vendors use "cooperative purchasing" agreements to bypass competitive bidding, effectively privatizing the policy-making process. 

The result is a dangerous "information asymmetry" where cities become dependent on vendors for critical data governance decisions. The 2025 cancellations finally broke that dynamic.

The Procurement Moment

This year, cities stopped accepting this "administrative" frame. The shift came from three converging forces: audit findings that exposed Flock's lack of safeguards, growing community organizing pressure, and elected officials finally recognizing that saying "no" to a renewal was not just an option—it was the responsible choice.

When Austin let its Flock pilot expire on July 1, the decision reflected a political judgment: constituents rejected a nationwide network used for immigration enforcement. It wasn't a debate about retention rates; it was a refusal to renew.

These cancellations were also acts of fiscal stewardship. By demanding evidence of efficacy (and receiving none), officials in Hays County, Texas and San Marcos, Texas rejected the "force multiplier" myth. They treated the refusal of unproven technology not just as activism, but as a basic fiduciary duty. In Oak Park, Illinois, trustees canceled eight cameras after an audit found Flock lacked safeguards, while Evanston terminated its 19-camera network shortly after. Eugene and Springfield, Oregon terminated 82 combined cameras in December. City electeds have also realized that every renewal is a vote for "vendor lock-in." As EPIC warns, once proprietary systems are entrenched, cities lose ownership of their own public safety data, making it nearly impossible to switch providers or enforce transparency later.

The shift was not universal. Denver illustrated the tension when Mayor Mike Johnston overrode a unanimous council rejection to extend Flock's contract. Council Member Sarah Parady rightly identified this as "mass surveillance" imposed "with no public process." This is exactly why procurement must be reclaimed: when treated as technical, surveillance vendors control the conversation; when recognized as political, constituents gain leverage.

Cities Hold the Line Against Mass Surveillance

EFF has spent years documenting how procurement functions as a lever for surveillance expansion, from our work documenting Flock Safety's troubling data-sharing practices with ICE and federal law enforcement to our broader advocacy on surveillance technology procurement reform. The 2025 victories show that when cities understand procurement as political rather than technical, they can say no. Procurement power can be the most direct route to stopping mass surveillance. 

As cities move into 2026, the lesson is clear: surveillance is a choice, not a mandate, and your community has the power to refuse it. The question isn't whether technology can police more effectively; it's whether your community wants to be policed this way. That decision belongs to constituents, not vendors.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

Sarah Hamid

Defending Encryption in the U.S. and Abroad: 2025 in Review

2 days 13 hours ago

Defending encryption has long been a bedrock of our work. Without encryption, it's impossible to have private conversations or private data storage. This year, we’ve seen attacks on these rights from all around the world. 

Europe Goes All in On Breaking Encryption, Mostly Fails (For Now)

The European Union Council has repeatedly tried to pass a controversial message scanning proposal, known as “Chat Control,” that would require secure messaging providers to scan the contents of messages. Every time it has come up since it was first introduced in 2022, it has been batted down—because no matter how you slice it, client-side scanning breaks end-to-end encryption. The Danish presidency seemed poised to succeed in passing Chat Control this year, but strong pushback from across the EU caused them to reconsider and rework their stance. In its current state, Chat Control isn’t perfect, but it at least includes strong language to protect encryption, which is good news for users. 

Meanwhile, France tried to pass its own encryption-breaking legislation. Unlike Chat Control, which pushed for client-side scanning, France took a different approach: allowing so-called “ghost participants,” where law enforcement could silently join encrypted chats. Thankfully, the French National Assembly did the right thing and rejected this dangerous proposal.

It wasn’t all wins, though.

Perhaps the most concerning encryption issue is still ongoing in the United Kingdom, where the British government reportedly ordered Apple to backdoor its optional end-to-end encryption in iCloud. In response, Apple disabled one of its strongest security features, Advanced Data Protection, for U.K. users. After some back and forth with the U.S., the U.K. allegedly rewrote the demand to clarify that it applies only to British users. That doesn’t make it any better. Tribunal hearings are planned for 2026, and we’ll continue to monitor developments.

Speaking of developments to keep an eye on, the European Commission released its “Technology Roadmap on Encryption,” which discusses new ways for law enforcement to access encrypted data. There’s a lot that could happen with this roadmap, but let’s be clear here: EU officials should scrap any roadmap focused on encryption circumvention and instead invest in stronger, more widespread use of end-to-end encryption. 

U.S. Attempts Fall Flat

The U.S. had its share of battles, too. The Senate re-introduced the STOP CSAM Act, which threatened to compromise encryption by requiring encrypted communication providers to have knowledge about what sorts of content their services are being used to send. The bill allows encrypted services to raise a legal defense—but only after they’ve been sued. That's not good enough. STOP CSAM would force encryption providers to defend against costly lawsuits over content they can't see or control. And a jury could still consider the use of encryption to be evidence of wrongdoing. 

In Florida, a bill ostensibly about minors' social media use also just so happened to demand a backdoor into encryption services—already an incredible overreach. It went further, attempting to ban disappearing messages and grant parents unrestricted access to their kids’ messages as well. Thankfully, the Florida Legislature ended without passing it.

It is unlikely these sorts of attempts to undermine encryption will suddenly stop. But whatever comes next, EFF will continue to stand up for everyone's right to use encryption to have secure and private online communications. 

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

Thorin Klosowski

Lawmakers Must Listen to Young People Before Regulating Their Internet Access: 2025 in Review

3 days 8 hours ago

State and federal lawmakers have introduced multiple proposals in 2025 to curtail or outright block children and teenagers from accessing legal content on the internet. These lawmakers argue that internet and social media platforms have an obligation to censor or suppress speech that they consider “harmful” to young people. Unfortunately, in many of these legislative debates, lawmakers are not listening to kids, whose experiences online are overwhelmingly more positive than what lawmakers claim. 

Fortunately, EFF has spent the past year trying to make sure that lawmakers hear young people’s voices. We have also been reminding lawmakers that minors, like everyone else, have First Amendment rights to express themselves online. 

These rights extend to a young person’s ability to use social media both to speak for themselves and to access the speech of others online. Young people also have the right to control how they access this speech, including through personalized feeds and other organized, digestible formats. Preventing teenagers from accessing the same internet and social media channels that adults use is a clear violation of their right to free expression. 

On top of violating minors’ First Amendment rights, these laws also actively harm minors who rely on the internet to find community, find resources to end abuse, or access information about their health. Cutting off internet access acutely harms LGBTQ+ youth and others who lack familial or community support where they live. These laws also empower the state to decide what information is acceptable for all young people, overriding parents’ choices. 

Additionally, all of the laws that would attempt to create a “kid friendly” internet and an “adults-only” internet are a threat to everyone, adults included. These mandates encourage the adoption of invasive and dangerous age-verification technology. Beyond being creepy, these systems incentivize more data collection and increase the risk of data breaches and other harms. Requiring everyone online to provide their ID or other proof of their age could block legal adults from accessing lawful speech if they don’t have the right form of ID. Furthermore, this trend infringes on people’s right to be anonymous online, and creates a chilling effect that may deter people from joining certain services or speaking on certain topics.

EFF has lobbied against these bills at both the state and federal level, and we have also filed briefs in support of several lawsuits to protect the First Amendment rights of minors. We will continue to advocate for the rights of everyone online – including minors – in the future.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

India McKinney

From Speakeasies to DEF CON—Celebrating With EFF Members: 2025 Year In Review

3 days 8 hours ago

It’s been a great year to be on EFF’s membership team. There's no better feeling than hanging out with your fellow digital freedom supporters and being able to say, “Oh yeah, and we’re suing the government!” We’ve done that a lot this year—and that’s all thanks to people like you. 

As a token of appreciation for supporting EFF’s mission to protect privacy and free expression online for all people, we put a lot of care into meeting the members who make our work possible. Whether it’s hosting meetups, traveling to conferences, or finding new and fun ways to explain what we’re fighting for, connecting with you is always a highlight of the job.

EFF Speakeasy Meet Ups

One of my favorite perks we offer for EFF members is exclusive invites for Speakeasy meet ups. It’s a chance for us to meet the very passionate members who fuel our work! 

This year, we hosted Speakeasies across the country while making the rounds at conferences. We met supporters in Mesa, AZ during CactusCon; Pasadena, CA during SCALE; Portland, OR during BSidesPDX; New York, NY during HOPE and BSidesNYC; and Seattle, WA during our panel at the University of Washington. 

Of course, we also had to host a Speakeasy in our home court—and for the first time it took place in the South Bay Area in Mountain View, CA at Hacker Dojo! There, members of EFF’s D.C. Legislative team spoke about EFF’s legislative efforts and how they’ll shape digital rights for all. We even recorded that conversation for you to watch on YouTube or the Internet Archive.

And we can’t forget about our global community! Our annual online Speakeasy brought together members around the world for a conversation and Q&A with our friends at Women in Security and Privacy (WISP) about online behavioral tracking and the data broker industry. We heard and answered great questions about pushing back on online tracking and what legislative steps we can take to strengthen privacy. 

Summer Security Conferences

Say what you will about Vegas—nothing compares to the energy of seeing thousands of EFF supporters during the summer security conferences: BSidesLV, Black Hat USA, and DEF CON. This year over one thousand people signed up to support the digital freedom movement in just that one week.  

If you’ve ever seen us at a conference, you know the drill: a table full of EFF staff frantically handing out swag, answering questions, and excitedly saying hi to everyone that stops by and supports our work. This year it was especially fun to see how many people brought their Rayhunter devices.

And of course, it wouldn’t be a trip to Vegas without EFF’s annual DEF CON Poker Tournament. This year 48 supporters and friends played for money, glory, and the future of the web—all with EFF’s very own playing cards. For the first time ever, the jellybean trophy went to the same winner two years in a row! 


EFFecting Change Livestream Series

We ramped up our livestream series, EFFecting Change, this year with a total of six livestreams covering topics including the future of social media with guests from Mastodon, Bluesky, and Spill; EFF’s 35th Anniversary and what’s next in the fight for privacy and free speech online; and generative AI, including how to address the risks of the technology while protecting civil liberties and human rights online. 

We’ve got more in store for EFFecting Change in 2026, so be sure to stay up-to-date by signing up for updates.

EFF Awards Ceremony

EFF is at the forefront of protecting users from dystopian surveillance and unjust censorship online. But we’re not the only ones doing this work, and we couldn’t do it without other organizations in the space. So, every year we like to award those who are courageously championing the digital rights movement. 

This year we gave out three awards: the EFF Award for Defending Digital Freedoms went to Software Freedom Law Center, India, the EFF Award for Protecting Americans’ Data went to Erie Meyer, and the EFF Award for Leading Immigration and Surveillance Litigation went to Just Futures Law. You can watch the EFF Awards here and see photos from the event too!


And It's All Thanks to You

That doesn’t even cover all of it! We also got to celebrate 35 years of EFF in July with limited-edition challenge coins and all-new member swag—plus a livestream covering EFF’s history and what’s next for us.

Grab EFF's 35th Anniversary t-shirt when you become a member today!

As the new year approaches, I always like to look back on the bright spots—especially the joy of hanging out with this incredible community. The world can feel hectic, but connecting with supporters like you is a reminder of how much good we can build when we work together. 

Many thanks to all of the EFF members who joined forces with us this year. If you’ve been meaning to join, but haven’t yet, year-end is a great time to do so.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

Christian Romero

Local Communities Are Winning Against ALPR Surveillance—Here’s How: 2025 in Review

3 days 11 hours ago

Across ideologically diverse communities, 2025 campaigns against automated license plate reader (ALPR) surveillance kept winning. From Austin, Texas to Cambridge, Massachusetts to Eugene, Oregon, successful campaigns combined three practical elements: a motivated political champion on city council, organized grassroots pressure from affected communities, and technical assistance at critical decision moments.

The 2025 Formula for Refusal

  • Institutional Authority: Council members leveraging "procurement power"—local democracy's most underutilized tool—to say no. 
  • Community Mobilization: A base that refuses to debate "better policy" and demands "no cameras." 
  • Shared Intelligence: Local coalitions utilizing shared research on contract timelines and vendor breaches.

Practical Wins Over Perfect Policies

In 2025, organizers embraced the "ugly" win: prioritizing immediate contract cancellations over the "political purity" of perfect privacy laws. Procurement fights are often messy, bureaucratic battles rather than high-minded legislative debates, but they stop surveillance where it starts—at the checkbook. In Austin, more than 30 community groups built a coalition that forced a contract cancellation, achieving via purchasing power what policy reform often delays. 

In Hays County, Texas, the victory wasn't about a new law, but a contract termination. Commissioner Michelle Cohen grounded her vote in vendor accountability, explaining: "It's more about the company's practices versus the technology." These victories might lack the permanence of a statute, but every camera turned off built a culture of refusal that made the next rejection easier. This was the organizing principle: take the practical win and build on it.

Start with the Harm

Winning campaigns didn't debate technical specifications or abstract privacy principles. They started with documented harms that surveillance enabled. EFF's research showing police used Flock's network to track Romani people with discriminatory search terms, surveil women seeking abortion care, and monitor protesters exercising First Amendment rights became the evidence organizers used to build power.

In Olympia, Washington, nearly 200 community members attended a counter-information rally outside city hall on Dec. 2. The DeFlock Olympia movement countered police department claims point-by-point with detailed citations about data breaches and discriminatory policing. By Dec. 3, cameras had been covered pending removal.

In Cambridge, the city council voted unanimously in October to pause Flock cameras after residents, the ACLU of Massachusetts, and Digital Fourth raised concerns. When Flock later installed two cameras "without the city's awareness," a city spokesperson called it a "material breach of our trust" and terminated the contract entirely. The unexpected camera installation itself became an organizing moment.

The Inside-Outside Game

The winning formula worked because it aligned different actors around refusing vehicular mass surveillance systems without requiring everyone to become experts. Community members organized neighbors and testified at hearings, creating political conditions where elected officials could refuse surveillance and survive politically. Council champions used their institutional authority to exercise "procurement power": the ability to categorically refuse surveillance technology.

To fuel these fights, organizers leveraged technical assets like investigation guides and contract timeline analysis. This technical capacity allowed community members to lead effectively without needing to become policy experts. In Eugene and Springfield, Oregon, Eyes Off Eugene organized sustained opposition over months while providing city council members political cover to refuse. "This is [a] very wonderful and exciting victory," organizer Kamryn Stringfield said. "This only happened due to the organized campaign led by Eyes Off Eugene and other local groups."

Refusal Crosses Political Divides

A common misconception collapsed in 2025: that surveillance technology can only be resisted in progressive jurisdictions. San Marcos, Texas let its contract lapse after a 3-3 deadlock, with Council Member Amanda Rodriguez questioning whether the system showed "return on investment." Hays County commissioners in Texas voted to terminate. Small towns like Gig Harbor, Washington rejected proposals before deployment. 

As community partners like the Rural Privacy Coalition emphasize, "privacy is a rural value." These victories came from communities with different political cultures but shared recognition that mass surveillance systems weren't worth the cost or risk regardless of zip code.

Communities Learning From Each Other

In 2025, communities no longer needed to build expertise from scratch—they could access shared investigation guides, learn from victories in neighboring jurisdictions, and connect with organizers who had won similar fights. When Austin canceled its contract, it inspired organizing across Texas. When the Illinois Secretary of State's audit revealed illegal data sharing with federal immigration enforcement, Evanston used those findings to terminate 19 cameras.

The combination of different forms of power—institutional authority, community mobilization, and shared intelligence—was a defining feature of this year's most effective campaigns. By bringing these elements together, community coalitions have secured cancellations or rejections in nearly two dozen jurisdictions since February, building the infrastructure to make the next refusal easier and the movement unstoppable.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

Sarah Hamid

States Take On Tough Tech Policy Battles: 2025 in Review

4 days 5 hours ago

State legislatures—from Olympia, WA, to Honolulu, HI, to Tallahassee, FL, and everywhere in between—kept EFF’s state legislative team busy throughout 2025.

We saw some great wins and steps forward this year. Washington became the eighth state to enshrine the right to repair. Several states stepped up to protect the privacy of location data, with bills recognizing your location data isn't just a pin on a map—it's a powerful tool that reveals far more than most people realize. Other state legislators moved to protect health privacy. And California passed a law making it easier for people to exercise their privacy rights under the state’s consumer data privacy law.

Several states also took up debates around how to legislate and regulate artificial intelligence and its many applications. We’ll continue to work with allies in states including California and Colorado on proposals that address the real harms from some uses of AI, without infringing on the rights of creators and individual users.

We’ve also fought some troubling bills in states across the country this year. In April, Florida introduced a bill that would have created a backdoor for law enforcement to have easy access to messages if minors use encrypted platforms. Thankfully, the Florida legislature did not pass the bill this year. But it should set off serious alarm bells for anyone who cares about digital rights. And it was just one of a growing set of bills from states that, even when well-intentioned, threaten to take a wrecking ball to privacy, expression, and security in the name of protecting young people online.

Take, for example, the burgeoning number of age verification, age gating, age assurance, and age estimation bills. Instead of making the internet safer for children, these laws can incentivize or intersect with existing systems that collect vast amounts of data to force all users—regardless of age—to verify their identity just to access basic content or products. South Dakota and Wyoming, for example, are requiring any website that hosts any sexual content to implement age verification measures. But, given the way those laws are written, that definition could sweep in essentially any site that allows user-generated or published content. That could include everyday resources such as social media networks, online retailers, and streaming platforms.

Lawmakers, not satisfied with putting age gates on the internet, are also increasingly going after VPNs (virtual private networks) to prevent anyone from circumventing these new digital walls. VPNs are not foolproof tools—and they shouldn’t be necessary to access legally protected speech—but they should be available to people who want to use them. We will continue to stand against these types of bills, not just for the sake of free expression, but to protect the free flow of information essential to a free society.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

Hayley Tsukayama

Fighting to Keep Bad Patents in Check: 2025 in Review

4 days 11 hours ago

A functioning patent system depends on one basic principle: bad patents must be challengeable. In 2025, that principle was repeatedly tested—by Congress, by the U.S. Patent and Trademark Office (USPTO), and by a small number of large patent owners determined to weaken public challenges. 

Two damaging bills, PERA and PREVAIL, were reintroduced in Congress. At the same time, the USPTO attempted a sweeping rollback of inter partes review (IPR), one of the most important mechanisms for challenging wrongly granted patents. 

EFF pushed back—on Capitol Hill, inside the Patent Office, and alongside thousands of supporters who made their voices impossible to ignore.

Congress Weighed Bills That Would Undo Core Safeguards

The Patent Eligibility Restoration Act, or PERA, would overturn the Supreme Court’s Alice and Myriad decisions—reviving patents on abstract software ideas, and even allowing patents on isolated human genes. PREVAIL, introduced by the same main sponsors in Congress, would seriously weaken the IPR process by raising the burden of proof, limiting who can file challenges, forcing petitioners to surrender court defenses, and giving patent owners new ways to rewrite their claims mid-review.

Together, these bills would have dismantled much of the progress made over the last decade. 

We reminded Congress that abstract software patents—like those we’ve seen on online photo contests, upselling prompts, matchmaking, and scavenger hunts—are exactly the kind of junk claims patent trolls use to threaten creators and small developers. We also pointed out that if PREVAIL had been law in 2013, EFF could not have brought the IPR that crushed the so-called “podcasting patent.” 

EFF’s supporters amplified our message, sending thousands of messages to Congress urging lawmakers to reject these bills. The result: neither bill advanced to the full committee. The effort to rewrite patent law behind closed doors stalled out once public debate caught up with it. 

Patent Office Shifts To An “Era of No”

Congress’ push from the outside was stymied, at least for now. Unfortunately, what may prove far more effective is the push from within by new USPTO leadership, which is working to dismantle systems and safeguards that protect the public from the worst patents.

Early in the year, the Patent Office signaled it would once again lean more heavily on procedural denials, reviving an approach that allowed patent challenges to be thrown out basically whenever there was an ongoing court case involving the same patent. But the most consequential move came later: a sweeping proposal unveiled in October that would make IPR nearly unusable for those who need it most.

2025 also marked a sharp practical shift inside the agency. Newly appointed USPTO Director John Squires took personal control of IPR institution decisions, and rejected all 34 of the first IPR petitions that came across his desk. As one leading patent blog put it, an “era of no” has been ushered in at the Patent Office. 

The October Rulemaking: Making Bad Patents Untouchable

The USPTO’s proposed rule changes would: 

  • Force defendants to surrender their court defenses if they use IPR—an intense burden for anyone actually facing a lawsuit. 
  • Make patents effectively unchallengeable after a single prior dispute, even if that challenge was limited, incomplete, or years out of date.
  • Block IPR entirely if a district court case is projected to move faster than the Patent Trial and Appeal Board (PTAB). 

These changes wouldn’t “balance” the system as USPTO claims—they would make bad patents effectively untouchable. Patent trolls and aggressive licensors would be insulated, while the public would face higher costs and fewer options to fight back. 

We sounded the alarm on these proposed rules and asked supporters to register their opposition. More than 4,000 of you did—thank you! Overall, more than 11,000 comments were submitted. An analysis of the comments shows that stakeholders and the public overwhelmingly oppose the proposal, with 97% of comments weighing in against it.

In those comments, small business owners described being hit with vague patents they could never afford to fight in court. Developers and open-source contributors explained that IPR is often the only realistic check on bad software patents. Leading academics, patient-advocacy groups, and major tech-community institutions echoed the same point: you cannot issue hundreds of thousands of patents a year and then block one of the only mechanisms that corrects the mistakes.

The Linux Foundation warned that the rules “would effectively remove IPRs as a viable mechanism” for developers.

GitHub emphasized the increased risk and litigation cost for open-source communities.

Twenty-two patent law professors called the proposal unlawful and harmful to innovation.

Patients for Affordable Drugs detailed the real-world impact of striking invalid pharmaceutical patents, showing that drug prices can plummet once junk patents are removed.

Heading Into 2026

The USPTO now faces thousands of substantive comments. Whether the agency backs off or tries to push ahead, EFF will stay engaged. Congress may also revisit PERA, PREVAIL, or similar proposals next year. Some patent owners will continue to push for rules that shield low-quality patents from any meaningful review.

But 2025 proved something important: When people understand how patent abuse affects developers, small businesses, patients, and creators, they show up—and when they do, their actions can shape what happens next. 

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

Joe Mullin

Defending Access to Abortion Information Online: 2025 in Review

4 days 12 hours ago

As reproductive rights face growing attacks globally, access to content about reproductive healthcare and abortion online has never been more critical. The internet has essential information on topics like where and how to access care, links to abortion funds, and guidance on ways to navigate potential legal risks. Reproductive rights activists use the internet to organize and build community, and healthcare providers rely on it to distribute accurate information to people in need. And for those living in one of the 20+ states where abortion is banned or heavily restricted, the internet is often the only place to find these potentially life-saving resources.  

Nonetheless, both the government and private platforms are increasingly censoring abortion-related speech, at a time when we need it most. Anti-abortion legislators are actively trying to pass laws to limit online speech about abortion, making it harder to share critical resources, discuss legal options, seek safe care, and advocate for reproductive rights. At the same time, social media platforms have increasingly cracked down on abortion-related content, leading to the suppression, shadow-banning, and outright removal of posts and accounts.  

This year, we worked tirelessly to fight censorship of abortion-related information online—whether it originated from the largest social media platforms or the largest state in the U.S.   

As defenders of free expression and access to information online, we have a role to play in understanding where and how this is happening, shining a light on practices that endanger these rights, and taking action to ensure they’re protected.

Exposing Social Media Censorship 

At the start of 2025, we launched the #StopCensoringAbortion campaign to collect and spotlight the growing number of stories from users who have had abortion-related content censored by social media platforms. Our goal was to better understand how and why this is happening, raise awareness, and hold the platforms accountable.

Thanks to nearly 100 submissions from educators, advocates, clinics, researchers, and influencers around the world, we confirmed what many already suspected: this speech is being removed and restricted by platforms at an alarming rate. Across the submissions we received, we saw a pattern of over-enforcement, lack of transparency, and arbitrary moderation decisions aimed at reproductive health and reproductive justice advocates.

Notably, almost none of the submissions we reviewed actually violated the platforms’ stated policies. The most common reason Meta gave for removing abortion-related content was that it violated policies on Restricted Goods and Services, which prohibit any “attempts to buy, sell, trade, donate, gift or ask for pharmaceutical drugs.” But the content being removed wasn’t selling medications. Most of the censored posts simply provided factual, educational information—content that’s expressly allowed by Meta.  

In a month-long 10-part series, we broke down our findings. We examined the trends we saw, including stories of individuals and organizations who needed to rely on internal connections at Meta to get wrongfully censored posts restored, examples of account suspensions without sufficient warnings, and an exploration of Meta policies and how they are wrongly applied. We provided practical tips for users to protect their posts from being removed, and we called on platforms to adopt steps to ensure transparency, a functional appeals process, more human review of posts, and consistent and fair enforcement of rules.  

Social media platforms have a First Amendment right to curate the content on their sites—they can remove whatever content they want—and we recognize that. But companies like Meta claim they care about free speech, and their policies explicitly claim to allow educational information and discussions about abortion. We think they have a duty to live up to those promises. Our #StopCensoringAbortion campaign clearly shows that this isn’t happening and underscores the urgent need for platforms to review and consistently enforce their policies fairly and transparently.  

Combating Legislative Attacks on Free Speech  

On top of platform censorship, lawmakers are trying to police what people can say and see about abortion online. So in 2025, we also fought against censorship of abortion information on the legislative front.  

EFF opposed Texas Senate Bill (S.B.) 2880, which would not only outlaw the sale and distribution of abortion pills, but also make it illegal to “provide information” on how to obtain an abortion-inducing drug. Simply having an online conversation about mifepristone or exchanging emails about it could run afoul of the law.  

On top of going after online speakers who create and post content themselves, the bill also targeted social media platforms, websites, email services, messaging apps, and any other “interactive computer service” simply for hosting or making that content available. This was a clear attempt by Texas legislators to keep people from learning about abortion drugs, or even knowing that they exist, by wiping this information from the internet altogether.  

We laid out the glaring free-speech issues with S.B. 2880 and explained how dire the consequences would be if it passed. And we asked everyone who cares about free speech to urge lawmakers to oppose this bill and others like it. Fortunately, these concerns were heard, and the bill never became law.

Our team also spent much of the year fighting dangerous age verification legislation, often touted as “child safety” bills, at both the federal and state level. We raised the alarm on how age verification laws pose significant challenges for users trying to access critical content—including vital information about sexual and reproductive health. By age-gating the internet, these laws could result in websites requiring users to submit identification before accessing information about abortion or reproductive healthcare. This undermines the ability to remain private and anonymous while searching for abortion information online. 

Protecting Life-Saving Information Online 

Abortion information saves lives, and the internet is a primary (and sometimes only) source where people can access it.  

As attacks on abortion information intensify, EFF will continue to fight so that users can post, host, and access abortion-related content without fear of being silenced. We’ll keep pushing for greater accountability from social media platforms and fighting against harmful legislation aimed at censoring these vital resources. The fight is far from over, but we will remain steadfast in ensuring that everyone, regardless of where they live, can access life-saving information and make informed decisions about their health and rights.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

Jennifer Pinsof

Artificial Intelligence, Copyright, and the Fight for User Rights: 2025 in Review

5 days 10 hours ago

A tidal wave of copyright lawsuits against AI developers threatens beneficial uses of AI, like creative expression, legal research, and scientific advancement. How courts decide these cases will profoundly shape the future of this technology, including its capabilities, its costs, and whether its evolution will be shaped by the democratizing forces of the open market or the whims of an oligopoly. As these cases finished their trials and moved to appeals courts in 2025, EFF intervened to defend fair use, promote competition, and protect everyone’s rights to build and benefit from this technology.

At the same time, rightsholders stepped up their efforts to control fair uses through everything from state AI laws to technical standards that influence how the web functions. In 2025, EFF fought policies that threaten the open web in the California State Legislature, the Internet Engineering Task Force, and beyond.

Fair Use Still Protects Learning—Even by Machines

Copyright lawsuits against AI developers often follow a similar pattern: plaintiffs argue that use of their works to train the models was infringement and then developers counter that their training is fair use. While legal theories vary, the core issue in many of these cases is whether using copyrighted works to train AI is a fair use.

We think that it is. Courts have long recognized that copying works for analysis, indexing, or search is a classic fair use. That principle doesn’t change because a statistical model is doing the reading. AI training is a legitimate, transformative fair use, not a substitute for the original works.

More importantly, expanding copyright would do more harm than good: while creators have legitimate concerns about AI, expanding copyright won’t protect jobs from automation. But overbroad licensing requirements risk entrenching Big Tech’s dominance, shutting out small developers, and undermining fair use protections for researchers and artists. Copyright is a tool that gives the most powerful companies even more control—not a check on Big Tech. And attacking the models and their outputs by attacking training—i.e. “learning” from existing works—is a dangerous move. It risks a core principle of freedom of expression: that training and learning—by anyone—should not be endangered by restrictive rightsholders.

In most of the AI cases, courts have yet to consider—let alone decide—whether fair use applies, but in 2025, things began to speed up.

Some cases, however, have already reached the courts of appeals. We advocated for fair use rights and sensible limits on copyright in amicus briefs filed in Doe v. GitHub, Thomson Reuters v. Ross Intelligence, and Bartz v. Anthropic, three early AI copyright appeals that could shape copyright law and influence dozens of other cases. We also filed an amicus brief in Kadrey v. Meta, one of the first decisions on the merits of the fair use defense in an AI copyright case.

How the courts decide the fair use questions in these cases could profoundly shape the future of AI—and whether legacy gatekeepers will have the power to control it. As these cases move forward, EFF will continue to defend your fair use rights.

Protecting the Open Web in the IETF

Rightsholders also tried to make an end-run around fair use by changing the technical standards that shape much of the internet. The IETF, an Internet standards body, has been developing proposals that pose a major threat to the open web. These proposals would allow websites to express “preference signals” against certain uses of scraped data—effectively giving them veto power over fair uses like AI training and web search.
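To make the mechanism concrete, here is a minimal sketch of how a crawler might check such a signal before reusing fetched content. The header name (`Content-Usage`) and the `train-ai=n` vocabulary are hypothetical placeholders, not the actual IETF draft syntax, which was still in flux as of this writing:

```python
# Hypothetical sketch only: the header name and value vocabulary below are
# illustrative placeholders, not the real IETF preference-signal syntax.

def parse_usage_preferences(header_value):
    """Parse a comma-separated list like 'train-ai=n, search=y' into a dict."""
    prefs = {}
    for item in header_value.split(","):
        item = item.strip()
        if "=" in item:
            key, _, value = item.partition("=")
            prefs[key.strip()] = value.strip().lower() == "y"
    return prefs

def may_use_for(purpose, headers):
    """Return True unless the site has signaled an objection to this purpose."""
    prefs = parse_usage_preferences(headers.get("Content-Usage", ""))
    # Absent a signal, the use is governed by law (e.g. fair use),
    # not by the header.
    return prefs.get(purpose, True)

headers = {"Content-Usage": "train-ai=n, search=y"}
print(may_use_for("train-ai", headers))  # False: site objects to AI training
print(may_use_for("search", headers))    # True: search indexing allowed
```

The concern described above is visible even in this toy version: whoever controls the signal vocabulary effectively decides which downstream uses—accessibility tools, research, archiving—can be switched off at the infrastructure level.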

Overly restrictive preference signaling threatens a wide range of important uses—from accessibility tools for people with disabilities to research efforts aimed at holding governments accountable. Worse, the IETF is dominated by publishers and tech companies seeking to embed their business models into the infrastructure of the internet. These companies aren’t looking out for the billions of internet users who rely on the open web.

That’s where EFF comes in. We advocated for users’ interests in the IETF, and helped defeat the most dangerous aspects of these proposals—at least for now.

Looking Ahead

The AI copyright battles of 2025 were never just about compensation—they were about control. EFF will continue working in courts, legislatures, and standards bodies to protect creativity and innovation from copyright maximalists.

Tori Noble

Age Verification Threats Across the Globe: 2025 in Review

5 days 12 hours ago

Age verification mandates won't magically keep young people safer online, but that has not stopped governments around the world from spending this year implementing or attempting to introduce legislation requiring all online users to verify their ages before accessing the digital space.

The UK’s misguided approach to protecting young people online grabbed many headlines due to the reckless and chaotic rollout of the country’s Online Safety Act, but the UK was not alone: courts in France ruled that porn websites can check users’ ages; the European Commission pushed forward with plans to test its age-verification app; and Australia’s ban on under-16s accessing social media recently took effect.

Through this wave of age verification bills, politicians are burdening internet users and forcing them to sacrifice their anonymity, privacy, and security simply to access lawful speech. For adults, this is true even if that speech constitutes sexual or explicit content. These laws are censorship laws, and rules banning sexual content usually hurt marginalized communities and groups that serve them the most.

In response, we’ve spent this year urging governments to pause these legislative initiatives and instead protect everyone’s right to speak and access information online. Here are three ways we pushed back against these bills in 2025:

Social Media Bans for Young People

Banning a certain user group changes nothing about a platform’s problematic privacy practices, insufficient content moderation, or business models based on the exploitation of people’s attention and data. And assuming that young people will always find ways to circumvent age restrictions, the ones that do will be left without any protections or age-appropriate experiences.

Yet Australia’s government recently decided to ignore these dangers by rolling out a sweeping regime built around age verification that bans users under 16 from having social media accounts. In this world-first ban, platforms are required to introduce age assurance tools to block under-16s, demonstrate that they have taken “reasonable steps” to deactivate accounts used by under-16s, and prevent any new accounts from being created—or face fines of up to 49.5 million Australian dollars ($32 million USD). The 10 platforms covered by the ban—Instagram, Facebook, Threads, Snapchat, YouTube, TikTok, Kick, Reddit, Twitch, and X—have each said they’ll comply with the legislation, leading to young people losing access to their accounts overnight.

Similarly, the European Commission this year took a first step towards mandatory age verification through its guidelines under Article 28 of the Digital Services Act—a move that could undermine privacy, expression, and participation rights for young people, rights that are fully enshrined in international human rights law. EFF submitted feedback to the Commission’s consultation on the guidelines, emphasizing a critical point: mandatory age verification measures are not the right way to protect minors, and any online safety measure for young people must also safeguard their privacy and security. Unfortunately, the EU Parliament has already gone a step further, proposing an EU digital minimum age of 16 for access to social media—a move that aligns with European Commission President Ursula von der Leyen’s recent public support for measures inspired by Australia’s model.

Push for Age Assurance on All Users 

This year, the UK had a moment—and not a good one. In late July, new rules took effect under the Online Safety Act that now require all online services available in the UK to assess whether they host content considered harmful to children, and if so, these services must introduce age checks to prevent children from accessing such content. Online services are also required to change their algorithms and moderation systems to ensure that content defined as harmful, like violent imagery, is not shown to young people.

The UK’s scramble to find an effective age verification method shows us that there isn't one, and it’s high time for politicians to take that seriously. As we argued throughout this year, and during the passage of the Online Safety Act, any attempt to protect young people online should not include measures that require platforms to collect data or remove privacy protections around users’ identities. The approach that UK politicians have taken with the Online Safety Act is reckless, short-sighted, and will introduce more harm to the very young people that it is trying to protect.

We’re seeing these narratives and regulatory initiatives replicated from the UK to U.S. states and other global jurisdictions, and we’ll continue urging politicians not to follow the UK’s lead in passing similar legislation—and to instead explore more holistic approaches to protecting all users online.

Rushed Age Assurance through the EU Digital Wallet

There is not yet a legal obligation to verify users’ ages at the EU level, but policymakers and regulators are already embracing harmful age verification and age assessment measures in the name of reducing online harms.

These demands steer the debate toward identity-based solutions, such as the EU Digital Identity Wallet, which will become available in 2026. The wallet brings its own set of privacy and security concerns, such as long-term identifiers (which could enable tracking) and over-exposure of personal information. Even more concerning, instead of waiting for the wallet’s full launch, the Commission rushed a “mini AV” app out this year ahead of schedule, citing an urgent need to address concerns about children and the harms that may come to them online.

This proposed solution directly ties national ID to age verification. It also invites mission creep: while the focus of the “mini AV” app is for now on verifying age, its release to the public means that the infrastructure to expand ID checks to other purposes across EU member states is already in place, should governments mandate that expansion in the future.

Without the proper safeguards, this infrastructure could be leveraged inappropriately—all the more reason why lawmakers should explore more holistic approaches to children’s safety.

Ways Forward

The internet is an essential resource for young people and adults to access information, explore community, and find themselves. The issue of online safety is not solved through technology alone, and young people deserve a more intentional approach to protecting their safety and privacy online—not this lazy strategy that causes more harm than it solves.

Rather than weakening rights for already vulnerable communities online, politicians must acknowledge these shortcomings and explore less invasive approaches to protecting all people from online harms. We encourage politicians to pursue what is best, not what is easy; in the meantime, we’ll continue fighting for the rights of all users on the internet in 2026.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

Paige Collings

EFF’s ‘How to Fix the Internet’ Podcast: 2025 in Review

6 days 13 hours ago

2025 was a stellar year for EFF’s award-winning podcast, “How to Fix the Internet,” as our sixth season focused on the tools and technology of freedom. 

Everywhere we turn, we see dystopian stories about technology’s impact on our lives and our futures—from tracking-based surveillance capitalism, to street-level government surveillance, to the dominance of a few large platforms choking innovation, to the growing efforts by authoritarian governments to control what we see and say—and the landscape can feel bleak. Exposing and articulating these problems is important, but so is envisioning and then building solutions. That’s where our podcast comes in.

EFF's How to Fix the Internet podcast offers a better way forward. Through curious conversations with some of the leading minds in law and technology, EFF Executive Director Cindy Cohn and Activism Director Jason Kelley explore creative solutions to some of today’s biggest tech challenges. Our sixth season, which ran from May through September, featured: 

  • “Digital Autonomy for Bodily Autonomy” – We all leave digital trails as we navigate the internet—records of what we searched for, what we bought, who we talked to, where we went or want to go in the real world—and those trails usually are owned by the big corporations behind the platforms we use. But what if we valued our digital autonomy the way that we do our bodily autonomy? Digital Defense Fund Director Kate Bertash joined Cindy and Jason to discuss how creativity and community can align to center people in the digital world and make us freer both online and offline.
  • “Love the Internet Before You Hate On It” – There’s a weird belief out there that tech critics hate technology. But do movie critics hate movies? Do food critics hate food? No! The most effective, insightful critics do what they do because they love something so deeply that they want to see it made even better. Molly White—a researcher, software engineer, and writer who focuses on the cryptocurrency industry, blockchains, web3, and other tech—joined Cindy and Jason to discuss working toward a human-centered internet that gives everyone a sense of control and interaction; open to all in the way that Wikipedia was (and still is) for her and so many others: not just as a static knowledge resource, but as something in which we can all participate.
  • “Why Three is Tor's Magic Number” – Many in Silicon Valley, and in U.S. business at large, seem to believe innovation springs only from competition, a race to build the next big thing first, cheaper, better, best. But what if collaboration and community breed innovation just as well as adversarial competition? Tor Project Executive Director Isabela Fernandes joined Cindy and Jason to discuss the importance of not just accepting technology as it’s given to us, but collaboratively breaking it, tinkering with it, and rebuilding it together until it becomes the technology that we really need to make our world a better place.
  • “Securing Journalism on the ‘Data-Greedy’ Internet” – Public-interest journalism speaks truth to power, so protecting press freedom is part of protecting democracy. But what does it take to digitally secure journalists’ work in an environment where critics, hackers, oppressive regimes, and others seem to have the free press in their crosshairs? Freedom of the Press Foundation Digital Security Director Harlo Holmes joined Cindy and Jason to discuss the tools and techniques that help journalists protect themselves and their sources while keeping the world informed.
  • “Cryptography Makes a Post-Quantum Leap” – The cryptography that protects our privacy and security online relies on the fact that even the strongest computers will take essentially forever to do certain tasks, like factoring the products of large primes and finding discrete logarithms, which are important for RSA encryption, Diffie-Hellman key exchanges, and elliptic curve encryption. But what happens when those problems—and the cryptography they underpin—are no longer infeasible for computers to solve? Will our online defenses collapse? Research and applied cryptographer Deirdre Connolly joined Cindy and Jason to discuss not only how post-quantum cryptography can shore up those existing walls but also help us find entirely new methods of protecting our information.
  • “Finding the Joy in Digital Security” – Many people approach digital security training with furrowed brows, as an obstacle to overcome. But what if learning to keep your tech safe and secure was consistently playful and fun? People react better to learning and retain more knowledge when they're having a good time. It doesn’t mean the topic isn’t serious—it’s just about intentionally approaching a serious topic with joy. East Africa digital security trainer Helen Andromedon joined Cindy and Jason to discuss making digital security less complicated, more relevant, and more joyful to real users, and encouraging all women and girls to take online safety into their own hands so that they can feel fully present and invested in the digital world.
  • “Smashing the Tech Oligarchy” – Many of the internet’s thorniest problems can be attributed to the concentration of power in a few corporate hands: the surveillance capitalism that makes it profitable to invade our privacy, the lack of algorithmic transparency that turns artificial intelligence and other tech into impenetrable black boxes, the rent-seeking behavior that seeks to monopolize and mega-monetize an existing market instead of creating new products or markets, and much more. Tech journalist and critic Kara Swisher joined Cindy and Jason to discuss regulation that can keep people safe online without stifling innovation, creating an internet that’s transparent and beneficial for all, not just a collection of fiefdoms run by a handful of homogenous oligarchs.
  • “Separating AI Hope from AI Hype” – If you believe the hype, artificial intelligence will soon take all our jobs, or solve all our problems, or destroy all boundaries between reality and lies, or help us live forever, or take over the world and exterminate humanity. That’s a pretty wide spectrum, and leaves a lot of people very confused about what exactly AI can and can’t do. Princeton Professor and “AI Snake Oil” publisher Arvind Narayanan joined Cindy and Jason to discuss how we get to a world in which AI can improve aspects of our lives from education to transportation—if we make some system improvements first—and how AI will likely work in ways that we barely notice but that help us grow and thrive.
  • “Protecting Privacy in Your Brain” – Rapidly advancing "neurotechnology" could offer new ways for people with brain trauma or degenerative diseases to communicate, as the New York Times reported this month, but it also could open the door to abusing the privacy of the most personal data of all: our thoughts. Worse yet, it could allow manipulating how people perceive and process reality, as well as their responses to it—a Pandora’s box of epic proportions. Neuroscientist Rafael Yuste and human rights lawyer Jared Genser, co-founders of The Neurorights Foundation, joined Cindy and Jason to discuss how technology is advancing our understanding of what it means to be human, and the solid legal guardrails they're building to protect the privacy of the mind.
  • “Building and Preserving the Library of Everything” – Access to knowledge not only creates an informed populace that democracy requires but also gives people the tools they need to thrive. And the internet has radically expanded access to knowledge in ways that earlier generations could only have dreamed of—so long as that knowledge is allowed to flow freely. Internet Archive founder and digital librarian Brewster Kahle joined Cindy and Jason to discuss how the free flow of knowledge makes all of us more free.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

Josh Richman

Politicians Rushed Through An Online Speech “Solution.” Victims Deserve Better.

6 days 13 hours ago

Earlier this year, both chambers of Congress passed the TAKE IT DOWN Act. This bill, while well-intentioned, gives powerful people a new legal tool to force online platforms to remove lawful speech that they simply don't like. 

The bill, sponsored by Senate Commerce Chair Ted Cruz (R-TX) and Rep. Maria Salazar (R-FL), sought to speed up the removal of troubling online content: non-consensual intimate imagery (NCII). The spread of NCII is a serious problem, as is digitally altered NCII, sometimes called “deepfakes.” That’s why 48 states have specific laws criminalizing the distribution of NCII, in addition to the long-existing defamation, harassment, and extortion statutes—all of which can be brought to bear against those who abuse NCII. Congress can and should protect victims of NCII by enforcing and improving these laws. 

Unfortunately, TAKE IT DOWN takes another approach: it creates an unneeded notice-and-takedown system that threatens free expression, user privacy, and due process, without meaningfully addressing the problem it seeks to solve. 

While Congress was still debating the bill, EFF, along with the Center for Democracy & Technology (CDT), Authors Guild, Demand Progress Action, Fight for the Future, Freedom of the Press Foundation, New America’s Open Technology Institute, Public Knowledge, Restore The Fourth, SIECUS: Sex Ed for Social Change, TechFreedom, and Woodhull Freedom Foundation, sent a letter to the Senate outlining our concerns with the proposal. 

First, TAKE IT DOWN’s removal provision applies to a much broader category of content—potentially any images involving intimate or sexual content—than the narrower NCII definitions found elsewhere in the law. We worry that bad-faith actors will use the law’s expansive definition to remove lawful speech that is not NCII and may not even contain sexual content. 

Worse, the law contains no protections against frivolous or bad-faith takedown requests. Lawful content—including satire, journalism, and political speech—could be wrongly censored. The law requires that apps and websites remove content within 48 hours or face significant legal risks. That ultra-tight deadline means that small apps or websites will have to comply so quickly to avoid legal risk that they won’t be able to investigate or verify claims.

Finally, there are no legal protections for providers when they believe a takedown request was sent in bad faith to target lawful speech. TAKE IT DOWN is a one-way censorship ratchet, and its fast timeline discourages providers from standing up for their users’ free speech rights. 

This new law could lead to the use of automated filters that tend to flag legal content, from commentary to news reporting. Communications providers that offer users end-to-end encrypted messaging, meanwhile, may be served with notices they simply cannot comply with, given the fact that these providers can’t view the contents of messages on their platforms. Platforms could respond by abandoning encryption entirely in order to be able to monitor content, turning private conversations into surveilled spaces.

We asked for several changes to protect legitimate speech that is not NCII, and to include common-sense safeguards for encryption. Thousands of EFF members joined us by writing similar messages to their Senators and Representatives. That resulted in several attempts to offer common-sense amendments during the Committee process. 

However, Congress passed the bill without those needed changes, and it was signed into law in May 2025. The main takedown provisions of the bill will take effect in 2026. We’ll be pushing online platforms to be transparent about the content they take down because of this law, and will be on the watch for takedowns that overreach and censor lawful speech. 

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

India McKinney

How to Sustain Privacy & Free Speech

1 week ago

The world has been forced to bear the weight of billionaires and politicians who salivate over making tech more invasive, more controlling, and more hostile. That's why EFF’s mission for your digital rights is crucial, and why your support matters more than ever. You can fuel the fight for privacy and free speech with as little as $5 or $10 a month:

Join EFF

Become a Monthly Sustaining Donor

When you donate by December 31, your monthly support goes even further by unlocking bonus Year-End Challenge grants! With your help, EFF can receive up to seven grants that increase in size as the number of supporters grows (check our progress on the counter). Many thanks to EFF’s Board of Directors for creating the 2025 challenge fund.

The EFF team makes every dollar count. EFF members giving just $10 or less each month raised $400,000 for digital rights in the last year. That funds court motions, software development, educational campaigns, and investigations for the public good every day. EFF member support matters, and we need you.

📣 Stand Together: That’s How We Win 📣

You can help EFF hold corporations and authoritarians to account. We fight for tech users in the courts and we lobby and educate lawmakers, all while developing free privacy-enhancing tech and educational resources so people can protect themselves now. Your monthly donation will keep us going strong in this pivotal moment.

Get your choice of free gear when you join EFF!

Your privacy online and the right to express yourself are powerful—and that’s why authoritarians work so viciously to take them away. But together, we can make sure technology remains a tool for the people. Become a monthly Sustaining Donor or give a one-time donation of any size by December 31 and unlock additional Year-End Challenge grants!

Give Today

Unlock Year-End Challenge Grants

Already an EFF Member? Help Us Spread the Word!

EFF Members have carried the movement for privacy and free expression for decades. You can help move the mission even further! Here’s some sample language that you can share with your networks:


We need to stand together and ensure technology works for us, not against us. Donate any amount to EFF by Dec 31, and you'll help unlock challenge grants! https://eff.org/yec
Bluesky | Facebook | LinkedIn | Mastodon
(more at eff.org/social)

_________________

EFF is a member-supported U.S. 501(c)(3) organization. We’re celebrating TWELVE YEARS of top ratings from the nonprofit watchdog Charity Navigator! Your donation is tax-deductible as allowed by law.

Maggie Kazmierczak