EFF’s ‘How to Fix the Internet’ Podcast: 2025 in Review

2025 was a stellar year for EFF’s award-winning podcast, “How to Fix the Internet,” as our sixth season focused on the tools and technology of freedom. 

It seems like everywhere we turn, we see dystopian stories about technology’s impact on our lives and our futures: from tracking-based surveillance capitalism, to street-level government surveillance, to the dominance of a few large platforms choking innovation, to the growing efforts by authoritarian governments to control what we see and say. The landscape can feel bleak. Exposing and articulating these problems is important, but so is envisioning and then building solutions. That’s where our podcast comes in. 

EFF's How to Fix the Internet podcast offers a better way forward. Through curious conversations with some of the leading minds in law and technology, EFF Executive Director Cindy Cohn and Activism Director Jason Kelley explore creative solutions to some of today’s biggest tech challenges. Our sixth season, which ran from May through September, featured: 

  • “Digital Autonomy for Bodily Autonomy” – We all leave digital trails as we navigate the internet—records of what we searched for, what we bought, who we talked to, where we went or want to go in the real world—and those trails usually are owned by the big corporations behind the platforms we use. But what if we valued our digital autonomy the way that we do our bodily autonomy? Digital Defense Fund Director Kate Bertash joined Cindy and Jason to discuss how creativity and community can align to center people in the digital world and make us freer both online and offline. 
  • “Love the Internet Before You Hate On It” – There’s a weird belief out there that tech critics hate technology. But do movie critics hate movies? Do food critics hate food? No! The most effective, insightful critics do what they do because they love something so deeply that they want to see it made even better. Molly White—a researcher, software engineer, and writer who focuses on the cryptocurrency industry, blockchains, web3, and other tech—joined Cindy and Jason to discuss working toward a human-centered internet that gives everyone a sense of control and interaction, one open to all in the way that Wikipedia was (and still is) for her and so many others: not just as a static knowledge resource, but as something in which we can all participate. 
  • “Why Three is Tor’s Magic Number” – Many in Silicon Valley, and in U.S. business at large, seem to believe innovation springs only from competition, a race to build the next big thing first, cheaper, better, best. But what if collaboration and community breed innovation just as well as adversarial competition? Tor Project Executive Director Isabela Fernandes joined Cindy and Jason to discuss the importance of not just accepting technology as it’s given to us, but collaboratively breaking it, tinkering with it, and rebuilding it together until it becomes the technology that we really need to make our world a better place. 
  • “Securing Journalism on the ‘Data-Greedy’ Internet” – Public-interest journalism speaks truth to power, so protecting press freedom is part of protecting democracy. But what does it take to digitally secure journalists’ work in an environment where critics, hackers, oppressive regimes, and others seem to have the free press in their crosshairs? Freedom of the Press Foundation Digital Security Director Harlo Holmes joined Cindy and Jason to discuss the tools and techniques that help journalists protect themselves and their sources while keeping the world informed. 
  • “Cryptography Makes a Post-Quantum Leap” – The cryptography that protects our privacy and security online relies on the fact that even the strongest computers will take essentially forever to do certain tasks, like factoring the products of large primes and finding discrete logarithms—the hard problems underpinning RSA encryption, Diffie-Hellman key exchanges, and elliptic-curve cryptography. But what happens when those problems—and the cryptography they underpin—are no longer infeasible for computers to solve? Will our online defenses collapse? Research and applied cryptographer Deirdre Connolly joined Cindy and Jason to discuss not only how post-quantum cryptography can shore up those existing walls but also help us find entirely new methods of protecting our information. 
  • “Finding the Joy in Digital Security” – Many people approach digital security training with furrowed brows, as an obstacle to overcome. But what if learning to keep your tech safe and secure was consistently playful and fun? People react better to learning and retain more knowledge when they’re having a good time. It doesn’t mean the topic isn’t serious—it’s just about intentionally approaching a serious topic with joy. East Africa digital security trainer Helen Andromedon joined Cindy and Jason to discuss making digital security less complicated, more relevant, and more joyful to real users, and encouraging all women and girls to take online safety into their own hands so that they can feel fully present and invested in the digital world. 
  • “Smashing the Tech Oligarchy” – Many of the internet’s thorniest problems can be attributed to the concentration of power in a few corporate hands: the surveillance capitalism that makes it profitable to invade our privacy, the lack of algorithmic transparency that turns artificial intelligence and other tech into impenetrable black boxes, the rent-seeking behavior that seeks to monopolize and mega-monetize an existing market instead of creating new products or markets, and much more. Tech journalist and critic Kara Swisher joined Cindy and Jason to discuss regulation that can keep people safe online without stifling innovation, creating an internet that’s transparent and beneficial for all, not just a collection of fiefdoms run by a handful of homogenous oligarchs. 
  • “Separating AI Hope from AI Hype” – If you believe the hype, artificial intelligence will soon take all our jobs, or solve all our problems, or destroy all boundaries between reality and lies, or help us live forever, or take over the world and exterminate humanity. That’s a pretty wide spectrum, and leaves a lot of people very confused about what exactly AI can and can’t do. Princeton Professor and “AI Snake Oil” co-author Arvind Narayanan joined Cindy and Jason to discuss how we get to a world in which AI can improve aspects of our lives from education to transportation—if we make some system improvements first—and how AI will likely work in ways that we barely notice but that help us grow and thrive. 
  • “Protecting Privacy in Your Brain” – Rapidly advancing "neurotechnology" could offer new ways for people with brain trauma or degenerative diseases to communicate, as the New York Times reported this month, but it also could open the door to abusing the privacy of the most personal data of all: our thoughts. Worse yet, it could allow manipulating how people perceive and process reality, as well as their responses to it—a Pandora’s box of epic proportions. Neuroscientist Rafael Yuste and human rights lawyer Jared Genser, co-founders of The Neurorights Foundation, joined Cindy and Jason to discuss how technology is advancing our understanding of what it means to be human, and the solid legal guardrails they're building to protect the privacy of the mind. 
  • “Building and Preserving the Library of Everything” – Access to knowledge not only creates an informed populace that democracy requires but also gives people the tools they need to thrive. And the internet has radically expanded access to knowledge in ways that earlier generations could only have dreamed of—so long as that knowledge is allowed to flow freely. Internet Archive founder and digital librarian Brewster Kahle joined Cindy and Jason to discuss how the free flow of knowledge makes all of us more free.
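The premise behind the post-quantum cryptography episode can be made concrete in a few lines: multiplying two primes is instant, while recovering them from the product by brute force scales badly with size. The sketch below is our own illustration, not anything from the show; the trial-division attack and the toy primes are chosen purely for demonstration, and real RSA moduli are vastly larger.

```python
# Illustration: RSA-style security rests on the gap between multiplying
# two primes (fast) and recovering them from the product (slow).

def factor(n):
    """Return the smallest prime factor of n by naive trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

# Multiplying is instant:
p, q = 1000003, 1000033  # two small primes, for demonstration only
n = p * q

# Undoing it by trial division takes on the order of sqrt(n) steps.
# Here that's about a million iterations; for the 2048-bit moduli used
# in real RSA, it would be astronomically many.
assert factor(n) == p
```

Shor’s algorithm, run on a sufficiently large quantum computer, would make factoring and discrete logarithms fast, which is exactly the scenario post-quantum cryptography is designed to survive.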

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

Josh Richman

Politicians Rushed Through An Online Speech “Solution.” Victims Deserve Better.

Earlier this year, both chambers of Congress passed the TAKE IT DOWN Act. This bill, while well-intentioned, gives powerful people a new legal tool to force online platforms to remove lawful speech that they simply don't like. 

The bill, sponsored by Senate Commerce Chair Ted Cruz (R-TX) and Rep. Maria Salazar (R-FL), sought to speed up the removal of troubling online content: non-consensual intimate imagery (NCII). The spread of NCII is a serious problem, as is digitally altered NCII, sometimes called “deepfakes.” That’s why 48 states have specific laws criminalizing the distribution of NCII, in addition to the long-existing defamation, harassment, and extortion statutes—all of which can be brought to bear against those who abuse NCII. Congress can and should protect victims of NCII by enforcing and improving these laws. 

Unfortunately, TAKE IT DOWN takes another approach: it creates an unneeded notice-and-takedown system that threatens free expression, user privacy, and due process, without meaningfully addressing the problem it seeks to solve. 

While Congress was still debating the bill, EFF, along with the Center for Democracy & Technology (CDT), Authors Guild, Demand Progress Action, Fight for the Future, Freedom of the Press Foundation, New America’s Open Technology Institute, Public Knowledge, Restore The Fourth, SIECUS: Sex Ed for Social Change, TechFreedom, and Woodhull Freedom Foundation, sent a letter to the Senate outlining our concerns with the proposal. 

First, TAKE IT DOWN’s removal provision applies to a much broader category of content—potentially any images involving intimate or sexual content—than the narrower NCII definitions found elsewhere in the law. We worry that bad-faith actors will use the law’s expansive definition to remove lawful speech that is not NCII and may not even contain sexual content. 

Worse, the law contains no protections against frivolous or bad-faith takedown requests. Lawful content—including satire, journalism, and political speech—could be wrongly censored. The law requires that apps and websites remove content within 48 hours or face significant legal risks. That ultra-tight deadline means that small apps and websites will have to comply so quickly that they won’t be able to investigate or verify claims. 

Finally, there are no legal protections for providers when they believe a takedown request was sent in bad faith to target lawful speech. TAKE IT DOWN is a one-way censorship ratchet, and its fast timeline discourages providers from standing up for their users’ free speech rights. 

This new law could lead to the use of automated filters that tend to flag legal content, from commentary to news reporting. Communications providers that offer users end-to-end encrypted messaging, meanwhile, may be served with notices they simply cannot comply with, given the fact that these providers can’t view the contents of messages on their platforms. Platforms could respond by abandoning encryption entirely in order to be able to monitor content, turning private conversations into surveilled spaces.

We asked for several changes to protect legitimate speech that is not NCII, and to include common-sense safeguards for encryption. Thousands of EFF members joined us by writing similar messages to their Senators and Representatives. That resulted in several attempts to offer common-sense amendments during the Committee process. 

However, Congress passed the bill without those needed changes, and it was signed into law in May 2025. The main takedown provisions of the bill will take effect in 2026. We’ll be pushing online platforms to be transparent about the content they take down because of this law, and will be on the watch for takedowns that overreach and censor lawful speech. 

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

India McKinney

How to Sustain Privacy & Free Speech

The world has been forced to bear the weight of billionaires and politicians who salivate over making tech more invasive, more controlling, and more hostile. That's why EFF’s mission for your digital rights is crucial, and why your support matters more than ever. You can fuel the fight for privacy and free speech with as little as $5 or $10 a month:

Join EFF

Become a Monthly Sustaining Donor

When you donate by December 31, your monthly support goes even further by unlocking bonus Year-End Challenge grants! With your help, EFF can receive up to seven grants that increase in size as the number of supporters grows (check our progress on the counter). Many thanks to EFF’s Board of Directors for creating the 2025 challenge fund.

The EFF team makes every dollar count. EFF members giving just $10 or less each month raised $400,000 for digital rights in the last year. That funds court motions, software development, educational campaigns, and investigations for the public good every day. EFF member support matters, and we need you.

📣 Stand Together: That’s How We Win 📣

You can help EFF hold corporations and authoritarians to account. We fight for tech users in the courts and we lobby and educate lawmakers, all while developing free privacy-enhancing tech and educational resources so people can protect themselves now. Your monthly donation will keep us going strong in this pivotal moment.

Get your choice of free gear when you join EFF!

Your privacy online and your right to express yourself are powerful—and that’s why authoritarians work so viciously to take them away. But together, we can make sure technology remains a tool for the people. Become a monthly Sustaining Donor or give a one-time donation of any size by December 31 and unlock additional Year-End Challenge grants!

Give Today

Unlock Year-End Challenge Grants

Already an EFF Member? Help Us Spread the Word!

EFF Members have carried the movement for privacy and free expression for decades. You can help move the mission even further! Here’s some sample language that you can share with your networks:


We need to stand together and ensure technology works for us, not against us. Donate any amount to EFF by Dec 31, and you'll help unlock challenge grants! https://eff.org/yec
Bluesky | Facebook | LinkedIn | Mastodon
(more at eff.org/social)

_________________

EFF is a member-supported U.S. 501(c)(3) organization. We’re celebrating TWELVE YEARS of top ratings from the nonprofit watchdog Charity Navigator! Your donation is tax-deductible as allowed by law.

Maggie Kazmierczak

AI Police Reports: Year In Review

In 2024, EFF wrote our initial blog post about what could go wrong when police let AI write police reports. Since then, the technology has proliferated at a disturbing rate. Why? The most popular generative AI tool for writing police reports is Axon’s Draft One, and Axon also happens to be the largest provider of body-worn cameras to police departments in the United States. As we’ve written, companies are increasingly bundling their products to make it easier for police to buy more technology than they may need or than the public feels comfortable with. 

We have good news and bad news. 

Here’s the bad news: AI-written police reports are still unproven, opaque, and downright irresponsible—especially when the criminal justice system, informed by police reports, is deciding people’s freedom. The King County prosecuting attorney’s office in Washington state barred police from using AI to write police reports. As their memo read, “We do not fear advances in technology – but we do have legitimate concerns about some of the products on the market now... AI continues to develop and we are hopeful that we will reach a point in the near future where these reports can be relied on. For now, our office has made the decision not to accept any police narratives that were produced with the assistance of AI.” 

In July of this year, EFF published a two-part report on how Axon designed Draft One to defy transparency. Police upload their body-worn camera’s audio into the system, the system generates a report that the officer is expected to edit, and then the officer exports the report. But when they do that, Draft One erases the initial draft, and with it any evidence of what portions of the report were written by AI and what portions were written by an officer. That means that if an officer is caught lying on the stand – as shown by a contradiction between their courtroom testimony and their earlier police report – they could point to the contradictory parts of their report and say, “the AI wrote that.” Draft One is designed to make it hard to disprove that. 

In this video of a roundtable discussion about Draft One, Axon’s senior principal product manager for generative AI is asked (at the 49:47 mark) whether or not it’s possible to see after-the-fact which parts of the report were suggested by the AI and which were edited by the officer. His response (bold and definition of RMS added): 

“So we don’t store the original draft and that’s by design and that’s really because the last thing we want to do is create more disclosure headaches for our customers and our attorney’s offices—so basically the officer generates that draft, they make their edits, if they submit it into our Axon records system then that’s the only place we store it, if they copy and paste it into their third-party RMS [records management system] system as soon as they’re done with that and close their browser tab, it’s gone. It’s actually never stored in the cloud at all so you don’t have to worry about extra copies floating around.”

Yikes! 

All of this obfuscation also makes it incredibly hard for people outside police departments to figure out if their city’s officers are using AI to write reports–and even harder to use public records requests to audit just those reports. That’s why this year EFF also put out a comprehensive guide to help the public make their records requests as tailored as possible to learn about AI-generated reports. 

Ok, now here’s the good news: People who believe AI-written police reports are irresponsible and potentially harmful to the public are fighting back. 

This year, two states passed bills that are an important first step in reining in AI police reports. Utah’s SB 180 mandates that police reports created in whole or in part by generative AI carry a disclaimer that the report contains content generated by AI. It also requires officers to certify that they checked the report for accuracy. California’s SB 524 went even further. It requires police to disclose, on the report itself, whether AI was used to author the report in whole or in part. Further, it bans vendors from selling or sharing the information a police agency provided to the AI. The bill also requires departments to retain the first draft of the report so that judges, defense attorneys, or auditors can readily see which portions of the final report were written by the officer and which were written by the computer.

In the coming year, anticipate many more states joining California and Utah in regulating, or perhaps even banning, police from using AI to write their reports. 

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

Matthew Guariglia

The Fight Against Presidential Targeting of Law Firms: 2025 in Review

The US legal profession was just one of the pillars of American democracy that were targeted in the early days of the second Trump administration. At EFF, we were proud to publicly and loudly support the legal profession and, most importantly, continue to do our work challenging the government’s erosion of digital rights—work that became even more critical as many law firms shied away from pro bono work.

For those who don’t know: pro bono work is work that for-profit law firms undertake for the public good. This usually means providing legal counsel to clients who desperately need it but cannot afford it. It’s a vital practice, since nonprofits like EFF don’t have the same capacity, resources, or expertise as a classic white-shoe law firm. It’s also mutually beneficial: law firms and nonprofits have different experience and areas of expertise that can supplement each other’s work.

A little more than a month into the new administration, President Trump began retaliating against large law firms that had supported investigations against him or litigated against his interests, representing clients either challenging his policies during his first term or defending the outcome of the 2020 election, among other cases. The retaliation quickly spread to other firms: firms lost government contracts and had security clearances stripped from their lawyers. Twenty large law firms were threatened by the Equal Employment Opportunity Commission over their DEI policies. Individual lawyers were also targeted. The attack on the legal profession was memorialized as official policy in the March 22, 2025 presidential memo Preventing Abuses of the Legal System and the Federal Court.

Although many of the targeted firms shockingly and regrettably capitulated, a few law firms sued to undo the actions against them. EFF was eager to support them, joining amicus briefs in each case. Over 500 law firms across the country joined supportive amicus briefs as well.

We also thought it critically important to publicly state our support for the targeted law firms and to call out the administration’s actions as violating the rule of law. So we did. We actually expected numerous law firms and legal organizations to also issue statements. But no one else did. EFF was thus the very first non-targeted legal organization in the country, either law firm or nonprofit, to publicly oppose the administration’s attack on the independence of the legal profession. Fortunately, within the week, firms started to speak up as well. As did the American Bar Association.

In the meantime, EFF’s legal work has become even more critical as law firms have reportedly pulled back on their pro bono hours since the administration’s attacks. Indeed, recognizing the extraordinary need, we ramped up our litigation, including cases against the federal government: suing DOGE for stealing Americans’ data, suing the State Department for chilling visa holders’ speech by surveilling and threatening to surveil their social media posts, and seeking records of the administration’s demands that online platforms remove ICE oversight apps.

And we’re going to keep on going in 2026 and beyond.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

David Greene

2025 in Review

Each December we take a moment to publish a series of blog posts that look back at what we’ve accomplished in fighting for your rights and privacy over the past 12 months. But this year I’ve been thinking not just about the past 12 months, but also about the past 25 years I’ve spent at EFF. As many folks know, I’ve decided to pass the leadership torch and will leave EFF in 2026, so this will be the last time I write one of these annual reviews. It’s bittersweet, but I’m filled with pride, especially about how we stick with fights over the long run.

EFF has come a long way since I joined in 2000. In so many ways, the work and reputation we have built laid the groundwork for years like 2025—when freedom, justice, and innovation were under attack from many directions at once, with tech unfortunately at the center of many of them. As a result, we launched our Take Back Ctrl campaign to put the focus on fighting back. 

In addition to the specific issues we address in this year-end series of blog posts, EFF brought our legal expertise to several challenges to the Trump Administration’s attacks on privacy, free speech, and security, including directly bringing two cases against the government and filing multiple amicus briefs in others. In some ways, that’s not new: we’ve worked in the courts to hold the government accountable all the way back to our founding in 1990.

In this introductory blog post, however, I want to highlight two topics that attest to our long history of advocacy. The first is our battle against the censorship and privacy nightmares that come from requirements that internet users submit to age verification. We’ve long known that age verification technologies, which aim to block young people from viewing or sharing information that the government deems “harmful” or “offensive,” end up becoming tools of censorship. They often rely on facial recognition and other techniques that have unacceptable levels of inaccuracy and that create security risks. Ultimately, they are surveillance systems that chill access to vital online communities and resources, and burden the expressive rights of adults and young people alike. 

The second is automated license plate readers (ALPRs), which serve as a mass surveillance network tracking our locations as we go about our day. We sued over this technology in 2013, demanding public access to records about its use, and ultimately won at the California Supreme Court. But 2025 is the year that the general public began to understand just how much information is being collected and used by governments and private entities alike, and to recognize the dangers that creates. Our investigations team filed another round of public records requests, revealing racist searches done by police. And 12 years after our first lawsuit, our lawyers filed another case, this time directly challenging the ALPR policies of San Jose, California. In addition, our activists have been working with people in municipalities across the country who want to stop the use of ALPRs in their communities. Groups in Austin, Texas, for example, worked hard to get their city to reject a new contract for these cameras.

These are just two of the many issues that have engaged our lawyers, activists, and technologists this year. But they show how we dig in for the long run and are ready when small issues become bigger ones.

The more than 100 people who work at EFF spent this last year proving their mettle in battles, many of which are nowhere near finished. But we will push on, and when those issues breach public consciousness, we’ll be ready.    

We can only keep doggedly working on these issues year after year because of you, our members and supporters. You engage on these issues, you tell us when something is happening in your town, and your donations power everything we do. This may be my last end-of-the-year blog post, but thanks to you, EFF is here to stay.  We’re strong, we’re ready, and we know how to stick with things for the long run. Thanks for holding us up.    

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

Cindy Cohn

Online Gaming’s Final Boss: The Copyright Bully

Since the earliest days of computer games, people have tinkered with the software to customize their own experiences or share their vision with others. From the dad who changed a game’s male protagonist to a girl so his daughter could see herself in it, to the developers who got their start in modding, games have been a medium where you don’t just consume a product; you participate in and interact with culture.

For decades, that participatory experience was a key part of one of the longest-running video games still in operation: EverQuest. Players had the official client, acquired lawfully from EverQuest’s developers, and modders figured out how to enable those clients to communicate with their own servers and then modify their play experience—creating new communities along the way.

EverQuest’s copyright owners implicitly blessed all this. But the current owners, a private equity firm called Daybreak, want to end that independent creativity. They are using copyright claims to threaten modders who want to customize the EverQuest experience to suit a different playstyle, running their own servers where things work the way they want. 

One project in particular is in Daybreak’s crosshairs: “The Hero’s Journey” (THJ). Daybreak claims THJ has infringed its copyrights in EverQuest visuals and characters, cutting into its bottom line.

Ordinarily, when a company wants to remedy some actual harm, its lawyers will start with a cease-and-desist letter and potentially pursue a settlement. But if the goal is intimidation, a rightsholder is free to go directly to federal court and file a complaint. That’s exactly what Daybreak did, using that shock-and-awe approach to cow not only The Hero’s Journey team, but unrelated modders as well.

Daybreak’s complaint seems to have dazzled the judge in the case by presenting side-by-side images of dragons and characters that look identical in the base game and when using the mod, without explaining that these images are the ones provided by EverQuest’s official client, which players have lawfully downloaded from the official source. The judge wound up short-cutting the copyright analysis and issuing a ruling that has proven devastating to the thousands of players who are part of EverQuest modding communities.

Daybreak and the developers of The Hero’s Journey are now in private arbitration, and Daybreak has wasted no time in sending that initial ruling to other modders. The order doesn’t bind anyone who’s unaffiliated with The Hero’s Journey, but it’s understandable that modders who are in it for fun and community would cave to the implied threat that they could be next.

As a result, dozens of fan servers have stopped operating. Daybreak has also persuaded the maintainers of the shared server emulation software that most fan servers rely upon, EQEmulator, to adopt terms of service that essentially ban any but the most negligible modding. The terms also provide that “your operation of an EQEmulator server is subject to Daybreak’s permission, which it may revoke for any reason or no reason at any time, without any liability to you or any other person or entity. You agree to fully and immediately comply with any demand from Daybreak to modify, restrict, or shut down any EQEmulator server.” 

Sadly, this is not an uncommon story in fanspaces—from the dustup over changes to the Dungeons and Dragons open gaming license to the “guidelines” issued by CBS for Star Trek fan films, we see new generations of owners deciding to alienate their most avid fans in exchange for more control over their new property. It often seems counterintuitive—fans are creating new experiences, for free, that encourage others to get interested in the original work.

Daybreak can claim a shameful victory: it has imposed unilateral terms on the modding community that are far more restrictive than what fair use and other user rights would allow. In the process, it is alienating the very people it should want to cultivate as customers: hardcore EverQuest fans. If it wants fans to continue to invest in making its games appeal to broader audiences and serve as testbeds for game development and sources of goodwill, it needs to give the game’s fans room to breathe and to play.

If you’ve been a target of Daybreak’s legal bullying, we’d love to hear from you; email us at info@eff.org.

Corynne McSherry

Speaking Freely: Sami Ben Gharbia

Interviewer: Jillian York

Sami Ben Gharbia is a Tunisian human rights campaigner, blogger, writer, and freedom of expression advocate. He founded Global Voices Advocacy, and is the co-founder and current publisher of the collective media organization Nawaat, which won the EFF Award in 2011.

Jillian York: So first, what is your personal definition, or how do you conceptualize freedom of expression?

Sami Ben Gharbia: So for me, freedom of expression is mainly about being human. I love the definition Arab philosophers gave to human beings: we call it the “speaking animal.” That’s the definition in logic, the science of logic the Greeks meditated on, which defines a human being as a speaking animal. Later on, Descartes, the French philosopher, expressed it in the cogito: I think, therefore I am. So the act of speaking is an act of thinking, and it’s what makes us human. That’s the definition I love about freedom of expression, because it’s the condition, the bottom line, of our being human.

JY: I love that. Is that something that you learned about growing up?

SBG: You mean, like, reading it or living?

JY: Yeah, how did you come to this knowledge?

SBG: I read a little bit of logic, the science of logic, and this is the definition the Arabs give to define what a human being is; to differentiate us from plants or animals, or, I don’t know, rocks, et cetera. So humans are speaking animals.

JY: Oh, that's beautiful. 

SBG: And by speaking, in the Arabic definition of the word, speaking is thinking. It’s equal to thinking.

JY: At what point, growing up, did you realize…what was the turning point for you growing up in Tunisia and realizing that protecting freedom of expression was important?

SBG: Oh, I think... I was born in 1967, and I grew up under the authoritarian regime of the “father” of the Tunisian nation, Bourguiba, the first president of Tunisia, who won us independence from France. And during the 80s, it was very hard to find even books about philosophy, ideology, nationalism, Islamism, Marxism, etc. To us, almost everything was forbidden. You needed to hide the books you smuggled from France or from libraries in other cities. You always hid what you were reading, because you did not want to expose your identity as someone politically engaged, an activist. From that point, I realized how important freedom of expression is, because if you are not allowed even to read or to buy or to exchange books that are deemed controversial or politically unacceptable under an authoritarian regime, that’s where the fight for freedom of expression should be at the forefront of any other fight. That’s the fight we need to engage in to secure other rights and freedoms.

JY: You speak a number of languages. At what point did you start reading and exploring languages other than the one you grew up speaking?

SBG: Oh, well, we learn Arabic, French, and English in school, in primary and secondary school, so these are the languages we take from school and from our readings and interactions with other people in Tunisia. But my first experience living in a country that speaks a language I didn’t know was in Iran. I spent, in total, a year and a half in Iran, where I started to learn a fourth language that I really intended to use. It’s not a Latin language; it’s a distinct language, although it uses almost the same letters and alphabet, with some differences in pronunciation and writing. But it was easy for a native Arabic-speaking Tunisian to learn Farsi, due to the familiarity with the alphabet and with the pronunciation of most of its letters. So that’s the first case where I was confronted with a foreign language: it was Iran. And then during my exile in the Netherlands, I was confronted by another family of languages, Dutch, from the Germanic family; that’s the fifth language I learned, in the Netherlands.

JY: Wow. And how do you feel that language relates to expression? For you?

SBG: I mean…language is another world. It’s another universe. Because language carries culture, carries knowledge, carries history, customs. It’s a living universe. And once you learn to speak a new language, you actually embrace another culture. You become more open in understanding and accepting differences between cultures, and I think that’s how it makes your openness much more elastic. You accept other cultures more, other identities, and then you are not afraid anymore. You’re not scared anymore of other identities, because I think the problem of civilizational crisis or conflict starts from ignorance: we don’t know the others, we don’t know the language, we don’t know the customs, the culture, the heritage, the history. That’s why we are scared of other people. So language is the first window, let’s say, to other identities and to acceptance of other people.

JY: And how many languages do you speak now?

SBG: Oh, well, I don’t know. Five for sure, but since I moved into exile a second time, now in Spain, I started learning Spanish, and I’ve been traveling a lot in Italy, so I started learning some Italian. But it is confusing, because both are Latin languages and they share a lot of words. It’s confusing, but it’s fun. I’m not that young anymore; I’m 58 years old, so it’s not easy for someone my age to learn a new language quickly, especially when you mix up languages from the same Latin family.

JY: Oh, that's beautiful, though. I love that. All right, now I want to dig into the history of [2011 EFF Award winner] Nawaat. How did it start?

SBG: So Nawaat started as a forum in the early 2000s, even before the phenomenon of blogs. Blogs started later on, maybe 2003 or 2004, when they became the main tools for expression. Before that, we had forums where people debated ideas, anything. So it started as a forum, multiple forums hosted on the same domain name, Nawaat.org, and little by little, we adopted new technology. We migrated the database from the forum to a CMS, built a new website, and then built the site as a collective blog where people could express themselves freely, in a political context where, as in many other countries, a lot of people expressed themselves through online platforms because they were not allowed to express themselves freely through television, radio, newspapers, or magazines in their own country.

So it started mainly as an exiled media. It wasn’t journalistically oriented or rooted in journalism. It was more of a platform to give voice to the diaspora, mainly the exiled Tunisians living in France, England, and elsewhere. We published human rights reports, released news about the situation in Tunisia, supported the opposition, and produced videos to counter the propaganda machine of the former president, Ben Ali. So that’s how it started, and it evolved little by little with the changes in the tech industry: from forums to blogs, then to a CMS, and later on to social media accounts and pages. And creating it was not my decision alone. A friend of mine and I were living in exile, and we said, “why not start a new platform to support the opposition and this movement in Tunisia?” That’s how we did it. At first it was fun, like a hobby. It wasn’t our work; I was working somewhere else, and he was doing something else. It was our hobby, our pastime. And little by little, it became our only job, actually.

JY: And then, okay, so let's come to 2011. I want to hear now your perspective 14 years later. What role do you really feel that the internet played in Tunisia in 2011?

SBG: Well, it was a hybrid tool for liberation. We know the context of the internet freedom policy from the US; we know the evolution of Western interference in the digital sphere to topple governments deemed unfriendly, et cetera. Tunisia was a friend of the West, very friendly with France, the United States, and Europe. They loved the dictatorship in Tunisia, in a way, because it secured the border and secured the country from, by then, the Islamist movement, et cetera. So the internet did play a role as a platform to spread information, to highlight the human rights abuses taking place in Tunisia, and to counter the narrative that was then being manipulated by government agencies, state agencies, the public broadcast channels, the television news agency, etc.

And I think we managed the big impact of the internet, the blogs of that time, and platforms like Nawaat. We adopted English. It was the first time the Tunisian opposition used English in its discourse, with the objective of bridging the gap between the traditional support for the opposition and human rights in Tunisia, which was mainly coming from French NGOs and human rights organizations, and international support, support not coming only from the usual suspects of Human Rights Watch, Amnesty International, Freedom House, et cetera. We wanted to broaden the spectrum of support and to reach researchers, activists, people writing about freedom elsewhere. So we managed to break the traditional chain of support between human rights organizations and human rights activists in Tunisia, to broaden it, and to reach other audiences that were not really in touch with what was going on in Tunisia. I think that’s how the internet helped in the field of international support for the struggle in Tunisia and within Tunisia.

The impact was, I think, important in raising awareness about human rights abuses in the country. For people who were not really politically knowledgeable about the situation, due to censorship and the lack of access to information in Tunisia, the internet helped spread knowledge about the situation and helped speed the process of the unrest, actually. So I think these are the two most important impacts within the country: broadening the spectrum of people reached by the discourse of political engagement and activism, and speeding the process of consciousness and then of action in the street. That’s how I think the internet helped. That’s great, but it wasn’t the main tool. The main tool was really people on the ground, and maybe people who didn’t have access to the internet at all.

JY: That makes sense. So what about the other work that you were doing around that time with the Arabloggers meetings and Global Voices and the Arab Techies network. Tell us about that.

SBG: Okay, so my position was founding director of Global Voices Advocacy; I was hired to found this arm of advocacy within Global Voices. That gave me the opportunity to understand other spheres: linguistic spheres, cultural spheres. It went beyond Tunisia, beyond the Arab world and the region. I was in touch with activists from all over the world. By activists, I mean digital activists, bloggers living in Latin America or Asia or Eastern Europe, et cetera, because one of the projects I worked on was Threatened Voices, a map of all the people who were targeted because of their online activities. That gave me the opportunity to get in touch with a lot of activists.

And then we organized the first advocacy meeting. It was in Budapest, and we managed to invite some 40 or 50 activists from all over the world: from China, Hong Kong, Latin America, the Arab world, Eastern Europe, and Africa. That broadened my understanding of the freedom of expression movement and of how technology was being used to foster human rights online, and then of the development of blog aggregators around the world, mainly in the Arab world; each country had its own blog aggregator. That helped me understand those worlds, as did Global Voices, because Global Voices was bridging the gap between what was being written elsewhere and the English-speaking world, through its translation effort, and vice versa. The role played by Global Voices and Global Voices Advocacy made the distance between all those blogospheres feel very small. We were very close to the blogosphere movements in Egypt, Morocco, Syria, and elsewhere.

And that’s how Alaa Abd El Fattah, Manal Bahey El-Din Hassan, and I started thinking about establishing the Arab Techies collective, because of the needs we identified: there was a gap, a lack of communication between pure techies, the people writing code, building software, translating tools and even online language into Arabic, and the people using those tools: the bloggers, the freedom of expression advocates, et cetera. And because some needs were not really being met in terms of technology, we thought that bringing these two worlds together, techies and activists, would help us build new tools, translate tools, and make them available to the broader community of internet activists. That’s how the Arab Techies collective was born in Cairo, and then came the Arabloggers meetings, twice in Beirut, and then a third time in Tunisia, after the revolution.

It was a moment of momentum for us, because I think it was the first time, in Beirut, that we brought together bloggers from all the Arab countries. It was like a dream that had seemed unimaginable at a certain point, but we made it happen. And then what they call the Arab revolution happened, and we lost contact with each other, because everybody was really busy with his or her own country’s affairs. Alaa was fully engaged in Egypt; myself, I came back to Tunisia and was fully engaged there. So we lost contact, because all of us were dealing with a lot of trouble in our own countries. Of the bloggers who attended the Arab bloggers meetings, a few were arrested, a few were killed; Bassel was in prison; people were in exile. So we lost that connection and those conferences that brought us together. But then we’ve seen SMEX filling that gap and taking over the work started by the Arab Techies and the Arab bloggers conferences.

JY: We did have the fourth one in 2014 in Amman. But it was not the same. Okay, moving forward, EFF recently published this blog post reflecting on what had just happened to Nawaat, when you and I were in Beirut together a few weeks ago. Can you tell me what happened?

SBG: What happened is that they froze the work of Nawaat, legally, although the move wasn’t legal, because for our part we were respecting the law in Tunisia. They stopped Nawaat’s activity for one month, citing an article of the NGO legal framework under which the government can suspend an NGO’s work if it doesn’t meet certain legal conditions. For them, Nawaat didn’t provide documentation requested by the government, which is a total lie, because we always submit all documentation to the government on time. So they stopped us from doing our job as what we call in Tunisia an associative media.

It’s not a company, it’s not a business, it’s not a startup. It is an NGO that manages the website and the media, and now it has other activities. We have the main website, but we also have a festival, a three-day festival at our headquarters. We have offline debates where we bring actors, civil society, activists, and politicians together to discuss important issues in Tunisia. We have a quality print magazine that is distributed and sold in Tunisia. We have a media innovation incubation program where we support people building projects in journalism and technology. So we have a set of offline projects that stopped for a month, and we also stopped publishing anything on the website and all our social media accounts. And Nawaat is not the only one. They also froze the work of other NGOs, like the Tunisian Association of Democratic Women, which gives real support to women in Tunisia; the Tunisian Forum for Social and Economic Rights, a very important NGO supporting grassroots movements in Tunisia; and Aswat Nissa, another NGO supporting women in Tunisia. So they targeted impactful NGOs.

So now what? It’s not an exception, and we are very grateful for the wave of support we got from fellow Tunisian citizens, and also from friendly NGOs like EFF and others who wrote about the case. This is the context in which we are living, and we are afraid they will go for an outright ban of Nawaat in the future. That is the worst-case scenario we are preparing ourselves for: we might see it close its doors and stop all the offline activities taking place in Tunisia. Of course, the website will remain. We need to find a way to keep producing, although it will be really risky for our on-the-ground journalists, video reporters, and newsroom team, but we need to find a solution to keep the website alive. Going back to our exiled media model is a very probable scenario in the future, and we will keep on fighting.

JY: Yes, of course. I'm going to ask the final question. We always ask who someone’s free speech hero is, but I’m going to frame it differently for you, because you're somebody who influenced a lot of the way that I think about these topics. And so who's someone that has inspired you or influenced your work?

SBG: Although I started before the launch of WikiLeaks, for me Julian Assange was the concretization of the radical transparency movement that we saw. And for me, he is one of the heroes who really shaped a decade of transparency journalism and impacted not only the journalism industry itself but even the established mainstream media, such as the New York Times, the Washington Post, Der Spiegel, et cetera. WikiLeaks partnered with big media, but not only big media, also with small, independent newsrooms in the Global South. So for me, Julian Assange is an icon we shouldn’t forget. And he is an inspiration in the way he used technology to fight against big tech, state and spy agencies, and war crimes.

Jillian C. York

Fair Use is a Right. Ignoring It Has Consequences.

1 week 2 days ago

Fair use is not just an excuse to copy—it’s a pillar of online speech protection, and disregarding it in order to lash out at a critic should have serious consequences. That’s what we told a federal court in Channel 781 News v. Waltham Community Access Corporation, our case fighting copyright abuse on behalf of citizen journalists.

Waltham Community Access Corporation (WCAC), a public access cable station in Waltham, Massachusetts, records city council meetings on video. Channel 781 News (Channel 781), a group of volunteers who report on the city council, curates clips from those recordings for its YouTube channel, along with original programming, to spark debate on issues like housing and transportation. WCAC sent a series of takedown notices under the Digital Millennium Copyright Act (DMCA), accusing Channel 781 of copyright infringement. That led to YouTube deactivating Channel 781’s channel just days before a critical municipal election. Represented by EFF and the law firm Brown Rudnick LLP, Channel 781 sued WCAC for misrepresentations in its takedown notices under an important but underutilized provision of the DMCA.

The DMCA gives copyright holders a powerful tool to take down other people’s content from platforms like YouTube. The “notice and takedown” process requires only an email, or filling out a web form, in order to accuse another user of copyright infringement and have their content taken down. And multiple notices typically lead to the target’s account being suspended, because doing so helps the platform avoid liability. There’s no court or referee involved, so anyone can bring an accusation and get a nearly instantaneous takedown.

Of course, that power invites abuse. Because filing a DMCA infringement notice is so easy, there’s a temptation to use it at the drop of a hat to take down speech that someone doesn’t like. To prevent that, before sending a takedown notice, a copyright holder has to consider whether the use they’re complaining about is a fair use. Specifically, the copyright holder needs to form a “good faith belief” that the use is not “authorized by the law,” such as through fair use.

WCAC didn’t do that. They didn’t like Channel 781 posting short clips from city council meetings recorded by WCAC as a way of educating Waltham voters about their elected officials. So WCAC fired off DMCA takedown notices at many of Channel 781’s clips that were posted on YouTube.

WCAC claims they considered fair use, because a staff member watched a video about it and discussed it internally. But WCAC ignored three of the four fair use factors. WCAC ignored that their videos had no creativity, being nothing more than records of public meetings. They ignored that the clips were short, generally including one or two officials’ comments on a single issue. They ignored that the clips caused WCAC no monetary or other harm, beyond wounded pride. And they ignored facts they already knew, and that are central to the remaining fair use factor: by excerpting and posting the clips with new titles, Channel 781 was putting its own “spin” on the material, in other words, transforming it. All of these facts support fair use.

Instead, WCAC focused only on the fact that the clips they targeted were not altered further or put into a larger program. Looking at just that one aspect of fair use isn’t enough, and changing the fair use inquiry to reach the result they wanted is hardly the way to reach a “good faith belief.”

That’s why we’re asking the court to rule that WCAC’s conduct violated the law and that they should pay damages. Copyright holders need to use the powerful DMCA takedown process with care, and when they don’t, there needs to be consequences.

Mitch Stoltz

Stand Together to Protect Democracy

1 week 2 days ago

What a year it’s been. We’ve seen technology unfortunately misused to supercharge the threats facing democracy: dystopian surveillance, attacks on encryption, and government censorship. These aren’t abstract dangers. They’re happening now, to real people, in real time.

EFF’s lawyers, technologists, and activists are pushing back. But we need you in this fight.

JOIN EFF TODAY!

MAKE A YEAR END DONATION—HELP EFF UNLOCK CHALLENGE GRANTS!

If you donate to EFF before the end of 2025, you’ll help fuel the legal battles that defend encryption, the tools that protect privacy, and the advocacy that stops dangerous laws—and you’ll help unlock up to $26,200 in challenge grants. 

📣 Stand Together: That's How We Win 📣

The past year confirmed how urgently we need technologies that protect us, not surveil us. EFF has been in the fight every step of the way, thanks to support from people like you.

Get free gear when you join EFF!

This year alone EFF:

  • Launched a resource hub to help users understand and fight back against age verification laws.
  • Challenged San Jose's unconstitutional license plate reader database in court.
  • Sued to demand answers when ICE-spotting apps were mysteriously taken offline.
  • Launched Rayhunter to detect cell site simulators.
  • Pushed back hard against the EU’s “Chat Control” proposal that would break encryption for millions.

After 35 years of defending digital freedoms, we know what's at stake: we must protect your ability to speak freely, organize safely, and use technology without surveillance.

We have opportunities to win these fights, and you make each victory possible. Donate to EFF by December 31 and help us unlock additional grants this year!

Already an EFF Member? Help Us Spread the Word!

EFF Members have carried the movement for privacy and free expression for decades. You can help move the mission even further! Here’s some sample language that you can share with your networks:


We need to stand together and ensure technology works for us, not against us. Donate any amount to EFF by Dec 31, and you'll help unlock challenge grants! https://eff.org/yec
Bluesky | Facebook | LinkedIn | Mastodon
(more at eff.org/social)

_________________

EFF is a member-supported U.S. 501(c)(3) organization. We’re celebrating TWELVE YEARS of top ratings from the nonprofit watchdog Charity Navigator! Your donation is tax-deductible as allowed by law.

Christian Romero

Local Communities Are Winning Against ALPR Surveillance—Here’s How: 2025 in Review

1 week 3 days ago

Across ideologically diverse communities, 2025 campaigns against automated license plate reader (ALPR) surveillance kept winning. From Austin, Texas to Cambridge, Massachusetts to Eugene, Oregon, successful campaigns combined three practical elements: a motivated political champion on city council, organized grassroots pressure from affected communities, and technical assistance at critical decision moments.

The 2025 Formula for Refusal

  • Institutional Authority: Council members leveraging "procurement power"—local democracy's most underutilized tool—to say no. 
  • Community Mobilization: A base that refuses to debate "better policy" and demands "no cameras." 
  • Shared Intelligence: Local coalitions utilizing shared research on contract timelines and vendor breaches.

Practical Wins Over Perfect Policies

In 2025, organizers embraced the "ugly" win: prioritizing immediate contract cancellations over the "political purity" of perfect privacy laws. Procurement fights are often messy, bureaucratic battles rather than high-minded legislative debates, but they stop surveillance where it starts—at the checkbook. In Austin, more than 30 community groups built a coalition that forced a contract cancellation, achieving via purchasing power what policy reform often delays. 

In Hays County, Texas, the victory wasn't about a new law, but a contract termination. Commissioner Michelle Cohen grounded her vote in vendor accountability, explaining: "It's more about the company's practices versus the technology." These victories might lack the permanence of a statute, but every camera turned off built a culture of refusal that made the next rejection easier. This was the organizing principle: take the practical win and build on it.

Start with the Harm

Winning campaigns didn't debate technical specifications or abstract privacy principles. They started with documented harms that surveillance enabled. EFF's research showing police used Flock's network to track Romani people with discriminatory search terms, surveil women seeking abortion care, and monitor protesters exercising First Amendment rights became the evidence organizers used to build power.

In Olympia, Washington, nearly 200 community members attended a counter-information rally outside city hall on Dec. 2. The DeFlock Olympia movement countered police department claims point-by-point with detailed citations about data breaches and discriminatory policing. By Dec. 3, cameras had been covered pending removal.

In Cambridge, the city council voted unanimously in October to pause Flock cameras after residents, the ACLU of Massachusetts, and Digital Fourth raised concerns. When Flock later installed two cameras "without the city's awareness," a city spokesperson called it a "material breach of our trust" and terminated the contract entirely. The unexpected camera installation itself became an organizing moment.

The Inside-Outside Game

The winning formula worked because it aligned different actors around refusing vehicular mass surveillance systems without requiring everyone to become experts. Community members organized neighbors and testified at hearings, creating political conditions where elected officials could refuse surveillance and survive politically. Council champions used their institutional authority to exercise "procurement power": the ability to categorically refuse surveillance technology.

To fuel these fights, organizers leveraged technical assets like investigation guides and contract timeline analysis. This technical capacity allowed community members to lead effectively without needing to become policy experts. In Eugene and Springfield, Oregon, Eyes Off Eugene organized sustained opposition over months while providing city council members political cover to refuse. "This is [a] very wonderful and exciting victory," organizer Kamryn Stringfield said. "This only happened due to the organized campaign led by Eyes Off Eugene and other local groups."

Refusal Crosses Political Divides

A common misconception collapsed in 2025: that surveillance technology can only be resisted in progressive jurisdictions. San Marcos, Texas, let its contract lapse after a 3-3 deadlock, with Council Member Amanda Rodriguez questioning whether the system showed "return on investment." Hays County commissioners in Texas voted to terminate. Small towns like Gig Harbor, Washington rejected proposals before deployment. 

As community partners like the Rural Privacy Coalition emphasize, "privacy is a rural value." These victories came from communities with different political cultures but shared recognition that mass surveillance systems weren't worth the cost or risk regardless of zip code.

Communities Learning From Each Other

In 2025, communities no longer needed to build expertise from scratch—they could access shared investigation guides, learn from victories in neighboring jurisdictions, and connect with organizers who had won similar fights. When Austin canceled its contract, it inspired organizing across Texas. When the Illinois Secretary of State's audit revealed illegal data sharing with federal immigration enforcement, Evanston used those findings to terminate 19 cameras.

The combination of different forms of power—institutional authority, community mobilization, and shared intelligence—was a defining feature of this year's most effective campaigns. By bringing these elements together, community coalitions have secured cancellations or rejections in nearly two dozen jurisdictions since February, building the infrastructure to make the next refusal easier and the movement unstoppable.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

Sarah Hamid

Fighting to Keep Bad Patents in Check: 2025 in Review

1 week 3 days ago

A functioning patent system depends on one basic principle: bad patents must be challengeable. In 2025, that principle was repeatedly tested—by Congress, by the U.S. Patent and Trademark Office (USPTO), and by a small number of large patent owners determined to weaken public challenges. 

Two damaging bills, PERA and PREVAIL, were reintroduced in Congress. At the same time, USPTO attempted a sweeping rollback of inter partes review (IPR), one of the most important mechanisms for challenging wrongly granted patents. 

EFF pushed back—on Capitol Hill, inside the Patent Office, and alongside thousands of supporters who made their voices impossible to ignore.

Congress Weighed Bills That Would Undo Core Safeguards

The Patent Eligibility Restoration Act, or PERA, would overturn the Supreme Court’s Alice and Myriad decisions—reviving patents on abstract software ideas, and even allowing patents on isolated human genes. PREVAIL, introduced by the same main sponsors in Congress, would seriously weaken the IPR process by raising the burden of proof, limiting who can file challenges, forcing petitioners to surrender court defenses, and giving patent owners new ways to rewrite their claims mid-review.

Together, these bills would have dismantled much of the progress made over the last decade. 

We reminded Congress that abstract software patents—like those we’ve seen on online photo contests, upselling prompts, matchmaking, and scavenger hunts—are exactly the kind of junk claims patent trolls use to threaten creators and small developers. We also pointed out that if PREVAIL had been law in 2013, EFF could not have brought the IPR that crushed the so-called “podcasting patent.” 

EFF’s supporters amplified our message, sending thousands of messages to Congress urging lawmakers to reject these bills. The result: neither bill advanced to the full committee. The effort to rewrite patent law behind closed doors stalled out once public debate caught up with it. 

Patent Office Shifts To An “Era of No”

Congress’ push from the outside was stymied, at least for now. Unfortunately, what may prove far more effective is the push from within by new USPTO leadership, which is working to dismantle systems and safeguards that protect the public from the worst patents.

Early in the year, the Patent Office signaled it would once again lean more heavily on procedural denials, reviving an approach that allowed patent challenges to be thrown out basically whenever there was an ongoing court case involving the same patent. But the most consequential move came later: a sweeping proposal unveiled in October that would make IPR nearly unusable for those who need it most.

2025 also marked a sharp practical shift inside the agency. Newly appointed USPTO Director John Squires took personal control of IPR institution decisions, and rejected all 34 of the first IPR petitions that came across his desk. As one leading patent blog put it, an “era of no” has been ushered in at the Patent Office. 

The October Rulemaking: Making Bad Patents Untouchable

The USPTO’s proposed rule changes would: 

  • Force defendants to surrender their court defenses if they use IPR—an intense burden for anyone actually facing a lawsuit. 
  • Make patents effectively unchallengeable after a single prior dispute, even if that challenge was limited, incomplete, or years out of date.
  • Block IPR entirely if a district court case is projected to move faster than the Patent Trial and Appeal Board (PTAB). 

These changes wouldn’t “balance” the system as USPTO claims—they would make bad patents effectively untouchable. Patent trolls and aggressive licensors would be insulated, while the public would face higher costs and fewer options to fight back. 

We sounded the alarm on these proposed rules and asked supporters to register their opposition. More than 4,000 of you did—thank you! Overall, more than 11,000 comments were submitted. An analysis of the comments shows that stakeholders and the public overwhelmingly oppose the proposal, with 97% of comments weighing in against it.

In those comments, small business owners described being hit with vague patents they could never afford to fight in court. Developers and open-source contributors explained that IPR is often the only realistic check on bad software patents. Leading academics, patient-advocacy groups, and major tech-community institutions echoed the same point: you cannot issue hundreds of thousands of patents a year and then block one of the only mechanisms that corrects the mistakes.

The Linux Foundation warned that the rules “would effectively remove IPRs as a viable mechanism” for developers.

GitHub emphasized the increased risk and litigation cost for open-source communities.

Twenty-two patent law professors called the proposal unlawful and harmful to innovation.

Patients for Affordable Drugs detailed the real-world impact of striking invalid pharmaceutical patents, showing that drug prices can plummet once junk patents are removed.

Heading Into 2026

The USPTO now faces thousands of substantive comments. Whether the agency backs off or tries to push ahead, EFF will stay engaged. Congress may also revisit PERA, PREVAIL, or similar proposals next year. Some patent owners will continue to push for rules that shield low-quality patents from any meaningful review.

But 2025 proved something important: When people understand how patent abuse affects developers, small businesses, patients, and creators, they show up—and when they do, their actions can shape what happens next. 

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

Joe Mullin

The Breachies 2025: The Worst, Weirdest, Most Impactful Data Breaches of the Year

1 week 3 days ago

Another year has come and gone, and with it, thousands of data breaches that affect millions of people. The question these days is less “Is my information in a data breach this year?” and more “How many data breaches had my information in them this year?”

Some data breaches are more noteworthy than others. Where one might affect a small number of people and include little useful information, like a name or email address, others might include data ranging from a potential medical diagnosis to specific location information. To catalog and talk about these breaches, we created the Breachies: a series of tongue-in-cheek awards highlighting the year’s most egregious data breaches.

In most cases, if these companies practiced a privacy-first approach and focused on data minimization, collecting and storing only what they absolutely need to provide the services they promise, many data breaches would be far less harmful to the victims. But instead, companies gobble up as much as they can and store it for as long as possible, and inevitably at some point someone decides to poke in and steal that data. Once all that personal data is stolen, it can be used against the breach victims for identity theft, ransomware attacks, and unwanted spam. Breaches have become such a common occurrence that it’s easy to lose track of which ones affect you and just assume your information is out there somewhere. Still, a few steps can help protect your information.

With that, let’s get to the awards.

The Winners

The Say Something Without Saying Anything Award: Mixpanel

We’ve long warned that apps delivering your personal information to third parties (even if those parties aren’t the ad networks directly driving surveillance capitalism) present risks and a salient target for hackers. The more widespread your data, the more places attackers can go to find it. Mixpanel, a data analytics company that collects information on users of any app that incorporates its SDK, suffered a major breach in November of this year. The service has been used by a wide array of companies, including the Ring Doorbell app, which we reported back in 2020 was delivering a trove of information to Mixpanel, and PornHub, which, despite not having worked with the company since 2021, had its historical record of paying subscribers breached.

There’s a lot we still don’t know about this data breach, in large part because the announcement about it is so opaque, leaving reporters with unanswered questions about how many people were affected, whether the hackers demanded a ransom, and whether Mixpanel employee accounts followed standard security best practices. One thing is clear, though: the breach was enough for OpenAI to drop Mixpanel as a provider, disclosing critical details about the breach in a blog post that Mixpanel’s own announcement conveniently failed to mention.

The worst part is that, because Mixpanel is a data analytics company providing libraries that are included in a broad range of apps, we can surmise that the vast majority of people affected by this breach have no direct relationship with the company, and likely didn’t even know that their devices were delivering data to it. These people deserve better than vague statements from companies that profit off of (and apparently fail to adequately secure) their data.

The We Still Told You So Award: Discord

Last year, AU10TIX won our first We Told You So Award because, as we predicted in 2023, age verification mandates would inevitably lead to more data breaches, potentially exposing government IDs as well as information about the sites a user visits. Like clockwork, they did. It was our first We Told You So Breachies award, but we knew it wouldn’t be the last.

Unfortunately, there is growing political interest in mandating identity or age verification before allowing people to access social media or adult material. EFF and others oppose these plans because they threaten both speech and privacy.

Nonetheless, this year’s winner of the We Still Told You So Breachies Award is the messaging app Discord. Once known mainly for gaming communities, it now hosts more than 200 million monthly active users and is widely used for fandom and community channels.

In September of this year, much of Discord’s age verification data was breached — including users’ real names, selfies, ID documents, email and physical addresses, phone numbers, IP addresses, and other contact details or messages provided to customer support. In some cases, “limited billing information” was also accessed—including payment type, the last four digits of credit card numbers, and purchase histories. 

Technically, it wasn’t Discord itself that was hacked. Rather, its third-party customer support provider, a company called Zendesk, was compromised, allowing attackers to access Discord’s user data. Either way, it’s Discord users who felt the impact.

The Tea for Two Award: Tea Dating Advice and TeaOnHer

Speaking of age verification, Tea, the dating safety app for women, had a pretty horrible year for data breaches. The app allows users to anonymously share reviews and safety information about their dates with men, helping keep others safe by noting red flags they saw during their dates.

Since Tea is aimed at women’s safety and dating advice, the app asks new users to upload a selfie or photo ID to verify their identity and gender to create an account. That’s some pretty sensitive information that the app is asking you to trust it with! Back in July, it was reported that 72,000 images had been leaked from the app, including 13,000 images of photo IDs and 59,000 selfies. These photos were found via an exposed database hosted on Google’s mobile app development platform, Firebase. And if that isn’t bad enough, just a week later a second breach exposed private messages between users, including messages with phone numbers, abortion planning, and discussions about cheating partners. This breach included more than 1.1 million messages from early 2023 all the way to mid-2025, just before the breach was reported. Tea released a statement shortly after, temporarily disabling the chat feature.

But wait, there’s more. A completely different app based on the same idea, but for men, also suffered a data breach. TeaOnHer failed to protect similar sensitive data. In August, TechCrunch discovered that user information — including emails, usernames, and yes, those photo IDs and selfies — was accessible through a publicly available web address. Even worse? TechCrunch also found the email address and password the app’s creator uses to access the admin page.

Breaches like this are one of the reasons that EFF shouts from the rooftops against laws that mandate user verification with an ID or selfie. Every company that collects this information becomes a target for data breaches — and if a breach happens, you can’t just change your face. 

The Just Stop Using Tracking Tech Award: Blue Shield of California

Another year, another data breach caused by online tracking tools. 

In April, Blue Shield of California revealed that it had shared 4.7 million people’s health data with Google by misconfiguring Google Analytics on its website. The data, which may have been used for targeted advertising, included people’s names, insurance plan details, medical service providers, and patient financial responsibility. The health insurance company shared this information with Google for nearly three years before realizing its mistake.

If this data breach sounds familiar, it’s because it is: last year’s Just Stop Using Tracking Tech award also went to a healthcare company that leaked patient data through tracking code on its website. Tracking tools remain alarmingly common on healthcare websites, even after years of incidents like this one. These tools are marketed as harmless analytics or marketing solutions, but can expose people’s sensitive data to advertisers and data brokers. 

EFF’s free Privacy Badger extension can block online trackers, but you shouldn’t need an extension to stop companies from harvesting and monetizing your medical data. We need a strong, federal privacy law and ban on online behavioral advertising to eliminate the incentives driving companies to keep surveilling us online. 

The Hacker's Hall Pass Award: PowerSchool

In December 2024, PowerSchool, the largest provider of student information systems in the U.S., gave hackers access to sensitive student data. The breach compromised personal information of over 60 million students and teachers, including Social Security numbers, medical records, grades, and special education data. Hackers exploited PowerSchool’s weak security—namely, stolen credentials to their internal customer support portal—and gained unfettered access to sensitive data stored by school districts across the country.

PowerSchool failed to implement basic security measures like multi-factor authentication, and the breach affected districts nationwide. In Texas alone, over 880,000 individuals’ data was exposed, prompting the state's attorney general to file a lawsuit, accusing PowerSchool of misleading its customers about security practices. Memphis-Shelby County Schools also filed suit, seeking damages for the breach and the cost of recovery.

While PowerSchool paid hackers an undisclosed sum to prevent data from being published, the company’s failure to protect its users’ data raises serious concerns about the security of K-12 educational systems. Adding to the saga, a Massachusetts student, Matthew Lane, pleaded guilty in October to hacking and extorting PowerSchool for $2.85 million in Bitcoin. Lane faces up to 17 years in prison for cyber extortion and aggravated identity theft, a reminder that not all hackers are faceless shadowy figures — sometimes they’re just a college kid.

The Worst. Customer. Service. Ever. Award: TransUnion

Credit reporting giant TransUnion had to notify its customers this year that a hack nabbed the personal information of 4.4 million people. How'd the attackers get in? According to a letter filed with the Maine Attorney General's office obtained by TechCrunch, the problem was a “third-party application serving our U.S. consumer support operations.” That's probably not the kind of support they were looking for. 

TransUnion said in a Texas filing that attackers swept up “customers’ names, dates of birth, and Social Security numbers” in the breach, though it was quick to point out in public statements that the hackers did not access credit reports or “core credit data.” While it certainly could have been worse, this breach highlights the many ways that hackers can get their hands on information. Coming in through third parties, the companies that provide software or other services to businesses, is like using an unguarded side door rather than checking in at the front desk. Companies, particularly those that keep sensitive personal information, should be sure to lock down customer information at all entry points. After all, their decisions about who they do business with ultimately carry consequences for all of their customers—who have no say in the matter.

The Annual Microsoft Screwed Up Again Award: Microsoft

Microsoft is a company nobody feels neutral about, especially in the infosec world. The myriad software vulnerabilities in Windows, Office, and other Microsoft products over the decades have been a source of frustration, and also of great financial reward, for both attackers and defenders. Yet still, as the saying goes, “nobody ever got fired for buying from Microsoft.” But perhaps the times, they are a-changing.

In July 2025, it was revealed that a zero-day security vulnerability in Microsoft’s flagship file sharing and collaboration software, SharePoint, had led to the compromise of over 400 organizations, including major corporations and sensitive government agencies such as the National Nuclear Security Administration (NNSA), the federal agency responsible for maintaining and developing the U.S. stockpile of nuclear weapons. The attack was attributed to three different Chinese government-linked hacking groups. Amazingly, days after the vulnerability was first reported, there were still thousands of vulnerable self-hosted SharePoint servers online.

Zero-days happen to tech companies large and small. It’s nearly impossible to write even moderately complex software that is free of bugs and exploits, and Microsoft can’t exactly be blamed for having a zero-day in its code. But when one company is so consistently the source of so many zero-days for so many years, one must start wondering whether to put all their eggs (or data) into a basket that company made. Perhaps if Microsoft’s monopolistic practices had been reined in back in the 1990s, we wouldn’t be in a position today where SharePoint is the de facto file-sharing software for so many major organizations. And maybe, just maybe, this is further evidence that tech monopolies and centralization of data aren’t just bad for consumer rights, civil liberties, and the economy, but for cybersecurity too.

The Silver Globe Award: Flat Earth Sun, Moon & Zodiac

Look, we’ll keep this one short: in October of last year, researchers found security issues in the flat-earther app Flat Earth Sun, Moon & Zodiac. In March of 2025, that breach was confirmed. What’s most notable about this, aside from including a surprising amount of information (gender, name, email address, and date of birth), is that it also included users’ location info, including latitude and longitude. Huh, interesting.

The I Didn’t Even Know You Had My Information Award: Gravy Analytics

In January, hackers claimed they stole millions of people’s location history from a company that never should’ve had it in the first place: location data broker Gravy Analytics. The data included timestamped location coordinates tied to advertising IDs, which can reveal exceptionally sensitive information. In fact, researchers who reviewed the leaked data found it could be used to identify military personnel and gay people in countries where homosexuality is illegal.

The breach of this sensitive data is bad, but Gravy Analytics’s business model of regularly harvesting and selling it is even worse. Despite the fact that most people have never heard of them, Gravy Analytics has managed to collect location information from a billion phones a day. The company has sold this data to other data brokers, makers of police surveillance tools, and the U.S. government.

How did Gravy Analytics get this location information from people’s phones? The data broker industry is notoriously opaque, but this breach may have revealed some of Gravy Analytics’ sources. The leaked data referenced thousands of apps, including Microsoft apps, Candy Crush, Tinder, Grindr, MyFitnessPal, pregnancy trackers, and religion-focused apps. Many of these app developers said they had no relationship with Gravy Analytics. Instead, expert analysis of the data suggests it was harvested through the advertising ecosystem already connected to most apps. This breach provides further evidence that online behavioral advertising fuels the surveillance industry.

Whether or not they get hacked, location data brokers like Gravy Analytics threaten our privacy and security. Follow EFF’s guide to protecting your location data and help us fight for legislation to dismantle the data broker industry. 

The Keeping Up With My Cybertruck Award: TeslaMate

TeslaMate, a tool meant to track Tesla vehicle data (but which is not owned or operated by Tesla itself), has become a cautionary tale about data security. In August, a security researcher found more than 1,300 self-hosted TeslaMate dashboards were exposed online, leaking sensitive information such as vehicle location, speed, charging habits, and even trip details. In essence, your Cybertruck became the star of its own Keeping Up With My Cybertruck reality show, except the audience wasn’t made up of fans interested in your lifestyle, just random people with access to the internet.

TeslaMate describes itself as “that loyal friend who never forgets anything!” — but its lack of proper security measures makes you wish it would. This breach highlights how easily location data can become a tool for harassment or worse, and the growing need for legislation that specifically protects consumer location data. Without stronger regulations around data privacy, sensitive location details like where you live, work, and travel can easily be accessed by malicious actors, leaving consumers with no recourse.

The Disorder in the Courts Award: PACER

Confidentiality is a core principle in the practice of law. But this year a breach of confidentiality came from an unexpected source: a breach of the federal court filing system. In August, Politico reported that hackers infiltrated the Case Management/Electronic Case Files (CM/ECF) system, which uses the same database as PACER, a searchable public database for court records. Of particular concern? The possibility that the attack exposed the names of confidential informants involved in federal cases from multiple court districts. Courts across the country acted quickly to set up new processes to avoid the possibility of further compromises.

The leak followed a similar incident in 2021 and came on the heels of a warning to Congress that the file system is more than a little creaky. In fact, an IT official from the federal court system told the House Judiciary Committee that both systems are “unsustainable due to cyber risks, and require replacement.”

The Only Stalkers Allowed Award: Catwatchful

Just like last year, a stalkerware company was subject to a data breach that really should prove once and for all that these companies must be stopped. In this case, Catwatchful is an Android spyware company that sells itself as a “child monitoring app.” Like other products in this category, it’s designed to operate covertly while uploading the contents of a victim’s phone, including photos, messages, and location information.

This data breach was particularly harmful, as it included not just the email addresses and passwords of the customers who purchased the app to install on a victim’s phone, but also data from 26,000 victims’ devices, which could include the victims’ photos, messages, and real-time location data.

This was a tough award to decide on because Catwatchful wasn’t the only stalkerware company that was hit this year. Similar breaches to SpyX, Cocospy, and Spyic were all strong contenders. EFF has worked tirelessly to raise the alarm on this sort of software, and this year worked with AV Comparatives to test the stalkerware detection rate on Android of various major antivirus apps.

The Why We’re Still Stuck on Unique Passwords Award: Plex

Every year, we all get a reminder about why using unique passwords for all our accounts is crucial for protecting our online identities. This time around, the award goes to Plex, which experienced a data breach that included customer emails, usernames, and hashed passwords (a fancy way of saying the passwords were scrambled through a one-way algorithm, though it is possible they could still be deciphered).
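To illustrate why “hashed” doesn’t automatically mean “safe,” here’s a minimal sketch of an offline dictionary attack against an unsalted SHA-256 hash. The hash and wordlist below are hypothetical and tiny; real attackers use billion-entry lists and GPUs, which is why weak or reused passwords fall quickly (and why sites should use slow, salted hashing schemes rather than a plain fast hash):

```python
import hashlib

# Hypothetical: a breached database leaked this unsalted SHA-256 hash.
leaked_hash = hashlib.sha256(b"sunshine1").hexdigest()

# The attacker tries a small dictionary of common passwords.
wordlist = ["password", "letmein", "sunshine1", "qwerty"]

def crack(target_hash, candidates):
    """Return the first candidate whose SHA-256 hash matches, or None."""
    for word in candidates:
        if hashlib.sha256(word.encode()).hexdigest() == target_hash:
            return word
    return None

print(crack(leaked_hash, wordlist))  # the common password falls instantly
```

A unique, randomly generated password wouldn’t appear in any dictionary, which is exactly what makes a breach of one site’s hashes far less dangerous.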

If this all sounds vaguely familiar to you for some reason, that’s because a similar issue also happened to Plex in 2022, affecting 15 million users. Whoops.

This is why it is important to use unique passwords everywhere. A password manager, including one that might be free on your phone or browser, makes this much easier to do. Likewise, credential stuffing illustrates why it’s important to use two-factor authentication. Here’s how to turn that on for your Plex account.

The Uh, Yes, Actually, I Have Been Pwned Award: Troy Hunt’s Mailing List

Troy Hunt, the person behind Have I Been Pwned?, who has more experience with data breaches than just about anyone, also proved that anyone can be pwned. In a blog post, he details what happened to his mailing list:

You know when you're really jet lagged and really tired and the cogs in your head are just moving that little bit too slow? That's me right now, and the penny has just dropped that a Mailchimp phish has grabbed my credentials, logged into my account and exported the mailing list for this blog.

And he continues later:

I'm enormously frustrated with myself for having fallen for this, and I apologise to anyone on that list. Obviously, watch out for spam or further phishes and check back here or via the social channels in the nav bar above for more.

The whole blog is worth a read as a reminder that phishing can get anyone, and we thank Troy Hunt for his feedback on this and other breaches we included this year.

Tips to Protect Yourself

Data breaches are such a common occurrence that it’s easy to feel like there’s nothing you can do, nor any point in trying. But privacy isn’t dead. While some information about you is almost certainly out there, that’s no reason for despair. In fact, it’s a good reason to take action.

There are steps you can take right now with all your online accounts to best protect yourself from the next data breach (and the next, and the next):

  • Use unique passwords on all your online accounts. This is made much easier by using a password manager, which can generate and store those passwords for you. When you have a unique password for every website, a data breach of one site won’t cascade to others.
  • Use two-factor authentication when a service offers it. Two-factor authentication makes your online accounts more secure by requiring additional proof (“factors”) alongside your password when you log in. While two-factor authentication adds another step to the login process, it’s a great way to help keep out anyone not authorized, even if your password is breached.
  • Delete old accounts: Sometimes, you’ll get a data breach notification for an account you haven’t used in years. This can be a nice reminder to delete that account, but it’s better to do so before a data breach happens, when possible. Try to make it a habit to go through and delete old accounts once a year or so. 
  • Freeze your credit. Many experts recommend freezing your credit with the major credit bureaus as a way to protect against the sort of identity theft that’s made possible by some data breaches. Freezing your credit prevents someone from opening up a new line of credit in your name without additional information, like a PIN or password, to “unfreeze” the account. It might sound absurd considering they can’t even open bank accounts, but if you have kids, you can freeze their credit too.
  • Keep a close eye out for strange medical bills. With the number of health companies breached this year, it’s also a good idea to watch for healthcare fraud. The Federal Trade Commission recommends watching for strange bills, letters from your health insurance company for services you didn’t receive, and letters from debt collectors claiming you owe money. 
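The two-factor authentication tip above usually takes the form of a six-digit code from an authenticator app. Those codes are typically time-based one-time passwords (TOTP, defined in RFC 6238): an HMAC-SHA1 over a 30-second counter derived from a secret shared at setup time. A minimal sketch using only the Python standard library (an illustration of the standard algorithm, not any particular service’s implementation):

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, when=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Counter = number of 30-second steps since the Unix epoch.
    counter = int((when if when is not None else time.time()) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset from the digest.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test secret "12345678901234567890", at Unix time 59:
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", when=59))  # prints "287082"
```

Because the code changes every 30 seconds and is derived from a secret the attacker doesn’t have, a stolen password alone isn’t enough to log in.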

(Dis)Honorable Mentions

According to one report, 2025 had already seen 2,563 data breaches by October, which puts the year on track to be one of the worst by the sheer number of breaches.

We did not investigate every one of these 2,500-plus data breaches, but we looked at a lot of them, including the news coverage and the data breach notification letters that many state Attorney General offices host on their websites. We can’t award the coveted Breachies Award to every company that was breached this year. Still, here are some (dis)honorable mentions we wanted to highlight:

Salesforce, F5, Oracle, WorkComposer, Raw, Stiizy, Ohio Medical Alliance LLC, Hello Cake, Lovense, Kettering Health, LexisNexis, WhatsApp, Nexar, McDonalds, Congressional Budget Office, Doordash, Louis Vuitton, Adidas, Columbia University, Hertz, HCRG Care Group, Lexipol, Color Dating, Workday, Aflac, and Coinbase. And a special nod to last minute entrants Home Depot, 700Credit, and Petco.

What now? Companies need to do a better job of only collecting the information they need to operate, and properly securing what they store. Also, the U.S. needs to pass comprehensive privacy protections. At the very least, we need to be able to sue companies when these sorts of breaches happen (and while we’re at it, it’d be nice if we got more than $5.21 checks in the mail). EFF has long advocated for a strong federal privacy law that includes a private right of action.

Thorin Klosowski

🪪 Age Verification Is Coming for the Internet | EFFector 37.18

1 week 4 days ago

The final EFFector of 2025 is here! Just in time to keep you up to date on the latest happenings in the fight for privacy and free speech online.

In this latest issue, we're sharing how to spot sneaky ALPR cameras at the U.S. border, covering a host of new resources on age verification laws, and explaining why AI companies need to protect chatbot logs from bulk surveillance.

Prefer to listen in? Check out our audio companion, where EFF Activist Molly Buckley walks through our new resource on age verification laws and how you can fight back. Catch the conversation on YouTube or the Internet Archive.

LISTEN TO EFFECTOR

EFFECTOR 37.18 - 🪪 AGE VERIFICATION IS COMING FOR THE INTERNET

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero

States Take On Tough Tech Policy Battles: 2025 in Review

1 week 4 days ago

State legislatures—from Olympia, WA, to Honolulu, HI, to Tallahassee, FL, and everywhere in between—kept EFF’s state legislative team busy throughout 2025.

We saw some great wins and steps forward this year. Washington became the eighth state to enshrine the right to repair. Several states stepped up to protect the privacy of location data, with bills recognizing your location data isn't just a pin on a map—it's a powerful tool that reveals far more than most people realize. Other state legislators moved to protect health privacy. And California passed a law making it easier for people to exercise their privacy rights under the state’s consumer data privacy law.

Several states also took up debates around how to legislate and regulate artificial intelligence and its many applications. We’ll continue to work with allies in states including California and Colorado on proposals that address the real harms from some uses of AI, without infringing on the rights of creators and individual users.

We’ve also fought some troubling bills in states across the country this year. In April, Florida introduced a bill that would have created a backdoor for law enforcement to have easy access to messages if minors use encrypted platforms. Thankfully, the Florida legislature did not pass the bill this year. But it should set off serious alarm bells for anyone who cares about digital rights. And it was just one of a growing set of bills from states that, even when well-intentioned, threaten to take a wrecking ball to privacy, expression, and security in the name of protecting young people online.

Take, for example, the burgeoning number of age verification, age gating, age assurance, and age estimation bills. Instead of making the internet safer for children, these laws can incentivize or intersect with existing systems that collect vast amounts of data, forcing all users—regardless of age—to verify their identity just to access basic content or products. South Dakota and Wyoming, for example, are requiring any website that hosts any sexual content to implement age verification measures. But, given the way those laws are written, that definition could sweep in essentially any site that allows user-generated or published content and does not already gate access by age. That could include everyday resources such as social media networks, online retailers, and streaming platforms.

Lawmakers, not satisfied with putting age gates on the internet, are also increasingly going after VPNs (virtual private networks) to prevent anyone from circumventing these new digital walls. VPNs are not foolproof tools—and they shouldn’t be necessary to access legally protected speech—but they should be available to people who want to use them. We will continue to stand against these types of bills, not just for the sake of free expression, but to protect the free flow of information essential to a free society.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

Hayley Tsukayama

Lawmakers Must Listen to Young People Before Regulating Their Internet Access: 2025 in Review

1 week 5 days ago

State and federal lawmakers have introduced multiple proposals in 2025 to curtail or outright block children and teenagers from accessing legal content on the internet. These lawmakers argue that internet and social media platforms have an obligation to censor or suppress speech that they consider “harmful” to young people. Unfortunately, in many of these legislative debates, lawmakers are not listening to kids, whose experiences online are overwhelmingly more positive than what lawmakers claim. 

Fortunately, EFF has spent the past year trying to make sure that lawmakers hear young people’s voices. We have also been reminding lawmakers that minors, like everyone else, have First Amendment rights to express themselves online. 

These rights extend to a young person’s ability to use social media both to speak for themselves and to access the speech of others online. Young people also have the right to control how they access this speech, including through personalized feeds and other organized, digestible formats. Preventing teenagers from accessing the same internet and social media channels that adults use is a clear violation of their right to free expression. 

On top of violating minors’ First Amendment rights, these laws also actively harm minors who rely on the internet to find community, find resources to end abuse, or access information about their health. Cutting off internet access acutely harms LGBTQ+ youth and others who lack familial or community support where they live. These laws also empower the state to decide what information is acceptable for all young people, overriding parents’ choices. 

Additionally, all of the laws that would attempt to create a “kid friendly” internet and an “adults-only” internet are a threat to everyone, adults included. These mandates encourage the adoption of invasive and dangerous age-verification technology. Beyond being creepy, these systems incentivize more data collection and increase the risk of data breaches and other harms. Requiring everyone online to provide their ID or other proof of their age could block legal adults from accessing lawful speech if they don’t have the right form of ID. Furthermore, this trend infringes on people’s right to be anonymous online, and creates a chilling effect that may deter people from joining certain services or speaking on certain topics.

EFF has lobbied against these bills at both the state and federal level, and we have also filed briefs in support of several lawsuits to protect the First Amendment rights of minors. We will continue to advocate for the rights of everyone online—including minors—in the future.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

India McKinney

Trends to Watch in the California Legislature

1 week 5 days ago

If you’re a Californian, there are a few new state laws that you should know will be going into effect in the new year. EFF has worked hard in Sacramento this session to advance bills that protect privacy, fight surveillance, and promote transparency.

California’s legislature runs in a two-year cycle, meaning that it’s currently halftime for legislators. As we prepare for the next year of the California legislative session in January, it’s a good time to showcase what’s happened so far—and what’s left to do.

Wins Worth Celebrating

In a win for every Californian’s privacy rights, we were happy to support A.B. 566 (Assemblymember Josh Lowenthal). This is a common-sense law that makes California’s main consumer data privacy law, the California Consumer Privacy Act, more user-friendly. It requires that browsers support people’s rights to send opt-out signals, such as the global opt-out in Privacy Badger, to businesses. Managing your privacy as an individual can be a hard job, and EFF wants stronger laws that make it easier for you to do so.
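Opt-out signals like the one Privacy Badger sends are delivered through the Global Privacy Control (GPC) proposal, which adds a simple HTTP request header. As a rough sketch of how a site could honor that signal—using a plain dictionary as a stand-in for a real web framework’s request object—the check might look like this:

```python
# Minimal sketch of honoring a browser opt-out signal such as Global
# Privacy Control (the mechanism extensions like Privacy Badger can send).
# The header name follows the GPC proposal; the plain dict below is an
# illustrative stand-in for a real framework's request headers.

def honors_opt_out(headers: dict) -> bool:
    """Return True if the request carries a GPC opt-out signal."""
    # Per the GPC proposal, a participating browser sends "Sec-GPC: 1".
    return headers.get("Sec-GPC", "").strip() == "1"

request_headers = {"Sec-GPC": "1", "User-Agent": "ExampleBrowser/1.0"}
if honors_opt_out(request_headers):
    # Treat this visitor as having exercised their opt-out rights,
    # e.g. skip tracking pixels and any sale or sharing of their data.
    print("opt-out signal received")
```

The point of laws like A.B. 566 is that this one automatic check replaces the burden of opting out site by site.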

Additionally, we were proud to advance government transparency by supporting A.B. 1524 (Judiciary Committee), which allows members of the public to make copies of public court records using their own devices, such as cell-phone cameras and overhead document scanners, without paying fees.

We also supported two bills that will improve law enforcement accountability at a time when we desperately need it. S.B. 627 (Sen. Scott Wiener) prohibits law enforcement officers from wearing masks to avoid accountability. (The Trump administration has sued California over this law.) And S.B. 524 (Sen. Jesse Arreguín) requires law enforcement to disclose when a police report was written using artificial intelligence.

On the To-Do List for Next Year

On the flip side, we also stopped some problematic bills from becoming law. This includes S.B. 690 (Sen. Anna Caballero), which we dubbed the Corporate Coverup Act. This bill would have gutted California’s wiretapping statute by allowing businesses to ignore those privacy rights for “any business purpose.” Working with several coalition partners, we were able to keep that bill from moving forward in 2025. We do expect to see it come back in 2026, and are ready to fight back against those corporate business interests.

And, of course, not every fight ended in victory. There are still many areas where we have work left to do. California Governor Gavin Newsom vetoed a bill we supported, S.B. 7, which would have given workers in California greater transparency into how their employers use artificial intelligence. The bill, sponsored by the California Federation of Labor Unions, was vetoed in response to concerns from companies including Uber and Lyft, but we expect to continue working with the labor community on the ways AI affects the workplace in 2026.

Trends of Note

California continued a troubling years-long trend of lawmakers pushing problematic proposals that would require every internet user to verify their age to access information—often by relying on privacy-invasive methods to do so. Earlier this year EFF sent a letter to the California legislature expressing grave concerns with lawmakers’ approach to regulating young people’s ability to speak online. We continue to raise these concerns, and would welcome working with any lawmaker in California on a better solution.

We also continue to keep a close eye on government data sharing. On this front, there is some good news. Several of the bills we supported this year sought to place needed safeguards on the ways various government agencies in California share data. These include: A.B. 82 (Asm. Chris Ward) and S.B. 497 (Wiener), which would add privacy protections to data collected by the state about those who may be receiving gender-affirming or reproductive health care; A.B. 1303 (Asm. Avelino Valencia), which prohibits warrantless data sharing from California’s low-income broadband program to immigration and other government officials; and S.B. 635 (Sen. Maria Elena Durazo), which places similar limits on data collected from sidewalk vendors.

We are also heartened to see California correct course on broad government data sharing. Last session, we opposed A.B. 518 (Asm. Buffy Wicks), which let state agencies ignore existing state privacy law to allow broader information sharing about people eligible for CalFresh—the state’s federally funded food assistance program. As we’ve seen, the federal government has since sought data from food assistance programs to use for other purposes. We were happy to have instead supported A.B. 593 this year, also authored by Asm. Wicks—which reversed course on that data sharing.

We hope to see this attention to the harms of careless government data sharing continue. EFF’s sponsored bill this year, A.B. 1337, would update and extend vital privacy safeguards present at the state agency level to counties and cities. These local entities today collect enormous amounts of data and administer programs that weren’t contemplated when the original law was written in 1977. That information should be held to strong privacy standards.

We’ve been fortunate to work with Asm. Chris Ward, who is also the chair of the LGBTQ Caucus in the legislature, on that bill. The bill stalled in the Senate Judiciary Committee during the 2025 legislative session, but we plan to bring it back in the next session with a renewed sense of urgency.

Hayley Tsukayama

Age Verification Threats Across the Globe: 2025 in Review

1 week 5 days ago

Age verification mandates won't magically keep young people safer online, but that has not stopped governments around the world from spending this year implementing, or attempting to introduce, legislation requiring all online users to verify their ages before accessing the digital space. 

The UK’s misguided approach to protecting young people online took many headlines due to the reckless and chaotic rollout of the country’s Online Safety Act, but the UK was not alone: courts in France ruled that porn websites can check users’ ages; the European Commission pushed forward with plans to test its age-verification app; and Australia’s ban on under-16s accessing social media was recently implemented. 

Through this wave of age verification bills, politicians are burdening internet users and forcing them to sacrifice their anonymity, privacy, and security simply to access lawful speech. For adults, this is true even if that speech constitutes sexual or explicit content. These laws are censorship laws, and rules banning sexual content usually hurt marginalized communities, and the groups that serve them, the most.

In response, we’ve spent this year urging governments to pause these legislative initiatives and instead protect everyone’s right to speak and access information online. Here are three ways we pushed back against these bills in 2025:

Social Media Bans for Young People

Banning a certain user group changes nothing about a platform’s problematic privacy practices, insufficient content moderation, or business models based on the exploitation of people’s attention and data. And because young people will always find ways to circumvent age restrictions, the ones who do will be left without any protections or age-appropriate experiences.

Yet Australia’s government recently decided to ignore these dangers by rolling out a sweeping regime built around age verification that bans users under 16 from having social media accounts. In this world-first ban, platforms are required to introduce age assurance tools to block under-16s, demonstrate that they have taken “reasonable steps” to deactivate accounts used by under-16s, and prevent any new accounts from being created—or face fines of up to 49.5 million Australian dollars ($32 million USD). The 10 banned platforms—Instagram, Facebook, Threads, Snapchat, YouTube, TikTok, Kick, Reddit, Twitch and X—have each said they’ll comply with the legislation, leading to young people losing access to their accounts overnight.

Similarly, the European Commission this year took a first step towards mandatory age verification through its guidelines under Article 28 of the Digital Services Act—a step that could undermine privacy, expression, and participation rights for young people, rights that are fully enshrined in international human rights law. EFF submitted feedback to the Commission’s consultation on the guidelines, emphasizing a critical point: Mandatory age verification measures are not the right way to protect minors, and any online safety measure for young people must also safeguard their privacy and security. Unfortunately, the EU Parliament has already gone a step further, proposing an EU digital minimum age of 16 for access to social media—a move that aligns with European Commission President Ursula von der Leyen’s recent public support for measures inspired by Australia’s model.

Push for Age Assurance on All Users 

This year, the UK had a moment—and not a good one. In late July, new rules took effect under the Online Safety Act that now require all online services available in the UK to assess whether they host content considered harmful to children, and if so, these services must introduce age checks to prevent children from accessing such content. Online services are also required to change their algorithms and moderation systems to ensure that content defined as harmful, like violent imagery, is not shown to young people.

The UK’s scramble to find an effective age verification method shows us that there isn't one, and it’s high time for politicians to take that seriously. As we argued throughout this year, and during the passage of the Online Safety Act, any attempt to protect young people online should not include measures that require platforms to collect data or remove privacy protections around users’ identities. The approach that UK politicians have taken with the Online Safety Act is reckless, short-sighted, and will introduce more harm to the very young people that it is trying to protect.

We’re seeing these narratives and regulatory initiatives replicated from the UK to U.S. states and other global jurisdictions, and we’ll continue urging politicians not to follow the UK’s lead in passing similar legislation—and to instead explore more holistic approaches to protecting all users online.

Rushed Age Assurance through the EU Digital Wallet

There is not yet a legal obligation to verify users’ ages at the EU level, but policymakers and regulators are already embracing harmful age verification and age assessment measures in the name of reducing online harms.

These demands steer the debate toward identity-based solutions, such as the EU Digital Identity Wallet, which will become available in 2026. That approach carries its own privacy and security concerns, such as long-term identifiers (which could enable tracking) and over-exposure of personal information. Even more concerning, instead of waiting for the full launch of the EU Digital Identity Wallet, the Commission rushed a “mini AV” app out this year ahead of schedule, citing an urgent need to address concerns about children and the harms that may come to them online. 

However, this proposed solution directly ties national ID to an age verification method, and it raises the risk of mission creep in EU member states once fully deployed: while the “mini AV” app is for now focused on verifying age, its release to the public means that the infrastructure to expand ID checks to other purposes is already in place, should a government mandate that expansion in the future.  

Without proper safeguards, this infrastructure could be leveraged inappropriately—all the more reason why lawmakers should explore more holistic approaches to children's safety.

Ways Forward

The internet is an essential resource for young people and adults to access information, explore community, and find themselves. The issue of online safety is not solved through technology alone, and young people deserve a more intentional approach to protecting their safety and privacy online—not this lazy strategy that causes more harm than it solves. 

Rather than weakening rights for already vulnerable communities online, politicians must acknowledge these shortcomings and explore less invasive approaches to protecting all people from online harms. We encourage politicians to pursue what is best, not what is easy; in the meantime, we’ll continue fighting for the rights of all internet users in 2026.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

Paige Collings

Defending Access to Abortion Information Online: 2025 in Review

1 week 5 days ago

As reproductive rights face growing attacks globally, access to content about reproductive healthcare and abortion online has never been more critical. The internet has essential information on topics like where and how to access care, links to abortion funds, and guidance on ways to navigate potential legal risks. Reproductive rights activists use the internet to organize and build community, and healthcare providers rely on it to distribute accurate information to people in need. And for those living in one of the 20+ states where abortion is banned or heavily restricted, the internet is often the only place to find these potentially life-saving resources.  

Nonetheless, both the government and private platforms are increasingly censoring abortion-related speech, at a time when we need it most. Anti-abortion legislators are actively trying to pass laws to limit online speech about abortion, making it harder to share critical resources, discuss legal options, seek safe care, and advocate for reproductive rights. At the same time, social media platforms have increasingly cracked down on abortion-related content, leading to the suppression, shadow-banning, and outright removal of posts and accounts.  

As defenders of free expression and access to information online, we have a role to play in understanding where and how this is happening, shining a light on practices that endanger these rights, and taking action to ensure they’re protected. This year, we worked tirelessly to fight censorship of abortion-related information online—whether it originated from the largest social media platforms or the largest state in the U.S.   

Exposing Social Media Censorship 

At the start of 2025, we launched the #StopCensoringAbortion campaign to collect and spotlight the growing number of stories from users that have had abortion-related content censored by social media platforms. Our goal was to better understand how and why this is happening, raise awareness, and hold the platforms accountable.  

Thanks to nearly 100 submissions from educators, advocates, clinics, researchers, and influencers around the world, we confirmed what many already suspected: this speech is being removed and restricted by platforms at an alarming rate. Across the submissions we received, we saw a pattern of overenforcement, lack of transparency, and arbitrary moderation decisions aimed at reproductive health and reproductive justice advocates.  

Notably, almost none of the submissions we reviewed actually violated the platforms’ stated policies. The most common reason Meta gave for removing abortion-related content was that it violated policies on Restricted Goods and Services, which prohibit any “attempts to buy, sell, trade, donate, gift or ask for pharmaceutical drugs.” But the content being removed wasn’t selling medications. Most of the censored posts simply provided factual, educational information—content that’s expressly allowed by Meta.  

In a month-long 10-part series, we broke down our findings. We examined the trends we saw, including stories of individuals and organizations who needed to rely on internal connections at Meta to get wrongfully censored posts restored, examples of account suspensions without sufficient warnings, and an exploration of Meta policies and how they are wrongly applied. We provided practical tips for users to protect their posts from being removed, and we called on platforms to adopt steps to ensure transparency, a functional appeals process, more human review of posts, and consistent and fair enforcement of rules.  

Social media platforms have a First Amendment right to curate the content on their sites—they can remove whatever content they want—and we recognize that. But companies like Meta claim they care about free speech, and their policies explicitly claim to allow educational information and discussions about abortion. We think they have a duty to live up to those promises. Our #StopCensoringAbortion campaign clearly shows that this isn’t happening and underscores the urgent need for platforms to review and consistently enforce their policies fairly and transparently.  

Combating Legislative Attacks on Free Speech  

On top of platform censorship, lawmakers are trying to police what people can say and see about abortion online. So in 2025, we also fought against censorship of abortion information on the legislative front.  

EFF opposed Texas Senate Bill (S.B.) 2880, which would not only outlaw the sale and distribution of abortion pills, but also make it illegal to “provide information” on how to obtain an abortion-inducing drug. Simply having an online conversation about mifepristone or exchanging emails about it could run afoul of the law.  

On top of going after online speakers who create and post content themselves, the bill also targeted social media platforms, websites, email services, messaging apps, and any other “interactive computer service” simply for hosting or making that content available. This was a clear attempt by Texas legislators to keep people from learning about abortion drugs, or even knowing that they exist, by wiping this information from the internet altogether.  

We laid out the glaring free-speech issues with S.B. 2880 and explained how the consequences would be dire if passed. And we asked everyone who cares about free speech to urge lawmakers to oppose this bill, and others like it. Fortunately, these concerns were heard, and the bill never became law.

Our team also spent much of the year fighting dangerous age verification legislation, often touted as “child safety” bills, at both the federal and state level. We raised the alarm on how age verification laws pose significant challenges for users trying to access critical content—including vital information about sexual and reproductive health. By age-gating the internet, these laws could result in websites requiring users to submit identification before accessing information about abortion or reproductive healthcare. This undermines the ability to remain private and anonymous while searching for abortion information online. 

Protecting Life-Saving Information Online 

Abortion information saves lives, and the internet is a primary (and sometimes only) source where people can access it.  

As attacks on abortion information intensify, EFF will continue to fight so that users can post, host, and access abortion-related content without fear of being silenced. We’ll keep pushing for greater accountability from social media platforms and fighting against harmful legislation aimed at censoring these vital resources. The fight is far from over, but we will remain steadfast in ensuring that everyone, regardless of where they live, can access life-saving information and make informed decisions about their health and rights.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

Jennifer Pinsof

EFF, Open Rights Group, Big Brother Watch, and Index on Censorship Call on UK Government to Reform or Repeal Online Safety Act

1 week 5 days ago

Since the Online Safety Act took effect in late July, UK internet users have made it very clear to their politicians that they do not want anything to do with this censorship regime. Just days after age checks came into effect, VPN apps became the most downloaded on Apple's App Store in the UK, and a petition calling for the repeal of the Online Safety Act (OSA) hit over 400,000 signatures. 

In the months since, more than 550,000 people have petitioned Parliament to repeal or reform the Online Safety Act, making it one of the largest public expressions of concern about a UK digital law in recent history. The OSA has galvanized swathes of the UK population, and it’s high time for politicians to take that seriously. 

Last week, EFF joined Open Rights Group, Big Brother Watch, and Index on Censorship in sending a briefing to UK politicians urging them to listen to their constituents and reform or repeal the Online Safety Act ahead of this week’s Parliamentary petition debate on 15 December.

The legislation is a threat to user privacy, restricts free expression by arbitrating speech online, exposes users to algorithmic discrimination through face checks, and effectively blocks millions of people without a personal device or form of ID from accessing the internet. The briefing highlights how, in the months since the OSA came into effect, we have seen the legislation:

  1. Make it harder for not-for-profits and community groups to run their own websites. 
  2. Result in the wrong types of content being taken down.
  3. Lead to age assurance being applied widely to all sorts of content.

Our briefing continues:

“Those raising concerns about the Online Safety Act are not opposing child safety. They are asking for a law that does both: protects children and respects fundamental rights, including children’s own freedom of expression rights.”

The petition shows that hundreds of thousands of people feel the current Act tilts too far, creating unnecessary risks for free expression and ordinary online life. With sensible adjustments, Parliament can restore confidence that online safety and freedom of expression rights can coexist.

If the UK really wants to achieve its goal of being the safest place in the world to go online, it must lead the way in introducing policies that actually protect all users—including children—rather than pushing the enforcement of legislation that harms the very people it was meant to protect.

Read the briefing in full here.

Update, 17 Dec 2025: This article was edited to include the word reform alongside repeal. 

Paige Collings