Online Gaming’s Final Boss: The Copyright Bully


Since the earliest days of computer games, people have tinkered with the software to customize their own experiences or share their vision with others. From the dad who changed a game’s male protagonist to a girl so his daughter could see herself in it, to the developers who got their start in modding, games have been a medium where you don’t just consume a product; you participate in and interact with culture.

For decades, that participatory experience was a key part of one of the longest-running video games still in operation: EverQuest. Players had the official client, acquired lawfully from EverQuest’s developers, and modders figured out how to enable those clients to communicate with their own servers and then modify their play experience, creating new communities along the way.

EverQuest’s copyright owners implicitly blessed all this. But the current owner, a private equity firm called Daybreak, wants to end that independent creativity. It is using copyright claims to threaten modders who want to customize the EverQuest experience to suit a different playstyle, running their own servers where things work the way they want.

One project in particular is in Daybreak’s crosshairs: “The Hero’s Journey” (THJ). Daybreak claims THJ has infringed its copyrights in EverQuest visuals and characters, cutting into its bottom line.

Ordinarily, when a company wants to remedy some actual harm, its lawyers will start with a cease-and-desist letter and potentially pursue a settlement. But if the goal is intimidation, a rightsholder is free to go directly to federal court and file a complaint. That’s exactly what Daybreak did, using that shock-and-awe approach to cow not only The Hero’s Journey team, but unrelated modders as well.

Daybreak’s complaint seems to have dazzled the judge in the case by presenting side-by-side images of dragons and characters that look identical in the base game and when using the mod, without explaining that these images are the ones provided by EverQuest’s official client, which players have lawfully downloaded from the official source. The judge wound up short-cutting the copyright analysis and issuing a ruling that has proven devastating to the thousands of players who are part of EverQuest modding communities.

Daybreak and the developers of The Hero’s Journey are now in private arbitration, and Daybreak has wasted no time in sending that initial ruling to other modders. The order doesn’t bind anyone who’s unaffiliated with The Hero’s Journey, but it’s understandable that modders who are in it for fun and community would cave to the implied threat that they could be next.

As a result, dozens of fan servers have stopped operating. Daybreak has also persuaded the maintainers of the shared server emulation software that most fan servers rely upon, EQEmulator, to adopt terms of service that essentially ban any but the most negligible modding. The terms also provide that “your operation of an EQEmulator server is subject to Daybreak’s permission, which it may revoke for any reason or no reason at any time, without any liability to you or any other person or entity. You agree to fully and immediately comply with any demand from Daybreak to modify, restrict, or shut down any EQEmulator server.” 

This is sadly not even an uncommon story in fanspaces—from the dustup over changes to the Dungeons and Dragons open gaming license to the “guidelines” issued by CBS for Star Trek fan films, we see new generations of owners deciding to alienate their most avid fans in exchange for more control over their new property. It often seems counterintuitive—fans are creating new experiences, for free, that encourage others to get interested in the original work.

Daybreak can claim a shameful victory: it has imposed unilateral terms on the modding community that are far more restrictive than what fair use and other user rights would allow. In the process, it is alienating the very people it should want to cultivate as customers: hardcore EverQuest fans. If it wants fans to continue investing in making its games appeal to broader audiences, serve as testbeds for game development, and generate goodwill, it needs to give the game’s fans room to breathe and to play.

If you’ve been a target of Daybreak’s legal bullying, we’d love to hear from you; email us at info@eff.org.

Corynne McSherry

Speaking Freely: Sami Ben Gharbia


Interviewer: Jillian York

Sami Ben Gharbia is a Tunisian human rights campaigner, blogger, writer and freedom of expression advocate. He founded Global Voices Advocacy, and is the co-founder and current publisher of the collective media organization Nawaat, which won the EFF Award in 2011.

Jillian York: So first, what is your personal definition, or how do you conceptualize freedom of expression?

Sami Ben Gharbia: So for me, freedom of expression is mainly about being human. I love the definition that Arab philosophers gave to human beings; we call it the “speaking animal”. That’s the definition in logic, the science of logic meditated on by the Greeks, which defines a human being as a speaking animal. Later on, Descartes, the French philosopher, described it with the cogito: I think, therefore I am. So the act of speaking is an act of thinking, and it’s what makes us human. That’s what I love about this definition of freedom of expression, because it’s the condition, the bottom line, of our being human.

JY: I love that. Is that something that you learned about growing up?

SBG: You mean, like, reading it or living?

JY: Yeah, how did you come to this knowledge?

SBG: I read a little bit of logic, the science of logic, and this is the definition that the Arabs give to what a human being is; to differentiate us from plants or animals, or, I don’t know, rocks, et cetera. So humans are speaking animals.

JY: Oh, that's beautiful. 

SBG: And by speaking, it's in the Arabic definition of the word speaking, it's thinking. It's equal to thinking. 

JY: At what point, growing up, did you realize…what was the turning point for you growing up in Tunisia and realizing that protecting freedom of expression was important?

SBG: Oh, I think, I was born in 1967 and I grew up under the authoritarian regime of the “father” of the Tunisian nation, Bourguiba, the first president of Tunisia, who won us independence from France. During the 80s, it was very hard to find even books that spoke about philosophy, ideology, nationalism, Islamism, Marxism, etc. To us, almost everything was forbidden. You needed to hide the books that you smuggled from France or from libraries in other cities. You always hid what you were reading because you did not want to expose your identity, as someone who is politically engaged or an activist. From that point, I realized how important freedom of expression is, because if you are not allowed even to read or buy or exchange books deemed controversial or politically unacceptable under an authoritarian regime, then the fight for freedom of expression should be at the forefront of any other fight. That’s the fight we need to engage in in order to secure other rights and freedoms.

JY: You speak a number of languages, at what point did you start reading and exploring other languages than the one that you grew up speaking?

SBG: Oh, well, we learn Arabic, French and English in school, in primary school and secondary school, so these are the languages we take from school, from our readings, and from interaction with other people in Tunisia. But my first experience living in a country that spoke another language I didn’t know was in Iran. I spent, in total, one and a half years there, where I started to learn a fourth language that I really intended to use. It’s not a Latin language; it is a special language, although it uses almost the same letters and alphabet as Arabic, with some differences in pronunciation and writing. But it was easy for an Arabic-speaking native Tunisian to learn Farsi, due to the familiarity with the alphabet and the pronunciation of most of its letters. So that’s the first case where I was confronted with a foreign language: it was in Iran. And then during my exile in the Netherlands, I was confronted by another family of languages, Dutch, from the family of Germanic languages, and that’s the fifth language that I learned in the Netherlands.

JY: Wow. And how do you feel that language relates to expression? For you?

SBG: I mean…language, it’s another world. It’s another universe. Because language carries culture, carries knowledge, carries history, customs. So it’s a universe that is living. And once you learn to speak a new language, you actually embrace another culture. You become more open in understanding and accepting the differences between cultures, and I think that’s how it makes your openness much more elastic. You accept other cultures more, other identities, and then you are not afraid anymore. You’re not scared anymore of other identities, let’s say, because I think the problem of civilizational crisis or conflict starts from ignorance—like we don’t know the others, we don’t know the language, we don’t know the customs, the culture, the heritage, the history. That’s why we are scared of other people. So language is the first, let’s say, window to other identities and to the acceptance of other people.

JY: And how many languages do you speak now?

SBG: Oh, well, I don't know. Five for sure, but since I moved into exile a second time, now to Spain, I started learning Spanish, and I've been traveling a lot in Italy, so I started learning some Italian. But it is confusing, because both are Latin languages and they share a lot of words. It is funny, though. I'm not that young anymore; I'm 58 years old, so it's not easy for someone my age to learn a new language quickly, especially when you are confused between languages from the same Latin family.

JY: Oh, that's beautiful, though. I love that. All right, now I want to dig into the history of [2011 EFF Award winner] Nawaat. How did it start?

SBG: So Nawaat started as a forum in the early 2000s, even before the phenomenon of blogs. Blogs started later, maybe 2003-4, when they became the main tools for expression. Before that, we had forums where people debated ideas, anything. So it started as a forum, multiple forums hosted on the same domain name, which is Nawaat.org, and little by little we adopted new technology. We migrated the database from the forum to a CMS, built a new website, and then started building the site as a collective blog where people could express themselves freely, in a political context where, as in many other countries, a lot of people expressed themselves through online platforms because they were not allowed to express themselves freely through television or radio or newspapers or magazines in their own country.

So it started mainly as an exiled media. It wasn't journalistically oriented or rooted in journalism. It was more of a platform to give voice to the diaspora, mainly the exiled Tunisian diaspora living in France and in England and elsewhere. So we published human rights reports, released news about the situation in Tunisia, supported the opposition in Tunisia, and produced videos to counter the propaganda machine of the former President Ben Ali, etc. So that's how it started, and it evolved little by little with the changes in the tech industry, from forums to blogs, then to a CMS, and then later on to social media accounts and pages. And why we created it: that was not my decision alone. It was with a friend of mine; we were living in exile, and we said, “why not start a new platform to support the opposition and this movement in Tunisia?” And that's how we did it. At first, it was fun, like a hobby. It wasn't our work; I was working somewhere else, and he was working somewhere else. It was our, let's say, hobby or pastime. And little by little, it became our only job, actually.

JY: And then, okay, so let's come to 2011. I want to hear now your perspective 14 years later. What role do you really feel that the internet played in Tunisia in 2011?

SBG: Well, it was a hybrid tool for liberation, etc. We know the context of the internet freedom policy from the US; we know the evolution of Western interference within the digital sphere to topple governments deemed unfriendly, etc. Tunisia was a friend of the West, very friendly with France and the United States and Europe. They loved the dictatorship in Tunisia, in a way, because it secured the border. It secured the country from, by then, the Islamist movement, et cetera. So the internet did play a role as a platform to spread information, to highlight the human rights abuses taking place in Tunisia, and to counter the narrative that was being manipulated by the state: the government agencies, the public broadcast channel, the television news agency, etc.

And I think we managed it. The big impact of the internet, the blogs back then, and platforms like ours was that we adopted English. It was the first time that the Tunisian opposition used English in its discourse, with the objective of bridging the gap between the traditional support for the opposition and human rights in Tunisia, which was mainly coming from French NGOs and human rights organizations, and international support, support not coming only from the traditional usual suspects of Human Rights Watch, Amnesty International, Freedom House, et cetera. We wanted to broaden the spectrum of support and to reach researchers, activists, and people who were writing about freedom elsewhere. So we managed to break the traditional chain of support between human rights movements or organizations and human rights activists in Tunisia; we broadened it and reached other audiences that were not really in touch with what was going on in Tunisia. I think that's how the internet helped in the field of international support for the struggle in Tunisia and within Tunisia.

The impact was, I think, important in raising awareness about human rights abuses in the country. For people who were not really politically knowledgeable about the situation, due to the censorship and the lack of access to information in Tunisia, the internet helped spread knowledge about the situation and helped speed the process of the unrest, actually. So I think these are the two most important impacts within the country: broadening the spectrum of the people reached and targeted by the discourse of political engagement and activism, and speeding the process of consciousness and then the action in the street. So this is how I think the internet helped. That's great, but it wasn't the main tool. I mean, the main tool was really people on the ground, and maybe people who didn't have access to the internet at all.

JY: That makes sense. So what about the other work that you were doing around that time with the Arabloggers meetings and Global Voices and the Arab Techies network. Tell us about that.

SBG: Okay, so my position was founding director of Global Voices Advocacy; I was hired to found this arm of advocacy within Global Voices. And that gave me the opportunity to understand other spheres, linguistic spheres, cultural spheres. It was beyond Tunisia, beyond the Arab world and the region. I was in touch with activists from all over the world. By activists, I mean digital activists, bloggers living in Latin America or in Asia or in Eastern Europe, et cetera, because one of the projects I worked on was Threatened Voices, a map of all the people who were targeted because of their online activities. That gave me the opportunity to get in touch with a lot of activists.

And then we organized the first advocacy meeting. It was in Budapest, and we managed to invite like 40 or 50 activists from all over the world, from China, Hong Kong, Latin America, the Arab world, Eastern Europe, and Africa. And that broadened my understanding of the freedom of expression movement and how technology is being used to foster human rights online, and then the development of blog aggregators in the world, and mainly in the Arab world, like, each country had its own blog aggregator. That helped me understand those worlds, as did Global Voices. Because Global Voices was bridging the gap between what is being written elsewhere, through the translation effort of Global Voices to the English speaking world and vice versa, and the role played by Global Voices and Global Voices Advocacy made the space and the distance between all those blogospheres feel very diminished. We were very close to the blogosphere movement in Egypt or in Morocco or in Syria and elsewhere. 

And that's how Alaa Abd El Fattah, Manal Bahey El-Din Hassan, and myself started thinking about how to establish the Arab Techies collective, because of a gap we identified: there was a lack of communication between pure techies, the people writing code, building software, translating tools and even online language into Arabic, and the people using those tools: the bloggers, the freedom of expression advocates, et cetera. And because some needs were not really being met in terms of technology, we thought that bringing these two worlds together, techies and activists, would help us build new tools, translate new tools, and make tools available to the broader community of internet activists. That's how the Arab Techies collective was born in Cairo, and then we organized the Arabloggers meetings, twice in Beirut and then a third time in Tunisia, after the revolution.

It was a moment for us, because I think it was the first time, in Beirut, that we brought together bloggers from all the Arab countries. It was like a dream that at a certain point was really unimaginable, but we made it happen. And then what they call the Arab revolution happened, and we lost contact with each other, because everybody was really busy with his or her own country's affairs. Alaa was fully engaged in Egypt; myself, I came back to Tunisia and was fully engaged in Tunisia. So we lost contact, because all of us were facing a lot of trouble in our own countries. Of the bloggers who attended the Arab bloggers meetings, a few were arrested, a few were killed; Bassel was in prison; people were in exile. So we lost that connection and those conferences that brought us together, but then we've seen SMEX filling that gap and taking over the work that was started by the Arab Techies and the Arab bloggers conference.

JY: We did have the fourth one in 2014 in Amman. But it was not the same. Okay, moving forward, EFF recently published this blog post reflecting on what had just happened to Nawaat, when you and I were in Beirut together a few weeks ago. Can you tell me what happened?

SBG: What happened is that they froze the work of Nawaat, nominally legally, although the move wasn't legal, because we were respecting the law in Tunisia. They stopped the activity of Nawaat for one month, according to an article of the NGO legal framework under which the government can stop the work of an NGO if it doesn't respect certain legal conditions. For them, Nawaat didn't provide documentation that was requested by the government, which is a total lie, because we always submit all documentation to the government on time. So they stopped us from doing our job, at what we call in Tunisia an associated media.

It's not a company, it's not a business, it's not a startup. It is an NGO that manages the website and the media, and now it has other activities. We have the main online website, but we also have a festival, a three-day festival at our headquarters. We have offline debates; we bring actors, civil society, activists, and politicians together to discuss important issues in Tunisia. We have a quality print magazine that is distributed and sold in Tunisia. We have a media innovation incubation program where we support people building projects through journalism and technology. So we have a set of offline projects that stopped for a month, and we also stopped publishing anything on the website and all our social media accounts. And we're not the only one. They also froze the work of other NGOs, like the Tunisian Association of Democratic Women, which gives real support to women in Tunisia; the Tunisian Forum for Social and Economic Rights, a very important NGO supporting grassroots movements in Tunisia; and Aswat Nissa, another NGO supporting women in Tunisia. So they targeted impactful NGOs.

So now what? It's not an exception, and we are very grateful for the wave of support that we got from fellow Tunisian citizens, and also from friendly NGOs like EFF and others who wrote about the case. This is the context in which we are living, and we are afraid that they will go for an outright ban of the organization in the future. This is the worst-case scenario that we are preparing ourselves for, and we might face the fate of seeing it close its doors and stop all offline activities taking place in Tunisia. Of course, the website will remain. We need to find a way to keep on producing, although it will be really risky for our on-the-ground journalists and video reporters and newsroom team, but we need to find a solution to keep the website alive. Going back to being an exiled media is a very probable scenario and approach in the future, so we might return to our exile-media model, and we will keep on fighting.

JY: Yes, of course. I'm going to ask the final question. We always ask who someone’s free speech hero is, but I’m going to frame it differently for you, because you're somebody who influenced a lot of the way that I think about these topics. And so who's someone that has inspired you or influenced your work?

SBG: Although I started before the launch of WikiLeaks, for me Julian Assange was the concretization of the radical transparency movement that we saw. And for me, he is one of the heroes who really shaped a decade of transparency journalism and impacted not only the journalism industry itself but even the established mainstream media, such as the New York Times, the Washington Post, Der Spiegel, et cetera. WikiLeaks partnered with big media, but not only with big media; also with small, independent newsrooms in the Global South. So for me, Julian Assange is an icon that we shouldn't forget. And he is an inspiration in the way he used technology to fight against big tech, states, spy agencies, and war crimes.

Jillian C. York

Fair Use is a Right. Ignoring It Has Consequences.


Fair use is not just an excuse to copy—it’s a pillar of online speech protection, and disregarding it in order to lash out at a critic should have serious consequences. That’s what we told a federal court in Channel 781 News v. Waltham Community Access Corporation, our case fighting copyright abuse on behalf of citizen journalists.

Waltham Community Access Corporation (WCAC), a public access cable station in Waltham, Massachusetts, records city council meetings on video. Channel 781 News (Channel 781), a group of volunteers who report on the city council, curates clips from those recordings for its YouTube channel, along with original programming, to spark debate on issues like housing and transportation. WCAC sent a series of takedown notices under the Digital Millennium Copyright Act (DMCA), accusing Channel 781 of copyright infringement. That led to YouTube deactivating Channel 781’s channel just days before a critical municipal election. Represented by EFF and the law firm Brown Rudnick LLP, Channel 781 sued WCAC for misrepresentations in its takedown notices under an important but underutilized provision of the DMCA.

The DMCA gives copyright holders a powerful tool to take down other people’s content from platforms like YouTube. The “notice and takedown” process requires only an email, or filling out a web form, in order to accuse another user of copyright infringement and have their content taken down. And multiple notices typically lead to the target’s account being suspended, because doing so helps the platform avoid liability. There’s no court or referee involved, so anyone can bring an accusation and get a nearly instantaneous takedown.

Of course, that power invites abuse. Because filing a DMCA infringement notice is so easy, there’s a temptation to use it at the drop of a hat to take down speech that someone doesn’t like. To prevent that, before sending a takedown notice, a copyright holder has to consider whether the use they’re complaining about is a fair use. Specifically, the copyright holder needs to form a “good faith belief” that the use is not “authorized by the law,” such as through fair use.

WCAC didn’t do that. They didn’t like Channel 781 posting short clips from city council meetings recorded by WCAC as a way of educating Waltham voters about their elected officials. So WCAC fired off DMCA takedown notices at many of Channel 781’s clips that were posted on YouTube.

WCAC claims they considered fair use, because a staff member watched a video about it and discussed it internally. But WCAC ignored three of the four fair use factors. They ignored that their videos had no creativity, being nothing more than records of public meetings. They ignored that the clips were short, generally including one or two officials’ comments on a single issue. They ignored that the clips caused WCAC no monetary or other harm, beyond wounded pride. And they ignored facts they already knew that are central to the remaining fair use factor: by excerpting and posting the clips with new titles, Channel 781 was putting its own “spin” on the material, in other words, transforming it. All of these facts support fair use.

Instead, WCAC focused only on the fact that the clips they targeted were not altered further or put into a larger program. Looking at just that one aspect of fair use isn’t enough, and changing the fair use inquiry to reach the result they wanted is hardly the way to reach a “good faith belief.”

That’s why we’re asking the court to rule that WCAC’s conduct violated the law and that they should pay damages. Copyright holders need to use the powerful DMCA takedown process with care, and when they don’t, there needs to be consequences.

Mitch Stoltz

Stand Together to Protect Democracy


What a year it’s been. We’ve seen technology unfortunately misused to supercharge the threats facing democracy: dystopian surveillance, attacks on encryption, and government censorship. These aren’t abstract dangers. They’re happening now, to real people, in real time.

EFF’s lawyers, technologists, and activists are pushing back. But we need you in this fight.

JOIN EFF TODAY!

MAKE A YEAR END DONATION—HELP EFF UNLOCK CHALLENGE GRANTS!

If you donate to EFF before the end of 2025, you’ll help fuel the legal battles that defend encryption, the tools that protect privacy, and the advocacy that stops dangerous laws—and you’ll help unlock up to $26,200 in challenge grants. 

📣 Stand Together: That's How We Win 📣

The past year confirmed how urgently we need technologies that protect us, not surveil us. EFF has been in the fight every step of the way, thanks to support from people like you.

Get free gear when you join EFF!

This year alone EFF:

  • Launched a resource hub to help users understand and fight back against age verification laws.
  • Challenged San Jose's unconstitutional license plate reader database in court.
  • Sued to demand answers when ICE-spotting apps were mysteriously taken offline.
  • Launched Rayhunter to detect cell site simulators.
  • Pushed back hard against the EU's Chat Control proposal that would break encryption for millions.

After 35 years of defending digital freedoms, we know what's at stake: we must protect your ability to speak freely, organize safely, and use technology without surveillance.

We have opportunities to win these fights, and you make each victory possible. Donate to EFF by December 31 and help us unlock additional grants this year!

Already an EFF Member? Help Us Spread the Word!

EFF Members have carried the movement for privacy and free expression for decades. You can help move the mission even further! Here’s some sample language that you can share with your networks:


We need to stand together and ensure technology works for us, not against us. Donate any amount to EFF by Dec 31, and you'll help unlock challenge grants! https://eff.org/yec
Bluesky | Facebook | LinkedIn | Mastodon
(more at eff.org/social)

_________________

EFF is a member-supported U.S. 501(c)(3) organization. We’re celebrating TWELVE YEARS of top ratings from the nonprofit watchdog Charity Navigator! Your donation is tax-deductible as allowed by law.

Christian Romero

The Breachies 2025: The Worst, Weirdest, Most Impactful Data Breaches of the Year


Another year has come and gone, and with it, thousands of data breaches that affect millions of people. The question these days is less “Is my information in a data breach this year?” and more “How many data breaches had my information in them this year?”

Some data breaches are more noteworthy than others. While one might affect a small number of people and include little useful information, like a name or email address, another might include data ranging from a potential medical diagnosis to specific location information. To catalog and talk about these breaches, we created the Breachies, a series of tongue-in-cheek awards highlighting the most egregious data breaches.

In most cases, if these companies practiced a privacy-first approach and focused on data minimization, collecting and storing only what they absolutely need to provide the services they promise, many data breaches would be far less harmful to the victims. Instead, companies gobble up as much as they can, store it for as long as possible, and inevitably, at some point, someone decides to poke in and steal that data. Once all that personal data is stolen, it can be used against the breach victims for identity theft, ransomware attacks, and unwanted spam. It has become such a common occurrence that it’s easy to lose track of which breaches affect you and just assume your information is out there somewhere. Still, a few steps can help protect your information.

With that, let’s get to the awards.

The Winners

The Say Something Without Saying Anything Award: Mixpanel

We’ve long warned that apps delivering your personal information to third parties, even those that aren’t the ad networks directly driving surveillance capitalism, present risks and a salient target for hackers. The more widespread your data, the more places attackers can go to find it. Mixpanel, a data analytics company that collects information on users of any app incorporating its SDK, suffered a major breach in November of this year. The service has been used by a wide array of companies, including the Ring doorbell app, which we reported back in 2020 was delivering a trove of information to Mixpanel, and PornHub, which, despite not having worked with the company since 2021, had its historical record of paying subscribers breached.

There’s a lot we still don’t know about this data breach, in large part because the announcement about it is so opaque, leaving reporters with unanswered questions about how many people were affected, whether the hackers demanded a ransom, and whether Mixpanel employee accounts followed standard security best practices. One thing is clear, though: the breach was enough for OpenAI to drop Mixpanel as a provider, disclosing critical details in a blog post that Mixpanel’s own announcement conveniently omitted.

The worst part is that, because Mixpanel is a data analytics company whose libraries are included in a broad range of apps, we can surmise that the vast majority of people affected by this breach have no direct relationship with Mixpanel, and likely didn’t even know their devices were delivering data to the company. These people deserve better than vague statements from companies that profit off of, and apparently fail to adequately secure, their data.

The We Still Told You So Award: Discord

Last year, AU10TIX won our first We Told You So Award because, as we predicted in 2023, age verification mandates would inevitably lead to more data breaches, potentially exposing government IDs as well as information about the sites a user visits. Like clockwork, they did. We knew that first award wouldn’t be the last. 

Unfortunately, there is growing political interest in mandating identity or age verification before allowing people to access social media or adult material. EFF and others oppose these plans because they threaten both speech and privacy.

Nonetheless, this year’s winner of The We Still Told You So Breachies Award is the messaging app Discord. Once known mainly for gaming communities, it now hosts more than 200 million monthly active users and is widely used for fandom and community channels. 

In September of this year, much of Discord’s age verification data was breached — including users’ real names, selfies, ID documents, email and physical addresses, phone numbers, IP addresses, and other contact details or messages provided to customer support. In some cases, “limited billing information” was also accessed—including payment type, the last four digits of credit card numbers, and purchase histories. 

Technically, it wasn’t Discord itself that was hacked: attackers compromised its third-party customer support provider, a company called Zendesk, and used that access to reach Discord’s user data. Either way, it’s Discord users who felt the impact. 

The Tea for Two Award: Tea Dating Advice and TeaOnHer

Speaking of age verification, Tea, the dating safety app for women, had a pretty horrible year for data breaches. The app allows users to anonymously share reviews and safety information about their dates with men, helping keep others safe by noting red flags they saw during their dates.

Since Tea is aimed at women’s safety and dating advice, the app asks new users to upload a selfie or photo ID to verify their identity and gender to create an account. That’s some pretty sensitive information that the app is asking you to trust it with! Back in July, it was reported that 72,000 images had been leaked from the app, including 13,000 images of photo IDs and 59,000 selfies. These photos were found via an exposed database hosted on Google’s mobile app development platform, Firebase. And if that isn’t bad enough, just a week later a second breach exposed private messages between users, including messages with phone numbers, abortion planning, and discussions about cheating partners. This breach included more than 1.1 million messages from early 2023 all the way to mid-2025, just before the breach was reported. Tea released a statement shortly after, temporarily disabling the chat feature.

But wait, there’s more. A completely different app based on the same idea, but for men, also suffered a data breach. TeaOnHer failed to protect similar sensitive data. In August, TechCrunch discovered that user information — including emails, usernames, and yes, those photo IDs and selfies — was accessible through a publicly available web address. Even worse? TechCrunch also found the email address and password the app’s creator uses to access the admin page.

Breaches like this are one of the reasons that EFF shouts from the rooftops against laws that mandate user verification with an ID or selfie. Every company that collects this information becomes a target for data breaches — and if a breach happens, you can’t just change your face. 

The Just Stop Using Tracking Tech Award: Blue Shield of California

Another year, another data breach caused by online tracking tools. 

In April, Blue Shield of California revealed that it had shared 4.7 million people’s health data with Google by misconfiguring Google Analytics on its website. The data, which may have been used for targeted advertising, included people’s names, insurance plan details, medical service providers, and patient financial responsibility. The health insurance company shared this information with Google for nearly three years before realizing its mistake.

If this data breach sounds familiar, it’s because it is: last year’s Just Stop Using Tracking Tech award also went to a healthcare company that leaked patient data through tracking code on its website. Tracking tools remain alarmingly common on healthcare websites, even after years of incidents like this one. These tools are marketed as harmless analytics or marketing solutions, but can expose people’s sensitive data to advertisers and data brokers. 

EFF’s free Privacy Badger extension can block online trackers, but you shouldn’t need an extension to stop companies from harvesting and monetizing your medical data. We need a strong, federal privacy law and ban on online behavioral advertising to eliminate the incentives driving companies to keep surveilling us online. 

The Hacker's Hall Pass Award: PowerSchool

In December 2024, PowerSchool, the largest provider of student information systems in the U.S., exposed sensitive student data to hackers. The breach compromised the personal information of over 60 million students and teachers, including Social Security numbers, medical records, grades, and special education data. Hackers exploited PowerSchool’s weak security, namely stolen credentials for its internal customer support portal, and gained unfettered access to sensitive data stored by school districts across the country.

PowerSchool failed to implement basic security measures like multi-factor authentication, and the breach affected districts nationwide. In Texas alone, over 880,000 individuals’ data was exposed, prompting the state's attorney general to file a lawsuit, accusing PowerSchool of misleading its customers about security practices. Memphis-Shelby County Schools also filed suit, seeking damages for the breach and the cost of recovery.

While PowerSchool paid hackers an undisclosed sum to prevent data from being published, the company’s failure to protect its users’ data raises serious concerns about the security of K-12 educational systems. Adding to the saga, a Massachusetts student, Matthew Lane, pleaded guilty in October to hacking and extorting PowerSchool for $2.85 million in Bitcoin. Lane faces up to 17 years in prison for cyber extortion and aggravated identity theft, a reminder that not all hackers are faceless shadowy figures — sometimes they’re just a college kid.

The Worst. Customer. Service. Ever. Award: TransUnion

Credit reporting giant TransUnion had to notify its customers this year that a hack nabbed the personal information of 4.4 million people. How'd the attackers get in? According to a letter filed with the Maine Attorney General's office obtained by TechCrunch, the problem was a “third-party application serving our U.S. consumer support operations.” That's probably not the kind of support they were looking for. 

TransUnion said in a Texas filing that attackers swept up “customers’ names, dates of birth, and Social Security numbers” in the breach, though it was quick to point out in public statements that the hackers did not access credit reports or “core credit data.” While it certainly could have been worse, this breach highlights the many ways hackers can get their hands on information. Coming in through third parties, the companies that provide software or other services to businesses, is like using an unguarded side door rather than checking in at the front desk. Companies, particularly those that keep sensitive personal information, should lock down customer data at every entry point. After all, their decisions about who they do business with ultimately carry consequences for all of their customers, who have no say in the matter.

The Annual Microsoft Screwed Up Again Award: Microsoft

Microsoft is a company nobody feels neutral about, especially in the infosec world. The myriad software vulnerabilities in Windows, Office, and other Microsoft products over the decades have been a source of frustration, and of great financial reward, for both attackers and defenders. Still, as the saying goes: “nobody ever got fired for buying from Microsoft.” But perhaps the times, they are a-changing. 

In July 2025, it was revealed that a zero-day security vulnerability in Microsoft’s flagship file sharing and collaboration software, SharePoint, had led to the compromise of over 400 organizations, including major corporations and sensitive government agencies such as the National Nuclear Security Administration (NNSA), the federal agency responsible for maintaining and developing the U.S. stockpile of nuclear weapons. The attack was attributed to three different Chinese government-linked hacking groups. Amazingly, days after the vulnerability was first reported, there were still thousands of vulnerable self-hosted SharePoint servers online. 

Zero-days happen to tech companies, large and small. It’s nearly impossible to write even moderately complex software that is bug- and exploit-free, and Microsoft can’t exactly be blamed for having a zero-day in its code. But when one company is the source of so many zero-days, so consistently, for so many years, one must start wondering whether to put all one’s eggs (or data) into a basket that company made. Perhaps if Microsoft’s monopolistic practices had been reined in back in the 1990s, we wouldn’t be in a position today where SharePoint is the de facto file sharing software for so many major organizations. And maybe, just maybe, this is further evidence that tech monopolies and centralization of data aren’t just bad for consumer rights, civil liberties, and the economy, but also for cybersecurity. 

The Silver Globe Award: Flat Earth Sun, Moon & Zodiac

Look, we’ll keep this one short: in October of last year, researchers found security issues in the flat-earther app Flat Earth Sun, Moon & Zodiac. In March of 2025, that breach was confirmed. What’s most notable, aside from the app holding a surprising amount of information (gender, name, email address, and date of birth), is that the leak also included users’ location info, down to latitude and longitude. Huh, interesting.

The I Didn’t Even Know You Had My Information Award: Gravy Analytics

In January, hackers claimed they stole millions of people’s location history from a company that never should’ve had it in the first place: location data broker Gravy Analytics. The data included timestamped location coordinates tied to advertising IDs, which can reveal exceptionally sensitive information. In fact, researchers who reviewed the leaked data found it could be used to identify military personnel and gay people in countries where homosexuality is illegal.

The breach of this sensitive data is bad, but Gravy Analytics’ business model of regularly harvesting and selling it is even worse. Despite the fact that most people have never heard of it, Gravy Analytics has managed to collect location information from a billion phones a day. The company has sold this data to other data brokers, makers of police surveillance tools, and the U.S. government.

How did Gravy Analytics get this location information from people’s phones? The data broker industry is notoriously opaque, but this breach may have revealed some of Gravy Analytics’ sources. The leaked data referenced thousands of apps, including Microsoft apps, Candy Crush, Tinder, Grindr, MyFitnessPal, pregnancy trackers, and religious-focused apps. Many of these app developers said they had no relationship with Gravy Analytics. Instead, expert analysis of the data suggests it was harvested through the advertising ecosystem already connected to most apps. This breach provides further evidence that online behavioral advertising fuels the surveillance industry.

Whether or not they get hacked, location data brokers like Gravy Analytics threaten our privacy and security. Follow EFF’s guide to protecting your location data and help us fight for legislation to dismantle the data broker industry. 

The Keeping Up With My Cybertruck Award: TeslaMate

TeslaMate, a tool meant to track Tesla vehicle data (but which is not owned or operated by Tesla itself), has become a cautionary tale about data security. In August, a security researcher found more than 1,300 self-hosted TeslaMate dashboards were exposed online, leaking sensitive information such as vehicle location, speed, charging habits, and even trip details. In essence, your Cybertruck became the star of its own Keeping Up With My Cybertruck reality show, except the audience wasn’t made up of fans interested in your lifestyle, just random people with access to the internet.

TeslaMate describes itself as “that loyal friend who never forgets anything!” — but its lack of proper security measures makes you wish it would. This breach highlights how easily location data can become a tool for harassment or worse, and the growing need for legislation that specifically protects consumer location data. Without stronger regulations around data privacy, sensitive location details like where you live, work, and travel can easily be accessed by malicious actors, leaving consumers with no recourse.

The Disorder in the Courts Award: PACER

Confidentiality is a core principle in the practice of law. But this year a breach of confidentiality came from an unexpected source: a breach of the federal court filing system. In August, Politico reported that hackers infiltrated the Case Management/Electronic Case Files (CM/ECF) system, which uses the same database as PACER, a searchable public database for court records. Of particular concern? The possibility that the attack exposed the names of confidential informants involved in federal cases from multiple court districts. Courts across the country acted quickly to set up new processes to avoid the possibility of further compromises.

The leak followed a similar incident in 2021 and came on the heels of a warning to Congress that the file system is more than a little creaky. In fact, an IT official from the federal court system told the House Judiciary Committee that both systems are “unsustainable due to cyber risks, and require replacement.”

The Only Stalkers Allowed Award: Catwatchful

Just like last year, a stalkerware company was subject to a data breach that really should prove once and for all that these companies must be stopped. In this case, Catwatchful is an Android spyware company that sells itself as a “child monitoring app.” Like other products in this category, it’s designed to operate covertly while uploading the contents of a victim’s phone, including photos, messages, and location information.

This data breach was particularly harmful: it included not just the email addresses and passwords of the customers who purchased the app to install on a victim’s phone, but also data from the devices of 26,000 victims, which could include photos, messages, and real-time location data.

This was a tough award to decide on because Catwatchful wasn’t the only stalkerware company hit this year. SpyX, Cocospy, and Spyic suffered similar breaches and were all strong contenders. EFF has worked tirelessly to raise the alarm about this sort of software, and this year worked with AV Comparatives to test how well major antivirus apps on Android detect stalkerware.

The Why We’re Still Stuck on Unique Passwords Award: Plex

Every year, we all get a reminder about why using unique passwords for all our accounts is crucial for protecting our online identities. This time around, the award goes to Plex, which experienced a data breach that included customer emails, usernames, and hashed passwords (a fancy way of saying the passwords were scrambled through a one-way algorithm, though it is possible they could still be cracked).
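To make “hashed” concrete, here is a minimal Python sketch of salted password hashing using the standard library’s PBKDF2. It is illustrative only; it says nothing about Plex’s actual scheme, and the password and iteration count are our own placeholders:

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes, iterations: int = 600_000) -> bytes:
    # PBKDF2 runs the password through many rounds of HMAC-SHA256,
    # producing a fixed-size digest that can't be reversed directly.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

salt = os.urandom(16)  # a per-user random salt keeps identical passwords from matching
stored = hash_password("correct horse battery staple", salt)

# Login recomputes the hash and compares in constant time.
assert hmac.compare_digest(stored, hash_password("correct horse battery staple", salt))

# A thief who grabs `stored` can't reverse it, but can still guess-and-check
# common passwords offline, which is why weak or reused passwords stay at risk.
assert not hmac.compare_digest(stored, hash_password("password123", salt))
```

That last point is the takeaway: a breach of hashed passwords is less catastrophic than plaintext, but cracking is just guessing at scale, and a reused password only has to be cracked once to unlock every account that shares it.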

If this all sounds vaguely familiar to you for some reason, that’s because a similar issue also happened to Plex in 2022, affecting 15 million users. Whoops.

This is why it is important to use unique passwords everywhere. A password manager, including one that might be free on your phone or browser, makes this much easier to do. Likewise, credential stuffing, in which attackers try passwords leaked from one site on other sites, illustrates why it’s important to use two-factor authentication. Here’s how to turn that on for your Plex account.

The Uh, Yes, Actually, I Have Been Pwned Award: Troy Hunt’s Mailing List

Troy Hunt, the person behind Have I Been Pwned? and someone with more experience with data breaches than just about anyone, also proved that anyone can be pwned. In a blog post, he details what happened to his mailing list:

You know when you're really jet lagged and really tired and the cogs in your head are just moving that little bit too slow? That's me right now, and the penny has just dropped that a Mailchimp phish has grabbed my credentials, logged into my account and exported the mailing list for this blog.

And he continues later:

I'm enormously frustrated with myself for having fallen for this, and I apologise to anyone on that list. Obviously, watch out for spam or further phishes and check back here or via the social channels in the nav bar above for more.

The whole blog post is worth a read as a reminder that phishing can get anyone, and we thank Troy Hunt for his feedback on this and the other breaches we included this year.

Tips to Protect Yourself

Data breaches are such a common occurrence that it’s easy to feel like there’s nothing you can do, nor any point in trying. But privacy isn’t dead. While some information about you is almost certainly out there, that’s no reason for despair. In fact, it’s a good reason to take action.

There are steps you can take right now with all your online accounts to best protect yourself from the next data breach (and the next, and the next):

  • Use unique passwords on all your online accounts. This is made much easier by using a password manager, which can generate and store those passwords for you. When you have a unique password for every website, a data breach of one site won’t cascade to others.
  • Use two-factor authentication when a service offers it. Two-factor authentication makes your online accounts more secure by requiring additional proof (“factors”) alongside your password when you log in. While two-factor authentication adds another step to the login process, it’s a great way to help keep out anyone not authorized, even if your password is breached.
  • Delete old accounts: Sometimes, you’ll get a data breach notification for an account you haven’t used in years. This can be a nice reminder to delete that account, but it’s better to do so before a data breach happens, when possible. Try to make it a habit to go through and delete old accounts once a year or so. 
  • Freeze your credit. Many experts recommend freezing your credit with the major credit bureaus as a way to protect against the sort of identity theft that’s made possible by some data breaches. Freezing your credit prevents someone from opening a new line of credit in your name without additional information, like a PIN or password, to “unfreeze” the account. And if you have kids, you can freeze their credit too. It might sound absurd considering they can’t even open bank accounts, but it protects them all the same.
  • Keep a close eye out for strange medical bills. With the number of health companies breached this year, it’s also a good idea to watch for healthcare fraud. The Federal Trade Commission recommends watching for strange bills, letters from your health insurance company for services you didn’t receive, and letters from debt collectors claiming you owe money. 
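For the curious, the six-digit codes that authenticator apps generate for two-factor authentication are usually time-based one-time passwords (TOTP, RFC 6238). This minimal Python sketch is for illustration only, not any particular vendor’s implementation; it shows how a code is derived from a shared secret and the current time:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, at_time=None, digits: int = 6, step: int = 30) -> str:
    """Derive an RFC 6238 time-based one-time password from a shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The "moving factor" is the number of 30-second steps since the Unix epoch.
    counter = int((time.time() if at_time is None else at_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes based on the last nibble.
    offset = digest[-1] & 0x0F
    number = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(number % (10 ** digits)).zfill(digits)

# RFC 6238 test vector: the ASCII secret "12345678901234567890", base32-encoded.
secret = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(secret, at_time=59, digits=8))  # prints "94287082" per the RFC's test table
```

Because each code is valid only for a short window, a password leaked in a breach isn’t enough on its own to log in to an account protected this way.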

(Dis)Honorable Mentions

According to one report, 2025 had already seen 2,563 data breaches by October, which puts the year on track to be one of the worst by the sheer number of breaches.

We did not investigate every one of these 2,500-plus data breaches, but we looked at a lot of them, including the news coverage and the data breach notification letters that many state Attorney General offices host on their websites. We can’t award the coveted Breachies Award to every company that was breached this year. Still, here are some (dis)honorable mentions we wanted to highlight:

Salesforce, F5, Oracle, WorkComposer, Raw, Stiizy, Ohio Medical Alliance LLC, Hello Cake, Lovense, Kettering Health, LexisNexis, WhatsApp, Nexar, McDonalds, Congressional Budget Office, Doordash, Louis Vuitton, Adidas, Columbia University, Hertz, HCRG Care Group, Lexipol, Color Dating, Workday, Aflac, and Coinbase. And a special nod to last minute entrants Home Depot, 700Credit, and Petco.

What now? Companies need to do a better job of only collecting the information they need to operate, and properly securing what they store. Also, the U.S. needs to pass comprehensive privacy protections. At the very least, we need to be able to sue companies when these sorts of breaches happen (and while we’re at it, it’d be nice if we got more than $5.21 checks in the mail). EFF has long advocated for a strong federal privacy law that includes a private right of action.

Thorin Klosowski

🪪 Age Verification Is Coming for the Internet | EFFector 37.18

3 days 7 hours ago

The final EFFector of 2025 is here! Just in time to keep you up-to-date on the latest happenings in the fight for privacy and free speech online.

In this latest issue, we're sharing how to spot sneaky ALPR cameras at the U.S. border, covering a host of new resources on age verification laws, and explaining why AI companies need to protect chatbot logs from bulk surveillance.

Prefer to listen in? Check out our audio companion, where EFF Activist Molly Buckley walks through our new resource on age verification laws and how you can fight back. Catch the conversation on YouTube or the Internet Archive.

LISTEN TO EFFECTOR

EFFECTOR 37.18 - 🪪 AGE VERIFICATION IS COMING FOR THE INTERNET

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero

Trends to Watch in the California Legislature

4 days 3 hours ago

If you’re a Californian, there are a few new state laws that you should know will be going into effect in the new year. EFF has worked hard in Sacramento this session to advance bills that protect privacy, fight surveillance, and promote transparency.

California’s legislature runs in a two-year cycle, meaning that it’s currently halftime for legislators. As we prepare for the next year of the California legislative session in January, it’s a good time to showcase what’s happened so far—and what’s left to do.

Wins Worth Celebrating

In a win for every Californian’s privacy rights, we were happy to support A.B. 566 (Assemblymember Josh Lowenthal). This is a common-sense law that makes California’s main consumer data privacy law, the California Consumer Privacy Act, more user-friendly. It requires that browsers support people’s rights to send opt-out signals, such as the global opt-out in Privacy Badger, to businesses. Managing your privacy as an individual can be a hard job, and EFF wants stronger laws that make it easier for you to do so.

Additionally, we were proud to advance government transparency by supporting A.B. 1524 (Judiciary Committee), which allows members of the public to make copies of public court records using their own devices, such as cell-phone cameras and overhead document scanners, without paying fees.

We also supported two bills that will improve law enforcement accountability at a time when we desperately need it. S.B. 627 (Senator Scott Wiener) prohibits law enforcement officers from wearing masks to avoid accountability. (The Trump administration has sued California over this law.) We also supported S.B. 524 (Sen. Jesse Arreguín), which requires law enforcement to disclose when a police report was written using artificial intelligence.

On the To-Do List for Next Year

On the flip side, we also stopped some problematic bills from becoming law. This includes S.B. 690 (Sen. Anna Caballero), which we dubbed the Corporate Coverup Act. This bill would have gutted California’s wiretapping statute by allowing businesses to ignore those privacy rights for “any business purpose.” Working with several coalition partners, we were able to keep that bill from moving forward in 2025. We do expect to see it come back in 2026, and are ready to fight back against those corporate business interests.

And, of course, not every fight ended in victory. There are still many areas where we have work left to do. California Governor Gavin Newsom vetoed a bill we supported, S.B. 7, which would have given workers in California greater transparency into how their employers use artificial intelligence and was sponsored by the California Federation of Labor Unions. S.B. 7 was vetoed in response to concerns from companies including Uber and Lyft, but we expect to continue working with the labor community on the ways AI affects the workplace in 2026.

Trends of Note

California continued a troubling years-long trend of lawmakers pushing problematic proposals that would require every internet user to verify their age to access information—often by relying on privacy-invasive methods to do so. Earlier this year EFF sent a letter to the California legislature expressing grave concerns with lawmakers’ approach to regulating young people’s ability to speak online. We continue to raise these concerns, and would welcome working with any lawmaker in California on a better solution.

We also continue to keep a close eye on government data sharing. On this front, there is some good news. Several of the bills we supported this year sought to place needed safeguards on the ways various government agencies in California share data. These include: A.B. 82 (Asm. Chris Ward) and S.B. 497 (Wiener), which would add privacy protections to data collected by the state about those who may be receiving gender-affirming or reproductive health care; A.B. 1303 (Asm. Avelino Valencia), which prohibits warrantless data sharing from California’s low-income broadband program to immigration and other government officials; and S.B. 635 (Sen. Maria Elena Durazo), which places similar limits on data collected from sidewalk vendors.

We are also heartened to see California correct course on broad government data sharing. Last session, we opposed A.B. 518 (Asm. Buffy Wicks), which let state agencies ignore existing state privacy law to allow broader information sharing about people eligible for CalFresh—the state’s federally funded food assistance program. As we’ve seen, the federal government has since sought data from food assistance programs to use for other purposes. We were happy to have instead supported A.B. 593 this year, also authored by Asm. Wicks—which reversed course on that data sharing.

We hope to see this attention to the harms of careless government data sharing continue. EFF’s sponsored bill this year, A.B. 1337, would update and extend vital privacy safeguards present at the state agency level to counties and cities. These local entities today collect enormous amounts of data and administer programs that weren’t contemplated when the original law was written in 1977. That information should be held to strong privacy standards.

We’ve been fortunate to work with Asm. Chris Ward, who is also the chair of the LGBTQ Caucus in the legislature, on that bill. The bill stalled in the Senate Judiciary Committee during the 2025 legislative session, but we plan to bring it back in the next session with a renewed sense of urgency.

Hayley Tsukayama

EFF, Open Rights Group, Big Brother Watch, and Index on Censorship Call on UK Government to Reform or Repeal Online Safety Act

4 days 12 hours ago

Since the Online Safety Act took effect in late July, UK internet users have made it very clear to their politicians that they do not want anything to do with this censorship regime. Just days after age checks came into effect, VPN apps became the most downloaded on Apple's App Store in the UK, and a petition calling for the repeal of the Online Safety Act (OSA) hit over 400,000 signatures. 

In the months since, more than 550,000 people have petitioned Parliament to repeal or reform the Online Safety Act, making it one of the largest public expressions of concern about a UK digital law in recent history. The OSA has galvanized swathes of the UK population, and it’s high time for politicians to take that seriously. 

Last week, EFF joined Open Rights Group, Big Brother Watch, and Index on Censorship in sending a briefing to UK politicians urging them to listen to their constituents and reform or repeal the Online Safety Act ahead of this week’s Parliamentary petition debate on 15 December.

The legislation is a threat to user privacy, restricts free expression by arbitrating speech online, exposes users to algorithmic discrimination through face checks, and effectively blocks millions of people without a personal device or form of ID from accessing the internet. The briefing highlights how, in the months since the OSA came into effect, we have seen the legislation:

  1. Make it harder for not-for-profits and community groups to run their own websites. 
  2. Result in the wrong types of content being taken down.
  3. Lead to age-assurance being applied widely to all sorts of content.

Our briefing continues:

“Those raising concerns about the Online Safety Act are not opposing child safety. They are asking for a law that does both: protects children and respects fundamental rights, including children’s own freedom of expression rights.”

The petition shows that hundreds of thousands of people feel the current Act tilts too far, creating unnecessary risks for free expression and ordinary online life. With sensible adjustments, Parliament can restore confidence that online safety and freedom of expression rights can coexist.

If the UK really wants to achieve its goal of being the safest place in the world to go online, it must lead the way in introducing policies that actually protect all users—including children—rather than pushing the enforcement of legislation that harms the very people it was meant to protect.

Read the briefing in full here.

Update, 17 Dec 2025: this article was edited to include the word reform alongside repeal. 

Paige Collings

EFF and 12 Organizations Urge UK Politicians to Drop Digital ID Scheme Ahead of Parliamentary Petition Debate

6 days 13 hours ago

The UK Parliament convened earlier this week to debate a petition signed by 2.9 million people calling for an end to the government’s plans to roll out a national digital ID. Ahead of that debate, EFF and 12 other civil society organizations wrote to politicians in the country urging MPs to reject the Labour government’s newly announced digital ID proposal.

The UK’s Prime Minister Keir Starmer pitched the scheme as a way to “cut the faff” in proving people’s identities by creating a virtual ID on personal devices with information like names, date of birth, nationality, photo, and residency status to verify their right to live and work in the country. 

But the case for digital identification has not been made. 

As we detail in our joint briefing, the proposal follows a troubling global trend: governments introducing expansive digital identity systems that are structurally incompatible with a rights-respecting democracy. The UK’s plan raises six interconnected concerns:

  1. Mission creep
  2. Infringements on privacy rights
  3. Serious security risks
  4. Reliance on inaccurate and unproven technologies
  5. Discrimination and exclusion
  6. The deepening of entrenched power imbalances between the state and the public.

Digital ID schemes don’t simply verify who you are—they redefine who can access services and what those services look like. They become a gatekeeper to essential societal infrastructure, enabling governments and state agencies to close doors as easily as they open them. And they disproportionately harm those already at society’s margins, including people seeking asylum and undocumented communities, who already face heightened surveillance and risk.

Even the strongest recommended safeguards cannot resolve the core problem: a mandatory digital ID scheme that shifts power dramatically away from individuals and toward the state. No one should be coerced—technically or socially—into a digital system in order to participate fully in public life. And at a time when almost 3 million people in the UK have called on politicians to reject this proposal, the government must listen to people and say no to digital ID.

Read our civil society briefing in full here.

Paige Collings

Thousands Tell the Patent Office: Don’t Hide Bad Patents From Review

1 week 1 day ago

A massive wave of public comments just told the U.S. Patent and Trademark Office (USPTO): don’t shut the public out of patent review.

EFF submitted its own formal comment opposing the USPTO’s proposed rules, and more than 4,000 supporters added their voices—an extraordinary response for a technical, fast-moving rulemaking. Together, those comments made up more than one-third of the 11,442 comments submitted. The message is unmistakable: the public wants a meaningful way to challenge bad patents, and the USPTO should not take that away.

The Public Doesn’t Want To Bury Patent Challenges

These thousands of submissions do more than express frustration. They demonstrate overwhelming public interest in preserving inter partes review (IPR), and undermine any broad claim that the USPTO’s proposal reflects public sentiment. 

Comments opposing the rulemaking include many small business owners who have been wrongly accused of patent infringement, by both patent trolls and patent-abusing competitors. They also include computer science experts, law professors, and everyday technology users who are simply tired of patent extortion—abusive assertions of low-quality patents—and the harm it inflicts on their work, their lives, and the broader U.S. economy. 

The USPTO exists to serve the public. The volume and clarity of this response make that expectation impossible to ignore.

EFF’s Comment To USPTO

In our filing, we explained that the proposed rules would make it significantly harder for the public to challenge weak patents. That undercuts the very purpose of IPR. The proposed rules would pressure defendants to give up core legal defenses, allow early or incomplete decisions to block all future challenges, and create new opportunities for patent owners to game timing and shut down PTAB review entirely.

Congress created IPR to allow the Patent Office to correct its own mistakes in a fair, fast, expert forum. These changes would take the system backward. 

A Broad Coalition Supports IPR

A wide range of groups told the USPTO the same thing: don’t cut off access to IPR.

Open Source and Developer Communities 

The Linux Foundation submitted comments warning that the proposed rules “would effectively remove IPRs as a viable mechanism for challenges to patent validity,” harming open-source developers and the users who rely on them. GitHub wrote that the USPTO proposal would increase “litigation risk and costs for developers, startups, and open source projects.” And dozens of individual software developers described how bad patents have burdened their work. 

Patent Law Scholars

A group of 22 patent law professors from universities across the country said the proposed rule changes “would violate the law, increase the cost of innovation, and harm the quality of patents.” 

Patient Advocates

Patients for Affordable Drugs warned in their filing that IPR is critical for invalidating wrongly granted pharmaceutical patents. When such patents are invalidated, studies have shown “cardiovascular medications have fallen 97% in price, cancer drugs dropping 80-98%, and treatments for opioid addiction becom[e] 50% more affordable.” In addition, “these cases involved patents that had evaded meaningful scrutiny in district court.” 

Small Businesses 

Hundreds of small businesses weighed in with a consistent message: these proposed rules would hit them hardest. Owners and engineers described being targeted with vague or overbroad patents they cannot afford to litigate in court, explaining that IPR is often the only realistic way for a small firm to defend itself. The proposed rules would leave them with an impossible choice—pay a patent troll, or spend money they don’t have fighting in federal court. 

What Happens Next

The USPTO now has thousands of comments to review. It should listen. Public participation must be more than a box-checking exercise. It is central to how administrative rulemaking is supposed to work.

Congress created IPR so the public could help correct bad patents without spending millions of dollars in federal court. People across technical, academic, and patient-advocacy communities just reminded the agency why that matters. 

We hope the USPTO reconsiders these proposed rules. Whatever happens, EFF will remain engaged and continue fighting to preserve the public’s ability to challenge bad patents. 

Joe Mullin

Why Isn’t Online Age Verification Just Like Showing Your ID In Person?

1 week 1 day ago

This blog also appears in our Age Verification Resource Hub: our one-stop shop for users seeking to understand what age-gating laws actually do, what’s at stake, how to protect yourself, and why EFF opposes all forms of age verification mandates. Head to EFF.org/Age to explore our resources and join us in the fight for a free, open, private, and yes—safe—internet.

One of the most common refrains we hear from age verification proponents is that online ID checks are nothing new. After all, you show your ID at bars and liquor stores all the time, right? And it’s true that many places age-restrict access in-person to various goods and services, such as tobacco, alcohol, firearms, lottery tickets, and even tattoos and body piercings.

But this comparison falls apart under scrutiny. There are fundamental differences between flashing your ID to a bartender and uploading government documents or biometric data to websites and third-party verification companies. Online age-gating is more invasive, affects far more people, and poses serious risks to privacy, security, and free speech that simply don't exist when you buy a six-pack at the corner store.

Online age verification burdens many more people.

Online age restrictions are imposed on many, many more users than in-person ID checks. Because of the sheer scale of the internet, regulations affecting online content sweep in an enormous number of adults and youth alike, forcing them to disclose sensitive personal data just to access lawful speech, information, and services. 

Additionally, age restrictions in the physical world affect only a limited number of transactions: those involving a narrow set of age-restricted products or services. Typically this entails a bounded interaction about one specific purchase.

Online age verification laws, on the other hand, target a broad range of internet activities and general purpose platforms and services, including social media sites and app stores. And these laws don’t just wall off specific content deemed harmful to minors (like a bookstore would); they age-gate access to websites wholesale. This is akin to requiring ID every time a customer walks into a convenience store, regardless of whether they want to buy candy or alcohol.

There are significant privacy and security risks that don’t exist offline.

In offline, in-person scenarios, a customer typically provides their physical ID to a cashier or clerk directly. Oftentimes, customers need only flash their ID for a quick visual check, and no personal information is uploaded to the internet, transferred to a third-party vendor, or stored. Online age-gating, on the other hand, forces users to upload—not just momentarily display—sensitive personal information to a website in order to gain access to age-restricted content. 

This creates a cascade of privacy and security problems that don’t exist in the physical world. Once sensitive information like a government-issued ID is uploaded to a website or third-party service, there is no guarantee it will be handled securely. You have no direct control over who receives and stores your personal data, where it is sent, or how it may be accessed, used, or leaked outside the immediate verification process. 

Data submitted online rarely just stays between you and one other party. All online data is transmitted through a host of third-party intermediaries, and almost all websites and services also host a network of dozens of private, third-party trackers managed by data brokers, advertisers, and other companies that are constantly collecting data about your browsing activity. The data is shared with or sold to additional third parties and used to target behavioral advertisements. Age verification tools also often rely on third parties just to complete a transaction: a single instance of ID verification might involve two or three different third-party partners, and age estimation services often work directly with data brokers to offer a complete product. Users’ personal identifying data then circulates among these partners. 

All of this increases the likelihood that your data will leak or be misused. Unfortunately, data breaches are an endemic part of modern life, and the sensitive, often immutable, personal data required for age verification is just as susceptible to being breached as any other online data. Age verification companies can be—and already have been—hacked. Once that personal data gets into the wrong hands, victims are vulnerable to targeted attacks both online and off, including fraud and identity theft.

Troublingly, many age verification laws don’t even protect user security by providing a private right of action to sue a company if personal data is breached or misused. This leaves you without a direct remedy should something bad happen. 

Some proponents claim that age estimation is a privacy-preserving alternative to ID-based verification. But age estimation tools still require biometric data collection, often demanding users submit a photo or video of their face to access a site. And again, once submitted, there’s no way for you to verify how that data is processed or stored. Requiring face scans also normalizes pervasive biometric surveillance and creates infrastructure that could easily be repurposed for more invasive tracking. Once we’ve accepted that accessing lawful speech requires submitting our faces for scanning, we’ve crossed a threshold that’s difficult to walk back.

Online age verification creates even bigger barriers to access.

Online age gates create more substantial access barriers than in-person ID checks do. For those concerned about privacy and security, there is no online analog to a quick visual check of your physical ID. Users may be justifiably discouraged from accessing age-gated websites if doing so means uploading personal data and creating a potentially lasting record of their visit to that site.

Given these risks, age verification also imposes barriers to remaining anonymous that don't typically exist in-person. Anonymity can be essential for those wishing to access sensitive, personal, or stigmatized content online. And users have a right to anonymity, which is “an aspect of the freedom of speech protected by the First Amendment.” Even if a law requires data deletion, users must still be confident that every website and online service with access to their data will, in fact, delete it—something that is in no way guaranteed.

In-person ID checks are additionally less likely to wrongfully exclude people due to errors. Online systems that rely on facial scans are often incorrect, especially when applied to users near the legal age of adulthood. These tools are also less accurate for people with Black, Asian, Indigenous, and Southeast Asian backgrounds, for users with disabilities, and for transgender individuals. This leads to discriminatory outcomes and exacerbates harm to already marginalized communities. And while in-person shoppers can speak with a store clerk if issues arise, these online systems often rely on AI models, leaving users who are incorrectly flagged as minors with little recourse to challenge the decision.

In-person interactions may also be less burdensome for adults who don’t have up-to-date ID. An older adult who forgets their ID at home or lacks current identification is not likely to face the same difficulty accessing material in a physical store, since there are usually distinguishing physical differences between young adults and those older than 35. A visual check is often enough. This matters, as a significant portion of the U.S. population does not have access to up-to-date government-issued IDs. This disproportionately affects Black Americans, Hispanic Americans, immigrants, and individuals with disabilities, who are less likely to possess the necessary identification.

We’re talking about First Amendment-protected speech.

It's important not to lose sight of what’s at stake here. The good or service age-gated by these laws isn’t alcohol or cigarettes—it’s First Amendment-protected speech. Whether the target is social media platforms or any other online forum for expression, age verification blocks access to constitutionally-protected content. 

Access to many of these online services is also necessary to participate in the modern economy. While those without ID may function just fine without being able to purchase luxury products like alcohol or tobacco, requiring ID to participate in basic communication technology significantly hinders people’s ability to engage in economic and social life.

This is why it’s wrong to claim online age verification is equivalent to showing ID at a bar or store. This argument handwaves away genuine harms to privacy and security, dismisses barriers to access that will lock millions out of online spaces, and ignores how these systems threaten free expression. Ignoring these threats won’t protect children, but it will compromise our rights and safety.

Lisa Femia

Age Verification Is Coming For the Internet. We Built You a Resource Hub to Fight Back.

1 week 2 days ago

Age verification laws are proliferating fast across the United States and around the world, creating a dangerous and confusing tangle of rules about what we’re all allowed to see and do online. Though these mandates claim to protect children, in practice they create harmful censorship and surveillance regimes that put everyone—adults and young people alike—at risk.

The term “age verification” is colloquially used to describe a wide range of age assurance technologies, from age verification systems that force you to upload government ID, to age estimation tools that scan your face, to systems that infer your age by making you share personal data. While different laws call for different methods, one thing remains constant: every method out there collects your sensitive, personal information and creates barriers to accessing the internet. We refer to all of these requirements as age verification, age assurance, or age-gating.

If you’re feeling overwhelmed by this onslaught of laws and the invasive technologies behind them, you’re not alone. It’s a lot. But understanding how these mandates work and who they harm is critical to keeping yourself and your loved ones safe online. Age verification is lurking around every corner these days, so we must fight back to protect the internet that we know and love. 

That’s why today, we’re launching EFF’s Age Verification Resource Hub (EFF.org/Age): a one-stop shop to understand what these laws actually do, what’s at stake, why EFF opposes all forms of age verification, how to protect yourself, and how to join the fight for a free, open, private, and yes—safe—internet. 

Why Age Verification Mandates Are a Problem

In the U.S., more than half of all states have now passed laws imposing age-verification requirements on online platforms. Congress is considering even more at the federal level, with a recent House hearing weighing nineteen distinct proposals relating to young people’s online safety—some sweeping, some contradictory, and each one more drastic and draconian than the last.

The rest of the world is moving in the same direction. The UK’s Online Safety Act went into effect this summer, Australia’s new law barring access to social media for anyone under 16 goes live today, and a slew of other countries are currently considering similar restrictions.

We all want young people to be safe online. However, age verification is not the silver bullet that lawmakers want you to think it is. In fact, age-gating mandates will do more harm than good, especially for the young people they claim to protect. They undermine the fundamental speech rights of adults and young people alike; create new barriers to accessing vibrant, lawful, even life-saving content; and needlessly jeopardize all internet users’ privacy, anonymity, and security.

If legislators want to meaningfully improve online safety, they should pass a strong, comprehensive federal privacy law instead of building new systems of surveillance, censorship, and exclusion.  

What’s Inside the Resource Hub

Our new hub is built to answer the questions we hear from users every day, such as:

  • How do age verification laws actually work?
  • What’s the difference between age verification, age estimation, age assurance, and all the other confusing technical terms I’m hearing?
  • What’s at stake for me, and who else is harmed by these systems?
  • How can I keep myself, my family, and my community safe as these laws continue to roll out?
  • What can I do to fight back?
  • And if not age verification, what else can we do to protect the online safety of our young people?

Head over to EFF.org/Age to explore our explainers, user-friendly guides, technical breakdowns, and advocacy tools—all indexed in the sidebar for easy browsing. And today is just the start, so keep checking back over the next several weeks as we continue to build out the site with new resources and answers to more of your questions on all things age verification.

Join Us: Reddit AMA & EFFecting Change Livestream Events

To celebrate the launch of EFF.org/Age, and to hear directly from you how we can be most helpful in this fight, we’re hosting two exciting events:

1. Reddit AMA on r/privacy

Next week, our team of EFF activists, technologists, and lawyers will be hanging out over on Reddit’s r/privacy subreddit to directly answer your questions on all things age verification. We’re looking forward to connecting with you and hearing how we can help you navigate these changing tides, so come on over to r/privacy on Monday (12/15), Tuesday (12/16), and Wednesday (12/17), and ask us anything!

2. EFFecting Change Livestream Panel: “The Human Cost of Online Age Verification”

Then, on January 15th at 12pm PT, we’re hosting a livestream panel featuring Cynthia Conti-Cook, Director of Research and Policy at the Collaborative Research Center for Resilience; Hana Memon, Software Developer at Gen Z for Change; EFF Director of Engineering Alexis Hancock; and EFF Associate Director of State Affairs Rindala Alajaji. We’ll break down how these laws work, who they exclude, and how these mandates threaten privacy and free expression for people of all ages. Join us by RSVPing at https://livestream.eff.org/.

A Resource to Empower Users

Age-verification mandates are reshaping the internet in ways that are invasive, dangerous, and deeply unnecessary. But users are not powerless! We can challenge these laws, protect our digital rights, and build a safer digital world for all internet users, no matter their ages. Our new resource hub is here to help—so explore, share, and join us in the fight for a better internet.

Molly Buckley

The Best Big Media Merger Is No Merger at All

1 week 2 days ago

The state of streaming is... bad. It’s very bad. The first step in wanting to watch anything is a web search: “Where can I stream X?” Then you have to scroll past an AI summary with no answers, and then scroll past the sponsored links. After that, you find out that the thing you want to watch was made by a studio that doesn’t exist anymore or doesn’t have a streaming service. So, even though you subscribe to more streaming services than you could actually name, you will have to buy a digital copy to watch. A copy that, despite your paying for it specifically, you do not actually own and that might vanish in a few years. 

Then, after you paid to see something multiple times in multiple ways (theater ticket, VHS tape, DVD, etc.), the mega-corporations behind this nightmare will try to get Congress to pass laws to ensure you keep paying them. In the end, this is easier than making a product that works. Or, as someone put it on social media, these companies have forgotten “that their entire existence relies on being slightly more convenient than piracy.” 

It’s important to recognize this as we see more and more media mergers. These mergers are not about quality, they’re about control. 

In the old days, studios made a TV show. If the show was a hit, they increased how much they charged companies to place ads during the show. And if the show was a hit for long enough, they sold syndication rights to another channel. Then people could discover the show again, and maybe come back to watch it air live. In that model, the goal was to spread access to a program as much as possible to increase viewership and the number of revenue streams.  

Now, in the digital age, studios have picked up a Silicon Valley trait: putting all their eggs into the basket of “increasing the number of users.” To do that, they have to create scarcity. There has to be only one destination for the thing you’re looking for, and it has to be their own. And you shouldn’t be able to control the experience at all. They should.  

They’ve also moved away from creating buzzy new exclusives to get you to pay them. That requires risk and also, you know, paying creative people to make them. Instead, they’re consolidating.  

Media companies keep announcing mergers and acquisitions. They’ve been doing it for a long time, but it’s really ramped up in the last few years. And these mergers are bad for all the obvious reasons. There are the speech and censorship reasons that came to a head in, of all places, late night television. There are the labor issues. There are the concentration of power issues. There is the obvious problem that the fewer studios exist, the fewer chances good art has to escape Hollywood and make it to our eyes and ears. But when it comes specifically to digital life, there are these: consumer experience and ownership.  

First, the more content that comes under a single corporation’s control, the more they expect you to come to them for it. And the more they want to charge. And because there is less competition, the less they need to work to make their streaming app usable. They then enforce their hegemony by using the draconian copyright restrictions they’ve lobbied for to cripple smaller competitors, critics, and fair use.  

When everything is either Disney or NBCUniversal or Warner Brothers-Discovery-Paramount-CBS and everything is totally siloed, what need will they have to spend money improving any part of their product? Making things is hard, stopping others from proving how bad you are is easy, thanks to how broken copyright law is.  

Furthermore, because every company is chasing increasing subscriber numbers instead of multiple revenue streams, they have an interest in preventing you from ever again “owning” a copy of a work. This was always sort of part of the business plan, but it was on a scale of a) once every couple of years, b) at least it came, in theory, with some new features or enhanced quality, and c) you actually owned the copy you paid for. Now they want you to pay them every month for access to the same copy. And, hey, the price is going to keep going up the fewer options you have. Or you will see more ads. Or start seeing ads where there weren’t any before.  

On the one hand, the increasing dependence on direct subscriber numbers does give users back some power. Jimmy Kimmel’s reinstatement by ABC was partly due to the fact that the company was about to announce a price hike for Disney+ and it couldn’t handle losing users due to the new price and due to popular outrage over Kimmel’s treatment.  

On the other hand, well, there's everything else. 

The latest kerfuffle is over the sale of Warner Brothers-Discovery, a company that was already the subject of a sale and merger resulting in the hyphen. Netflix was competing against another recently merged media megazord, Paramount Skydance.  

Warner Brothers-Discovery accepted a bid from Netflix, enraging Paramount Skydance, which has now launched a hostile takeover.  

Now the optimum outcome is for neither of these takeovers to happen. There are already too few players in Hollywood. It does nothing for the health of the industry to allow either merger. A functioning antitrust regime would stop both the sale and the hostile takeover attempt, full stop. But Hollywood and the federal government are frequent collaborators, and the feds have little incentive to stop Hollywood’s behemoths from growing even further, as long as they continue to play their role pushing a specific view of American culture.    

The promise of the digital era was in part convenience. You never again had to look at TV listings to find out when something would be airing. Virtually unlimited digital storage meant everything would be at your fingertips. But then the corporations went to work to make sure it never happened. And with each and every merger, that promise gets further and further away.  

Note 12/10/2025: One line in this blog has been modified a few hours post-publication. The substance remains the same. 

Katharine Trendacosta

EFF Launches Age Verification Hub as Resource Against Misguided Laws

1 week 2 days ago
EFF Also Will Host a Reddit AMA and a Livestreamed Panel Discussion

SAN FRANCISCO—With ill-advised and dangerous age verification laws proliferating across the United States and around the world, creating surveillance and censorship regimes that will be used to harm both youth and adults, the Electronic Frontier Foundation has launched a new resource hub that will sort through the mess and help people fight back. 

To mark the hub's launch, EFF will host a Reddit AMA (“Ask Me Anything”) next week and a free livestreamed panel discussion on January 15 highlighting the dangers of these misguided laws. 

“These restrictive mandates strike at the foundation of the free and open internet,” said EFF Activist Molly Buckley. “While they are wrapped in the legitimate concern about children's safety, they operate as tools of censorship, used to block people young and old from viewing or sharing information that the government deems ‘harmful’ or ‘offensive.’ They also create surveillance systems that critically undermine online privacy, and chill access to vital online communities and resources. Our new resource hub is a one-stop shop for information that people can use to fight back and redirect lawmakers to things that will actually help young people, like a comprehensive privacy law.” 

Half of U.S. states have enacted some sort of online age verification law. At the federal level, a House Energy and Commerce subcommittee last week held a hearing on “Legislative Solutions to Protect Children and Teens Online.” While many of the 19 bills on that hearing’s agenda involve age verification, none would truly protect children and teens. Instead, they threaten to make it harder to access content that can be crucial, even lifesaving, for some kids.

It’s not just in the U.S.  Effective this week, a new Australian law requires social media platforms to take reasonable steps to prevent Australians under the age of 16 from creating or keeping an account. 

We all want young people to be safe online. However, age verification is not the panacea that regulators and corporations claim it to be; in fact, it could undermine the safety of many. 

Age verification laws generally require online services to check, estimate, or verify all users’ ages—often through invasive tools like government ID checks, biometric scans, or other dubious “age estimation” methods—before granting them access to certain online content or services. These methods are often inaccurate and always privacy-invasive, demanding that users hand over sensitive and immutable personal information that links their offline identity to their online activity. Once that valuable data is collected, it can easily be leaked, hacked, or misused.  

To truly protect everyone online, including children, EFF advocates for a comprehensive data privacy law. 

EFF will host a Reddit AMA on r/privacy from Monday, Dec. 15 at 12 p.m. PT through Wednesday, Dec. 17 at 5 p.m. PT, with EFF attorneys, technologists, and activists answering questions about age verification on all three days. 

EFF will host a free livestream panel discussion about age verification at 12 p.m. PT on Thursday, Jan. 15. Panelists will include Cynthia Conti-Cook, Director of Research and Policy at the Collaborative Research Center for Resilience; a representative of Gen Z for Change; EFF Director of Engineering Alexis Hancock; and EFF Associate Director of State Affairs Rindala Alajaji. RSVP at https://www.eff.org/livestream-age

For the age verification resource hub: https://www.eff.org/age 

For the Reddit AMA: https://www.reddit.com/r/privacy/  

For the Jan. 15 livestream: https://www.eff.org/livestream-age  

 

Tags: age verification, age estimation, age gating

Contact: Molly Buckley, Activist, mollybuckley@eff.org
Josh Richman

Age Assurance Methods Explained

1 week 2 days ago

This blog also appears in our Age Verification Resource Hub: our one-stop shop for users seeking to understand what age-gating laws actually do, what’s at stake, how to protect yourself, and why EFF opposes all forms of age verification mandates. Head to EFF.org/Age to explore our resources and join us in the fight for a free, open, private, and yes—safe—internet.

EFF is against all mandatory age verification. Not only does it turn the internet into an age-gated cul-de-sac, but it also leaves behind many people who can’t get or don’t have proper and up-to-date documentation. While populations like undocumented immigrants and people experiencing homelessness are more obviously vulnerable groups, these restrictions also impact people with more mundane reasons for not having valid documentation on hand. Perhaps they’ve undergone life changes that impact their status or other information—such as a move, name change, or gender marker change—or perhaps they simply haven’t gotten around to updating their documents. Inconvenient events like these should not be a barrier to going online. People should also reserve the right to opt out of unreliable technology and shady practices that could endanger their personal information.

But age restriction mandates threaten all of that. Not only do age-gating laws block adults and youth alike from freely accessing services on the web, they also force users to trade their anonymity—a pillar of online expression—for a system in which they are bound to their real-life identities. And this surveillance regime stretches beyond just age restrictions on certain content; much of this infrastructure is also connected to government plans for creating a digital system of proof of identity.

So how does age gating actually work? The age and identity verification industry has devised countless different methods platforms can purchase to—in theory—figure out the ages and/or identities of their users.  But in practice, there is no technology available that is entirely privacy-protective, fully accurate, and that guarantees complete coverage of the population. Full stop.

Every system of age verification or age estimation demands that users hand over sensitive and oftentimes immutable personal information that links their offline identity to their online activity, risking their safety and security in the process.


With that said, as we see more of these laws roll out across the U.S. and the rest of the world, it’s important to understand the differences between these technologies so you can better identify the specific risks of each method, and make smart decisions about how you share your own data.

Age Assurance Methods

Many different technologies are being developed, attempted, and deployed to establish user age. In many cases, a single platform implements a mixture of methods. For example, a user may need to submit both a physical government ID and a face scan as part of a liveness check establishing that they are the person pictured on the ID. 

Age assurance methods generally fall into three categories:

  1. Age Attestation
  2. Age Estimation
  3. ID-bound Proof
Age Attestation

Self-attestation

Sometimes you’ll be asked to declare your age without any form of verification. One way this happens is through one-off self-attestation. This type of age attestation has been around for a while; you may have seen it when an alcohol website asks if you’re over 21, or when Steam asks you to input your age to view game content that may not be appropriate for all ages. It’s usually implemented as a pop-up on a website, which might ask for your age every time you enter or remember it between visits. This sort of attestation signals that the site may not be appropriate for all viewers, but gives users the autonomy and respect to make that decision for themselves.

An alternative proposed approach to declaring your own age, called device-bound age attestation, is to have you set your age on your operating system or on App Stores before you can make purchases or browse the web. This age or age range might then be shared with websites or apps. On an Apple device, that age can be modified after creation, as long as an adult age is chosen. It’s important to separate device-bound age attestation from methods that require age verification or estimation at the device or app store level (common to digital ID solutions and some proposed laws). It’s only attestation if you’re permitted to set your age to whatever you choose without needing to prove anything to your provider or another party—providing flexibility for age declaration outside of mandatory age verification.

Attestation through parental controls

The sort of parental controls found on Apple and Android devices, Windows computers, and video game consoles provide the most flexible way for parents to manage what content their minor children can access. These settings can be applied through the device operating system, third-party applications, or by establishing a child account. Decisions about what content a young person can access are made via consent-driven mechanisms. As the manager, the parent or guardian will see requests and activity from their child, depending on how strictly the settings are configured. This could include requests to install an app, make a purchase on an app store, communicate with a new contact, or browse a particular website. The parent or guardian can then choose whether or not to accept the request and allow the activity. 

One survey that collected answers from 1,000 parents found that parental controls are underutilized. Adoption of parental controls varied widely, from 51% on tablets to 35% on video game consoles. To help encourage more parents to make use of these settings, companies should continue to make them clearer and easier to use and manage. Parental controls are better suited to accommodating diverse cultural contexts and individual family concerns than a one-size-fits-all government mandate. It’s also safer to use native settings (those provided by the operating system itself) than to rely on third-party parental control applications, which have experienced data breaches and often effectively function as spyware.
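The consent-driven flow described above can be modeled as a simple request queue: the child’s activity generates a pending request, and the parent explicitly allows or denies it. A toy Python sketch, with invented class and field names; real OS-level controls are of course far more involved:

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class ChildRequest:
    kind: str      # e.g. "install_app", "purchase", "new_contact", "website"
    detail: str
    approved: Optional[bool] = None  # None = still awaiting a decision


@dataclass
class ParentalControls:
    """Toy model of a consent-driven parental-control flow: the child's
    activity creates requests, and the parent decides each one."""
    pending: list = field(default_factory=list)

    def request(self, kind: str, detail: str) -> ChildRequest:
        # Child-side: the attempted activity is paused and queued.
        req = ChildRequest(kind, detail)
        self.pending.append(req)
        return req

    def decide(self, req: ChildRequest, allow: bool) -> None:
        # Parent-side: record the decision and clear the queue entry.
        req.approved = allow
        self.pending.remove(req)
```

The point of the pattern is that the decision stays with the parent, per request, rather than being delegated to a platform-wide government mandate.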

Age Estimation

Instead of asking you directly, the system guesses your age based on data it collects about you.

Age estimation through photo and facial estimation

Age estimation by photo or live facial age analysis is when a system uses an image of a face to guess a person’s age.

A poorly designed system might improperly store these facial images or retain them for significant periods, creating a risk of data leakage. Our faces are unique, immutable, and constantly on display. In the hands of an adversary, and cross-referenced to other readily available information about us, this information can expose intimate details about us or lead to biometric tracking.

This technology has also proven fickle and often inaccurate, producing false negatives and positives, exacerbating racial biases, and relying on unprotected use of biometric data to complete the analysis. And because the analysis is usually conducted with AI models, there often isn’t a way for a user to challenge a decision directly without falling back on more intrusive methods like submitting a government ID. 

Age inference based on user data and third party services

Age inference systems estimate how old someone is from existing account information, or by querying other databases (for example, services where the account may already have completed age verification) and cross-referencing the results with the information they hold on that account.

To determine how old someone is from the account information associated with their email address, services often turn to data brokers. This incentivizes even more collection of our data for the sake of age estimation and rewards data brokers for amassing information about people. Regulation of these age inference services also varies with each country’s privacy laws.

ID-bound Proof

ID-bound proofs, methods that use your government-issued ID, are often used as a fallback for failed age estimation. Consequently, any government-ID-backed verification disproportionately excludes certain demographics from accessing online services. A significant portion of the U.S. population does not have access to government-issued IDs, with millions of adults lacking a valid driver’s license or state-issued ID. This disproportionately affects Black Americans, Hispanic Americans, immigrants, and individuals with disabilities, who are less likely to possess the necessary identification. In addition, non-U.S. citizens, including undocumented immigrants, face barriers to acquiring government-issued IDs. The exclusionary nature of document-based verification systems is a major concern, as it could prevent entire communities from accessing essential services or engaging in online spaces.

Physical ID uploaded and stored as an image 

When an image of a physical ID is required, users are forced to upload—not just momentarily display—sensitive personal information, such as government-issued ID or biometric identifiers, to third-party services in order to gain access to age-restricted content. This creates significant privacy and security concerns, as users have no direct control over who receives and stores their personal data, where it is sent, and how it may be accessed, used, or leaked outside the immediate verification process.

Requiring users to digitally hand over government-issued identification to verify their age introduces substantial privacy risks. Once sensitive information like a government-issued ID is uploaded to a website or third-party service, there is no guarantee that it will be handled securely. The verification process typically involves transmitting this data across multiple intermediaries, which means the risk of a data breach is heightened. The misuse of sensitive personal data, such as government IDs, has been demonstrated in numerous high-profile cases, including the breach of the age verification company AU10TIX, which exposed login credentials for over a year, and the hack of the messaging application Discord. Justifiable privacy and security concerns may chill users from accessing platforms they are lawfully entitled to access.

Device-bound digital ID

Device-bound digital ID is a credential stored locally on your device, in the form of government-run or privately run wallet applications like those offered by Apple and Google. Digital IDs are subject to a higher level of security within the Google and Apple wallets (as they should be), which means they are not synced to your account or across services. If you lose the device, a new credential will need to be issued to its replacement. Websites and services can query your digital ID to reveal only certain information from your ID, like an age range, instead of sharing everything. This is called “selective disclosure.”
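Selective disclosure can be illustrated with a toy model: the wallet holds the full credential, but a verifier’s query returns only the predicate it asked for (say, “over 18”), never the underlying birth date. A Python sketch with invented names, and without the cryptographic signing a real wallet would add:

```python
from datetime import date


class Wallet:
    """Toy model of selective disclosure: the wallet stores the full
    credential, but answers a verifier's query with only the claim asked
    for, never the raw attributes. Real digital-ID wallets back this with
    cryptographic signatures; this models only the data minimization."""

    def __init__(self, name: str, birth_date: date):
        self._credential = {"name": name, "birth_date": birth_date}

    def disclose_age_over(self, threshold: int, today: date) -> dict:
        b = self._credential["birth_date"]
        age = today.year - b.year - (
            (today.month, today.day) < (b.month, b.day))
        # The response contains only the yes/no predicate -- no name,
        # no birth date, no exact age.
        return {f"age_over_{threshold}": age >= threshold}
```

Here a call like `wallet.disclose_age_over(18, date.today())` hands the website a single boolean claim; the name and birth date never leave the wallet.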

There are many reasons someone may not be able to acquire a digital ID, preventing them from relying on this option. This includes lack of access to a smartphone, sharing devices with another person, or inability to get a physical ID. No universal standards exist governing how ID expiration, name changes, or address updates affect the validity of digital identity credentials. How to handle status changes is left up to the credential issuer.

Asynchronous and Offline Tokens

This is an issued token of some kind that doesn’t necessarily require network access to an external party or service every time you use it to establish your age with a verifier. A common danger in age verification services is the proliferation of multiple third parties and custom solutions, which vary widely in implementation and security. One proposal to avoid this is to centralize age checks with a trusted service that issues tokens that can be used to pass age checks elsewhere. Although this method still requires the user to submit to age verification or estimation once, after passing the initial facial age estimation or ID check, the user is issued a digital token they can present later to show that they’ve previously passed an age check. The most popular proposal, AgeKeys, is similar to passkeys in that the tokens are saved to a device or third-party password store and can then be easily accessed after unlocking with your preferred on-device biometric verification or PIN code.
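The token approach can be sketched as a signed, self-contained credential that a verifier checks locally, without calling back to the issuer. The sketch below uses an HMAC shared secret for brevity; a real deployment would use public-key signatures so verifiers never hold the issuer’s key. All names here are illustrative, not the actual AgeKeys design:

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical issuer signing key. With HMAC the verifier must share it;
# public-key signatures avoid that and are what real designs would use.
SECRET = b"issuer-signing-key"


def issue_token(age_over: int, valid_seconds: int = 86400) -> str:
    """Issued once, after the user passes a single age check."""
    claims = {"age_over": age_over, "exp": int(time.time()) + valid_seconds}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{sig}"


def verify_token(token: str) -> bool:
    """Checked locally by a verifier; no round trip to the issuer."""
    try:
        payload, sig = token.rsplit(".", 1)
    except ValueError:
        return False
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # signature mismatch: token was tampered with
    claims = json.loads(base64.urlsafe_b64decode(payload))
    return int(time.time()) < claims["exp"]  # reject expired tokens
```

The user undergoes verification once, receives the token, and can later present it to any verifier that trusts the issuer; the website never sees the ID or face scan itself.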

Lessons Learned

Lessons from the troubled age verification rollouts in the UK and various U.S. states show that age verification widens risk for everyone, inviting scope creep and blocking access to information on the web. Privacy-preserving methods of determining age exist, such as presenting an age threshold instead of your exact birth date, but they have not yet been mass deployed or stress tested. That is why policy safeguards around the deployed technology matter just as much, if not more. 

Much of the infrastructure around age verification is entangled with other mandates, like the deployment of digital ID, which is why so many digital ID offerings get coupled with age verification as a “benefit” to the holder. In reality, it is more of a plus for the governments that want to deploy mandatory age verification and for the vendors pitching implementations that often bundle multiple methods. Instead of working toward a single path to age-gate the entire web, there should be a diversity of privacy-preserving ways to attest age without locking everyone into one platform or method. Offering multiple options avoids further restricting those who cannot use any particular path.

Alexis Hancock

EFF Benefit Poker Tournament at DEF CON 33

1 week 3 days ago

In the brand new Planet Hollywood Poker Room, 48 digital rights supporters played No-Limit Texas Hold’Em in the 4th Annual EFF Benefit Poker Tournament at DEF CON, raising $18,395 for EFF.


The tournament was hosted by EFF board member Tarah Wheeler and emceed by lintile, lending his Hacker Jeopardy hosting skills to help EFF for the day.


Every table had two celebrity players with special bounties for the player that knocked them out of the tournament. This year featured Wendy Nather, Chris “WeldPond” Wysopal, Jake “MalwareJake” Williams, Bryson Bort, Kym “KymPossible” Price, Adam Shostack, and Dr. Allan Friedman.


Excellent poker player and teacher Jason Healey, Professor of International Affairs at Columbia University’s School of International and Public Affairs, noted that “the EFF poker tournament is where you find all the hacker royalty in one room.”


The day started with a poker clinic run by Tarah’s father, professional poker player Mike Wheeler. The hour-long clinic helped folks get brushed up on their casino literacy before playing the big game.

Mike told the story of first teaching Tarah to play poker with jellybeans when she was only four. He then taught the poker noobs in the room how to play: when to check, when to fold, and when to go all-in.


After the clinic, lintile roused the crowd to play for real, starting the tournament off by announcing “Shuffle up and deal!”

The first hour saw few players get knocked out, but after the blinds began to rise, the field began to thin, with a number of celebrity knockouts.

At every knockout, lintile took to the mic to encourage the player to donate to EFF, which allowed them to buy back into the tournament and try their luck another round.


Jay Salzberg knocked out Kym Price to win a l33t crate.


Kim Holt knocked out Mike Wheeler, collecting the bounty on his head posted by Tarah, and winning a $250 donation to EFF in his name. This is the second time Holt has sent Mike home.


Tarah knocked out Adam Shostack, winning a number of fun prizes, including a signed copy of his latest book, Threats: What Every Engineer Should Learn From Star Wars.


Bryson Bort was knocked out by privacy attorney Marcia Hofmann.


Play continued for three hours until only the final table of players remained: Allan Friedman, Luke Hanley, Jason Healey, Kim Holt, Igor Ignatov, Sid, Puneet Thapliyal, Charles Thomas and Tarah Wheeler herself.

As the blinds continued to rise, players went all-in more and more. The most exciting moment came when Sid tripled up with TT over QT and A8s, then knocked out Tarah only a few hands later; she finished 8th.

For the first time, the Jellybean Trophy sat on the final table awaiting the winner. This year, it was a Seattle Space Needle filled with green and blue jellybeans celebrating the lovely Pacific Northwest where Tarah and Mike are from.

The final three players were Allan Friedman, Kim Holt, and Sid. Sid doubled up with KJ over Holt’s A6, and then knocked Holt out with his Q4 beating Holt’s 22.

Friedman and Sid traded blinds until Allan went all in with A6 and Sid called with JT. A jack landed on the flop and Sid won the day!


Sid became the first player to win the tournament more than once, taking home the jellybean trophy two years in a row.


It was an exciting afternoon of competition raising over $18,000 to support civil liberties and human rights online. We hope you join us next year as we continue to grow the tournament. Follow Tarah and EFF to make sure we have chips and a chair for you at DEF CON 34.

Be ready for next year’s special benefit poker event: The Digital Rights Attack Lawyers Edition! Our special celebrity guests will all be our favorite digital rights attorneys, including Cindy Cohn, Marcia Hofmann, Kurt Opsahl, and more!


Daniel de Zeeuw

10 (Not So) Hidden Dangers of Age Verification

1 week 4 days ago

It’s nearly the end of 2025, and half of the U.S. states and the UK now require you to upload your ID or scan your face to watch “sexual content.” A handful of states and Australia now have various requirements to verify your age before you can create a social media account.

Age-verification laws may sound straightforward to some: protect young people online by making everyone prove their age. But in reality, these mandates force users into one of two flawed systems—mandatory ID checks or biometric scans—and both are deeply discriminatory. These proposals burden everyone’s right to speak and access information online, and structurally exclude the very people who rely on the internet most. In short, although these laws are often passed with the intention of protecting children from harm, the reality is that they harm both adults and children. 

Here’s who gets hurt, and how: 

   1.  Adults Without IDs Get Locked Out

Document-based verification assumes everyone has the right ID, in the right name, at the right address. About 15 million adult U.S. citizens don’t have a driver’s license, and 2.6 million lack any government-issued photo ID at all. Another 34.5 million adults don't have a driver's license or state ID with their current name and address.

Specifically:

  • 18% of Black adults don't have a driver's license at all.
  • Black and Hispanic Americans are disproportionately less likely to have current licenses.
  • Undocumented immigrants often cannot obtain state IDs or driver's licenses.
  • People with disabilities are less likely to have current identification.
  • Lower-income Americans face greater barriers to maintaining valid IDs.

Some laws allow platforms to ask for financial documents like credit cards or mortgage records instead. But they still overlook the fact that nearly 35% of U.S. adults also don't own homes, and close to 20% of households don't have credit cards. Immigrants, regardless of legal status, may also be unable to obtain credit cards or other financial documentation.

   2.  Communities of Color Face Higher Error Rates

Platforms that rely on AI-based age-estimation systems often use a webcam selfie to guess users’ ages. But these algorithms don’t work equally well for everyone. Research has consistently shown that they are less accurate for people with Black, Asian, Indigenous, and Southeast Asian backgrounds; that they often misclassify those adults as being under 18; and sometimes take longer to process, creating unequal access to online spaces. This mirrors the well-documented racial bias in facial recognition technologies. The result is that technology’s inherent biases can block people from speaking online or accessing others’ speech.

   3.  People with Disabilities Face More Barriers

Age-verification mandates most harshly affect people with disabilities. Facial recognition systems routinely fail to recognize faces with physical differences, affecting an estimated 100 million people worldwide who live with facial differences, and “liveness detection” can exclude folks with limited mobility. As these technologies become gatekeepers to online spaces, people with disabilities find themselves increasingly blocked from essential services and platforms with no specified appeals processes that account for disability.

Document-based systems also don't solve this problem—as mentioned earlier, people with disabilities are also less likely to possess current driver's licenses, so document-based age-gating technologies are equally exclusionary.

   4.  Transgender and Non-Binary People Are Put At Risk

Age-estimation technologies perform worse on transgender individuals and cannot classify non-binary genders at all. For the 43% of transgender Americans who lack identity documents that correctly reflect their name or gender, age verification creates an impossible choice: provide documents with dead names and incorrect gender markers, potentially outing themselves in the process, or lose access to online platforms entirely—a risk that no one should be forced to take just to use social media or access legal content.

   5.  Anonymity Becomes a Casualty

Age-verification systems are, at their core, surveillance systems. By requiring identity verification to access basic online services, we risk creating an internet where anonymity is a thing of the past. For people who rely on anonymity for safety, this is a serious issue. Domestic abuse survivors need to stay anonymous to hide from abusers who could track them through their online activities. Journalists, activists, and whistleblowers regularly use anonymity to protect sources and organize without facing retaliation or government surveillance. And in countries under authoritarian rule, anonymity is often the only way to access banned resources or share information without being silenced. Age-verification systems that demand government IDs or biometric data would strip away these protections, leaving the most vulnerable exposed.

   6.  Young People Lose Access to Essential Information 

Because state-imposed age-verification rules either block young people from social media or require them to get parental permission before logging on, they can deprive minors of access to important information about their health, sexuality, and gender. Many U.S. states mandate “abstinence only” sexual health education, making the internet a key resource for education and self-discovery. But age-verification laws can end up blocking young people from accessing that critical information. And this isn't just about porn, it’s about sex education, mental health resources, and even important literature. Some states and countries may start going after content they deem “harmful to minors,” which could include anything from books on sexual health to art, history, and even award-winning novels. And let’s be clear: these laws often get used to target anything that challenges certain political or cultural narratives, from diverse educational materials to media that simply includes themes of sexuality or gender diversity. What begins as a “protection” for kids could easily turn into a full-on censorship movement, blocking content that’s actually vital for minors’ development, education, and well-being. 

This is also especially harmful to homeschoolers, who rely on the internet for research, online courses, and exams. For many, the internet is central to their education and social lives. The internet is also crucial for homeschoolers' mental health, as many already struggle with isolation. Age-verification laws would restrict access to resources that are essential for their education and well-being.

   7.  LGBTQ+ Youth Are Denied Vital Lifelines

For many LGBTQ+ young people, especially those with unsupportive or abusive families, the internet can be a lifeline. For young people facing family rejection or violence due to their sexuality or gender identity, social media platforms often provide crucial access to support networks, mental health resources, and communities that affirm their identities. Age verification systems that require parental consent threaten to cut them from these crucial supports. 

When parents must consent to or monitor their children's social media accounts, LGBTQ+ youth who lack family support lose these vital connections. LGBTQ+ youth are also disproportionately likely to be unhoused and lack access to identification or parental consent, further marginalizing them. 

   8.  Youth in Foster Care Systems Are Completely Left Out

Age verification bills that require parental consent fail to account for young people in foster care, particularly those in group homes without legal guardians who can provide consent, or with temporary foster parents who cannot prove guardianship. These systems effectively exclude some of the most vulnerable young people from accessing online platforms and resources they may desperately need.

   9.  All of Our Personal Data is Put at Risk

An age-verification system also creates acute privacy and security risks for adults and young people alike, because it requires users to upload sensitive personal information (like government-issued IDs or biometric data) to verify their age. Under these laws, users would not just momentarily display their ID, as one might at a liquor store. Instead, they’d submit their ID to third-party companies, raising major concerns over who receives, stores, and controls that data. Once uploaded, this personal information could be exposed, mishandled, or even breached, as we've seen with past data hacks. Age-verification systems are no strangers to being compromised—companies like AU10TIX and platforms like Discord have faced high-profile data breaches, exposing users’ most sensitive information for months or even years. 

The more places personal data passes through, the higher the chances of it being misused or stolen. Users are left with little control over their own privacy once they hand over these immutable details, making this approach to age verification a serious risk for identity theft, blackmail, and other privacy violations. Children are already a major target for identity theft, and these mandates perversely increase the risk that they will be harmed.

   10.  All of Our Free Speech Rights Are Trampled

The internet is today’s public square—the main place where people come together to share ideas, organize, learn, and build community. Even the Supreme Court has recognized that social media platforms are among the most powerful tools ordinary people have to be heard.

Age-verification systems inevitably block some adults from accessing lawful speech while allowing some users under 18 to slip through anyway. Because the systems are both over-inclusive (blocking adults) and under-inclusive (failing to block people under 18), they restrict lawful speech in ways that violate the First Amendment. 

The Bottom Line

Age-verification mandates create barriers along lines of race, disability, gender identity, sexual orientation, immigration status, and socioeconomic class. While these requirements threaten everyone’s privacy and free-speech rights, they fall heaviest on communities already facing systemic obstacles.

The internet is essential to how people speak, learn, and participate in public life. When access depends on flawed technology or hard-to-obtain documents, we don’t just inconvenience users, we deepen existing inequalities and silence the people who most need these platforms. As outlined, every available method—facial age estimation, document checks, financial records, or parental consent—systematically excludes or harms marginalized people. The real question isn’t whether these systems discriminate, but how extensively.

Rindala Alajaji

EU's New Digital Package Proposal Promises Red Tape Cuts but Guts GDPR Privacy Rights

2 weeks 1 day ago

The European Commission (EC) is considering a “Digital Omnibus” package that would substantially rewrite EU privacy law, particularly the landmark General Data Protection Regulation (GDPR). It’s not a done deal, and it shouldn’t be.

The GDPR is the most comprehensive model for privacy legislation around the world. While it is far from perfect and suffers from uneven enforcement, complexities and certain administrative burdens, the omnibus package is full of bad and confusing ideas that, on balance, will significantly weaken privacy protections for users in the name of cutting red tape.

It contains at least one good idea: improving consent rules so users can automatically set consent preferences that will apply across all sites. But much as we love limiting cookie fatigue, it’s not worth the price users will pay if the rest of the proposal is adopted. The EC needs to go back to the drawing board if it wants to achieve the goal of simplifying EU regulations without gutting user privacy.

Let’s break it down. 

 Changing What Constitutes Personal Data 

 The digital package is part of a larger Simplification Agenda to reduce compliance costs and administrative burdens for businesses, echoing the Draghi Report’s call to boost productivity and support innovation. Businesses have been complaining about GDPR red tape since its inception, and new rules are supposed to make compliance easier and turbocharge the development of AI in the EU. Simplification is framed as a precondition for firms to scale up in the EU, ironically targeting laws that were also argued to promote innovation in Europe. It might also stave off tariffs the U.S. has threatened to levy, thanks in part to heavy lobbying from Meta and tech lobbying groups.  

 The most striking proposal seeks to narrow the definition of personal data, the very basis of the GDPR. Today, information counts as personal data if someone can reasonably identify a person from it, whether directly or by combining it with other information.  

 The proposal jettisons this relatively simple test in favor of a variable one: whether data is “personal” depends on what a specific entity says it can reasonably do or is likely to do with it. This selectively restates part of a recent ruling by the EU Court of Justice but ignores the multiple other cases that have considered the issue. 

This structural move toward entity-specific standards will create massive legal and practical confusion, as the same data could be treated as personal for some actors but not for others. It also creates a path for companies to avoid established GDPR obligations via operational restructuring to separate identifiers from other information—a change in paperwork rather than in actual identifiability. What’s more, it will be up to the Commission, a political executive body, to define what counts as unidentifiable pseudonymized data for certain entities.

Privileging AI 

In the name of facilitating AI innovation, which often relies on large datasets in which sensitive data may residually appear, the digital package treats AI development as a “legitimate interest,” which gives AI companies a broad legal basis to process personal data, unless individuals actively object. The proposals gesture towards organisational and technical safeguards but leave companies broad discretion.  

Another amendment would create a new exemption that allows even sensitive personal data to be used for AI systems under some circumstances. This is not a blanket permission: “organisational and technical measures” must be taken to avoid collecting or processing such data, and proportionate efforts must be taken to remove it from AI models or training sets where it appears. However, it is unclear what will count as an appropriate or proportionate measure.

Taken together with the new personal data test, these AI privileges mean that core data protection rights, which are meant to apply uniformly, are likely to vary in practice depending on a company’s technological and commercial goals.  

And it means that AI systems may be allowed to process sensitive data even though non-AI systems that could pose equal or lower risks are not allowed to handle it. 

A Broad Reform Beyond the GDPR

There are additional adjustments, many of them troubling, such as changes to rules on automated decision-making (making it easier for companies to claim it’s needed for a service or contract), reduced transparency requirements (less explanation about how users’ data are used), and revised data access rights (supposed to tackle abusive requests). An extensive analysis by NGO noyb can be found here.

Moreover, the digital package reaches well beyond the GDPR, aiming to streamline Europe’s digital regulatory rulebook, including the e-Privacy Directive, cybersecurity rules, the AI Act and the Data Act. The Commission also launched “reality checks” of other core legislation, which suggests it is eyeing other mandates.

Browser Signals and Cookie Fatigue

There is one proposal in the Digital Omnibus that actually could simplify something important to users: requiring online interfaces to respect automated consent signals, allowing users to automatically reject consent across all websites instead of clicking through cookie popups on each. Cookie popups are often designed with “dark patterns” that make rejecting data sharing harder than accepting it. Automated signals can address cookie banner fatigue and make it easier for people to exercise their privacy rights. 
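One existing, standardized version of such an automated signal is Global Privacy Control (GPC), which participating browsers send as the HTTP header `Sec-GPC: 1` with every request. As a rough illustration of how a site could honor a signal of this kind, here is a minimal sketch; the function names are hypothetical, and the EU proposal’s eventual signal format may differ from GPC:

```python
def has_opt_out_signal(headers: dict) -> bool:
    """Return True if the request carries an automated opt-out signal.

    Global Privacy Control (GPC) is one standardized example: a browser
    with GPC enabled sends the header `Sec-GPC: 1` on every request.
    """
    return headers.get("Sec-GPC", "").strip() == "1"


def should_show_consent_banner(headers: dict) -> bool:
    # If the browser has already communicated a refusal, a compliant
    # site can skip the cookie popup entirely and simply refrain from
    # setting non-essential cookies or sharing data.
    return not has_opt_out_signal(headers)
```

The point of the design is that the user states their preference once, in the browser, and every site reads it automatically, rather than each site re-asking through its own banner.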

While this proposal is a step forward, the devil is in the details: First, the exact format of the automated consent signal will be determined by technical standards organizations where Big Tech companies have historically lobbied for standards that work in their favor. The amendments should therefore define minimum protections that cannot be weakened later. 

Second, the provision takes the important step of requiring web browsers to make it easy for users to send this automated consent signal, so they can opt out without installing a browser add-on. 

However, mobile operating systems are excluded from this latter requirement, which is a significant oversight. People deserve the same privacy rights on websites and mobile apps. 

Finally, exempting media service providers altogether creates a loophole that lets them keep using tedious or deceptive banners to get consent for data sharing. A media service’s harvesting of user information on its website to track its customers is distinct from news gathering, which should be protected. 

A Muddled Legal Landscape

The Commission’s use of the "Omnibus" process is meant to streamline lawmaking by bundling multiple changes. An earlier proposal kept the GDPR intact, focusing on easing the record-keeping obligation for smaller businesses—a far less contentious measure. The new digital package instead moves forward with thinner evidence than a substantive structural reform would require, violating basic Better Regulation principles, such as coherence and proportionality.

The result is the opposite of “simple.” The proposed delay of the high-risk requirements under the AI Act to late 2027—part of the omnibus package—illustrates this: Businesses will face a muddled legal landscape as they must comply with rules that may soon be paused and later revived. This sounds like “complification” rather than simplification.

The Digital Package Is Not a Done Deal

Evaluating existing legislation is part of a sensible legislative cycle, and clarifying and simplifying complex processes and practices is not a bad idea. Unfortunately, the digital package misses the mark by making processes even more complex, at the expense of personal data protection. 

Simplification doesn't require tossing out digital rights. The EC should keep that in mind as it launches its reality check of core legislation such as the Digital Services Act and Digital Markets Act, where tidying up can too easily drift into a Verschlimmbesserung, the kind of well-meant fix that ends up resembling the infamous Ecce Homo restoration.

Christoph Schmon

Axon Tests Face Recognition on Body-Worn Cameras

2 weeks 2 days ago

Axon Enterprise Inc. is working with a Canadian police department to test the addition of face recognition technology (FRT) to its body-worn cameras (BWCs). This is an alarming development in government surveillance that should put communities everywhere on alert. 

As many as 50 officers from the Edmonton Police Department (EPD) will begin using these FRT-enabled BWCs today as part of a proof-of-concept experiment. EPD is the first police department in the world to use these Axon devices, according to a report from the Edmonton Journal. 

This kind of technology could give officers instant identification of any person who crosses their path. During the current trial period, the Edmonton officers will not be notified in the field of an individual’s identity but will review identifications generated by the BWCs later on. 

“This Proof of Concept will test the technology’s ability to work with our database to make officers aware of individuals with safety flags and cautions from previous interactions,” as well as “individuals who have outstanding warrants for serious crime,” Edmonton Police said in a press release, suggesting that individuals will be placed on a watchlist of sorts.

FRT brings a rash of problems. It relies on extensive surveillance and the collection of images of individuals, law-abiding or otherwise. Misidentifications can have horrendous consequences for individuals, including prolonged and difficult fights to prove their innocence and unjust incarceration for crimes never committed. In a world where police are using real-time face recognition, law-abiding individuals or those participating in legal, protected activity that police may find objectionable — like protest — could be quickly identified. 

With the increasing connections being made between disparate data sources about nearly every person, BWCs enabled with FRT can easily connect a person minding their own business, who happens to come within view of a police officer, with a whole slew of other personal information. 

Axon had previously claimed it would pause the addition of face recognition to its tools due to concerns raised in 2019 by the company’s AI and Policing Technology Ethics Board. However, since then, the company has continued to research and consider the addition of FRT to its products. 

This BWC-FRT integration signals possible other FRT integrations in the future. Axon is building an entire arsenal of cameras and surveillance devices for law enforcement, and the company grows the reach of its police surveillance apparatus, in part, by leveraging relationships with its thousands of customers, including those using its flagship product, the Taser. This so-called “ecosystem” of surveillance technology includes the Fusus system, a platform for connecting surveillance cameras to facilitate real-time viewing of video footage. It also involves expanding the use of surveillance tools like BWCs and the flying cameras of “drone as first responder” (DFR) programs.

Face recognition undermines individual privacy, and it is too dangerous when deployed by police. Communities everywhere must move to protect themselves and safeguard their civil liberties, insisting on transparency, clear policies, public accountability, and audit mechanisms. Ideally, communities should ban police use of the technology altogether. At a minimum, police must not add FRT to BWCs.

Beryl Lipton

After Years of Controversy, the EU’s Chat Control Nears Its Final Hurdle: What to Know

2 weeks 2 days ago

After a years-long battle over the European Commission’s “Chat Control” plan, which would mandate mass scanning and other encryption-breaking measures, the Council of the EU, representing EU member states, has at last agreed on a position. The good news is that the most controversial part, the forced requirement to scan encrypted messages, is out. The bad news is there’s more to it than that.

Chat Control has gone through several iterations since it was first introduced, with the EU Parliament backing a position that protects fundamental rights, while the Council of the EU spent many months pursuing an intrusive law-enforcement-focused approach. Many proposals earlier this year required the scanning and detection of illicit content on all services, including private messaging apps such as WhatsApp and Signal. This requirement would fundamentally break end-to-end encryption.

Thanks to the tireless efforts of digital rights groups, including European Digital Rights (EDRi), we won a significant improvement: the Council agreed on its position, which removed the requirement that forces providers to scan messages on their services. It also comes with strong language to protect encryption, which is good news for users.

But here comes the rub: first, the Council’s position allows for “voluntary” detection, where tech platforms can scan personal messages that aren’t end-to-end encrypted. Unlike in the U.S., where there is no comprehensive federal privacy law, voluntary scanning is not technically legal in the EU, though it’s been possible through a derogation set to expire in 2026. It is unclear how this will play out over time, though we are concerned that this approach to voluntary scanning will lead to private mass-scanning of non-encrypted services and might limit the sorts of secure communication and storage services big providers offer. With limited transparency and oversight, it will be difficult to know how services approach this sort of detection. 

With mandatory detection orders off the table, the Council has embraced another worrying system to protect children online: risk mitigation. Providers will have to take “all reasonable mitigation measures” to reduce risks on their services. This includes age verification and age assessment measures. We have written about the perils of age verification schemes and recent developments in the EU, where regulators are increasingly focusing on AV to reduce online harms.

If secure messaging platforms like Signal or WhatsApp are required to implement age verification methods, it would fundamentally reshape what it means to use these services privately. Encrypted communication tools should be available to everyone, everywhere, of all ages, freely and without the requirement to prove their identity. As age verification has started to creep in as a mandatory risk mitigation measure under the EU’s Digital Services Act in certain situations, it could become a de facto requirement under the Chat Control proposal if the wording is left broad enough for regulators to treat it as a baseline. 

Likewise, the Council’s position lists “voluntary activities” as a potential risk mitigation measure. Pull the thread on this and you’re left with a contradictory stance, because an activity is no longer voluntary if it forms part of a formal risk management obligation. While courts might interpret its mention in a risk assessment as an optional measure available to providers that do not use encrypted communication channels, this reading is far from certain, and the current language will, at a minimum, nudge non-encrypted services to perform voluntary scanning if they don’t want to invest in alternative risk mitigation options. It’s largely up to the provider to choose how to mitigate risks, but it’s up to enforcers to decide what is effective. Again, we're concerned about how this will play out in practice.

For the same reason, clear and unambiguous language is needed to prevent authorities from taking a hostile view of what is meant by “allowing encryption” if that means then expecting service providers to implement client-side scanning. We welcome the clear assurance in the text that encryption cannot be weakened or bypassed, including through any requirement to grant access to protected data, but even greater clarity would come from an explicit statement that client-side scanning cannot coexist with encryption.

As we approach the final “trilogue” negotiations of this regulation, we urge EU lawmakers to work on a final text that fully protects users’ right to private communication and avoids intrusive age-verification mandates and risk benchmark systems that lead to surveillance in practice.

Christoph Schmon
EFF's Deeplinks Blog: Noteworthy news from around the internet