UK Online Safety Bill Will Mandate Dangerous Age Verification for Much of the Web

This blog post was co-written by Dr. Monica Horten, and is also available on the Open Rights Group website.

Under new age verification rules in the UK’s massive Online Safety Bill, all internet platforms with UK users will have to stop minors from accessing ‘harmful’ content, as defined by the UK Parliament. This will affect adult websites, but also user-to-user services – basically any site, platform, or app that allows user-generated content that could be accessed by young people. To prevent minors from accessing ‘harmful’ content, sites will have to verify the age of visitors, either by asking for government-issued documents or using biometric data, such as face scans, to estimate their age.

This will result in an enormous shift in the availability of information online, and pose a serious threat to the privacy of UK internet users. It will make it much more difficult for all users to access content privately and anonymously, and it will make many of the most popular websites and platforms liable if they do not block, or heavily filter, content for anyone who does not verify their age. This is in addition to the dangers the Bill poses to encryption.

The details of the law’s implementation have been left to the UK’s communications regulator, the Office of Communications (Ofcom), and the Bill itself offers little guidance. Social media and other sites, where users regularly engage with each other’s content, will have to determine the risk of minors using their site, and block their access to any content that the government has described as ‘harmful’. Platforms like Facebook and TikTok, and even community-based sites like Wikipedia, will have to choose between conducting age checks on all users – a potentially expensive and privacy-invasive process – or sanitising their entire sites. That’s why Wikimedia has come out strongly against the Bill, writing that in its “attempt to weed out the worst parts of the internet, the Online Safety Bill actually jeopardises the best parts of the internet”.

Providers of pornographic or ‘adult only’ services will, of course, have no choice: they must impose age verification to identify the exact age of each user and bar under-age users from their site entirely.

The government’s list of content that is harmful for children includes violent content and content relating to eating disorders, suicide and even animals fighting. This list will be enshrined in law, but contains no further definition, leaving it open to misinterpretation. It is impossible for a large platform to make case-by-case decisions about which content is harmful. For example, a post describing a person overcoming an eating disorder, a post providing necessary health information and advice about the topic, and a post explaining how much weight a person lost as a result of an eating disorder could all be described as eating disorder-related content that is ‘harmful’. As a result, services will be forced to over-censor to ensure young people – and possibly all users, if they aren’t sure which users are minors – don’t encounter any content on these topics at all. Site operators will undoubtedly be liable for errors, and many sites will resort to over-zealous moderation to ensure they are complying, resulting in lawful and harmless content being censored.

This leaves only a few options for platforms, services, and apps with UK users, and all of them lead to a less open, less functional, and less free Internet. Platforms will face criminal penalties for failing to comply and may choose to block young people – including those as old as seventeen – entirely. They may filter and moderate enormous amounts of content to allow young people on the site without age verification. They may filter and moderate enormous amounts of content for young people only, while allowing age-verified users access to all content. Or, they could exclude UK users entirely, rather than risk liability and the cost of expensive and untried age estimation systems and content moderation.

Whilst the policy aim is well-intentioned, the result will be dangerous. The requirement to age-gate will trump the balancing of rights. It risks a disproportionate interference with children’s and adults’ right to access information, and with their freedom of expression.

Which Sites Will Be Affected? 

The Bill primarily covers two types of sites: web services that solely exist to publish and sell access to pornographic content, and user-to-user services which allow users to post their own content. These platforms may carry limited amounts of pornographic or ‘harmful’ content – because user-generated content is impossible to moderate at scale – but clearly that is not their primary purpose.

Pornographic websites will have to prevent under-18s from obtaining any access to the site at all. Social media platforms and other sites that contain user-generated content, on the other hand, will have to assess the risk of children using their service, and the risk of content defined as harmful to children being on their site. They will have to block children from being able to access content defined as harmful. This includes pornography, but the full list encompasses a much wider range of content (see below).

Adult and Pornographic Websites 

Pornography websites that have UK users, or target UK users, will be required to use age verification to ensure that children are not able to encounter their content. Age verification is, essentially, identity verification, which makes it effectively impossible to browse pornographic sites anonymously, and creates the risk of data breaches and the potential for data to be collected and then shared or sold. Data protection laws apply, although little guidance exists in the Bill about compliance. Ofcom is responsible for determining the measures and policies sites should implement, and the principles that will be applied to determine compliance [S.83]. The Bill does explain that sites should “have regard for the importance of protecting UK users from a breach of any statutory provision or rule of law concerning privacy that is relevant to the use or operation” of the service. Privacy should be paramount in a bill like this, but for now, how exactly that will happen has been left to Ofcom.

Social Media Platforms

Social media platforms which allow minor users will be mandated to deploy technical solutions to check the age of users before serving content. This is clear from S.12, ‘Safety Duties protecting Children’.

Online platforms must prevent children of any age encountering “primary priority content harmful to children” [S.12 (3)(a)], and must “protect children in age groups judged to be at risk of harm from other content that is harmful to children (or from a particular kind of such content) from encountering it by means of the service” [S.12 (3)(b)]. Platforms also now have to consider how to protect children from “features, functionalities or behaviours enabled or created by the design or operation of the service” [S.12 (3)(c)].

Platforms will also have to conduct a risk assessment to explain how they will address children of any age and those in age groups judged to be at risk of harm [S.11 (6)]. They are expected to comply using age assurance, age verification or age estimation [S.12(4), 12(6) and S.12(7)]. Age estimation likely involves estimating age based on biometric data – essentially, using an algorithm to scan a photo or video of the user.

What Content is Covered? 

The Bill describes two types of content: primary priority content and priority content. But there’s little relevant distinction in practice. Children must be “prevented” from access to primary priority content, which suggests they must be blocked from accessing it at all times, whereas children should be “protected” from coming across priority content, but the measures required are the same. The Bill does not explain the distinction between “prevent” and “protect” in this context.

“Primary priority content” has been confirmed in the law. The list specifies pornographic content, but also includes content encouraging, promoting or providing instructions for suicide, self-harm (including poisoning) and eating disorders [S.61]. Priority content is anything depicting violence against people or animals (including fictional animals) [S.62 (14)], bullying content, abusive content related to a number of protected characteristics, content that promotes dangerous stunts (such as the cinnamon challenge), and content which encourages people to “ingest, inject, inhale or in any other way self-administer” a physically harmful substance, or any substance in quantities which would be harmful [S.62 (9)].

How Will Age Verification Work? 

Age verification is defined as any measure to verify the exact age of a user. In practice, there are two types of verification. The first, commonly called age verification, usually involves confirming a user matches with government issued identification. The second is age estimation, a measure intended to estimate the age or age range of a user based on their appearance. Self-declaration will not be accepted for compliance purposes. Providers will have to design their services to take account of the needs of children of different ages, and ensure that there are adequate controls over the use of their service by children [S. 7(4)]. They can only conclude that children cannot access their services by implementing age verification in such a way that children cannot normally access the service [S.12].

Compliance will be compulsory unless the terms of service of the platform explicitly prohibit the content that is being addressed.

Providers will have to choose systems that are “highly effective at correctly determining whether or not a particular user is a child” [S12 (6)]. Providers can even be required to distinguish between children of different ages, for the purpose of determining whether they can be permitted to access certain content.

There is no privacy-protective age estimation or verification process currently in existence that functions accurately for all users. France’s National Commission on Informatics and Liberty (CNIL) published a detailed analysis of current age verification and assurance methods. It found that no method has the following three important elements: “sufficiently reliable verification, complete coverage of the population, and respect for the protection of individuals’ data and privacy and their security.” In short, every age verification method has significant flaws.

These systems will collect data, particularly biometric data. This carries significant privacy risks, and there is little clarity in the Bill about how websites will be expected to mitigate these risks. It also carries risks of incorrect blocking where children or adults would be locked out of content by an erroneous estimate of their age. This risk is recognised by the inclusion of a requirement for providers to consider complaints by users whose age has been incorrectly estimated [S 32 (5)(D)].

Ofcom could minimise the damage of this Bill, as they are required to produce a code of practice on age assurance. The first principle Ofcom should adopt is that age assurance or age verification systems should be effective at correctly identifying the age or age-range of users, and that there should be competition among providers, so users concerned about privacy and security can opt for a provider they trust. The pressure will be on Ofcom to ensure that platforms implement age verification or age assurance, and this will take priority over any balancing of free expression rights. This poses a risk to the fundamental rights of huge numbers of users.

Choices for Providers

Overall, there are foreseeable problems with this entire approach. There is a significant risk that young people – who could be as old as seventeen – are banned from large swathes of the web. They may well be banned entirely from some platforms and services. Alternatively, large swathes of content will be removed for all users, including adults, due to over-moderation by providers operating under a strict liability regime. Those users, whilst given an option to complain, may find it difficult to do so. Providers will face a stark choice: age-gate at the site level and block children entirely, keeping them on the outside; sanitise their entire site to child level; or, if they want to do neither, implement age-gating at the content level.

The other option is that providers choose not to serve the UK at all.

Risk Assessments 

Online platforms must also complete risk assessments – a task that may be difficult, if not impossible, for many services. In addition, they must report how they will address children of any age and those in age groups judged to be at risk of harm [S.11 (6)].

A risk assessment also must determine the number of children who could encounter primary priority content on the service, and there must be a separate assessment for each type of content. The platform must re-work the risk assessment every time they have a system re-design. The first risk assessment must be carried out within three months of the Bill coming into law, and records must be kept of each one.

All of this must be done within the first six months after the Bill gets Royal Assent.

Jason Kelley

Podcast Episode Rerelease: Securing the Vote

This episode was first published on May 24, 2022.

U.S. democracy is at an inflection point, and how we administer and verify our elections is more important than ever. From hanging chads to glitchy touchscreens to partisan disinformation, too many Americans worry that their votes won’t count and that election results aren’t trustworthy. It’s crucial that citizens have well-justified confidence in this pillar of our republic.

Technology can provide answers - but that doesn’t mean moving elections online. As president and CEO of the nonpartisan nonprofit Verified Voting, Pamela Smith helps lead the national fight to balance ballot accessibility with ballot security by advocating for paper trails, audits, and transparency wherever and however Americans cast votes.

On this episode of How to Fix the Internet, Pamela Smith joins EFF’s Cindy Cohn and Danny O’Brien to discuss hope for the future of democracy and the technology and best practices that will get us there.

In this episode you’ll learn about:

  • Why voting online can never be like banking or shopping online
  • What a “risk-limiting audit” is, and why no election should lack it 
  • Whether open-source software could be part of securing our votes
  • Where to find reliable information about how your elections are conducted

Pamela Smith, President & CEO of Verified Voting, plays a national leadership role in safeguarding elections and building working alliances between advocates, election officials, and other stakeholders. Pam joined Verified Voting in 2004, and previously served as President from 2007-2017. She is a member of the National Task Force on Election Crises, a diverse cross-partisan group of more than 50 experts whose mission is to prevent and mitigate election crises by urging critical reforms. She provides information and public testimony on election security issues across the nation, including to Congress. Before her work in elections, she was a nonprofit executive for a Hispanic educational organization working on first language literacy and adult learning, and a small business and marketing consultant.


Hi, this is Cindy Cohn, the executive director of the Electronic Frontier Foundation. With the latest indictments out of Georgia and the 2024 elections coming up on us soon, the security and trustworthiness of our elections are once again top of mind for many people. And that reminded us of a great conversation my former co-host Danny O’Brien and I had in season 3 with Pam Smith, the CEO of Verified Voting. Pam gave us a pathway to respond when we hear about election problems, and also gave us a lot of hope about how we can all be sure that the person who gets the most votes is indeed declared the winner of our elections. So we thought we’d share that episode with you again. I hope you enjoy it.


It's not like banking and shopping online and other things that don't require secrecy and disassociating the identity of the person doing the transaction from the content of the transaction. And that's why internet voting is so challenging. If you were to send in your ballot remotely and then call the election official and say, "Hey, it's Pam. I sent my ballot, I voted for candidate A, is that what you've got?" – that's not how elections work, first of all. But if it were, why not just do that and skip the send? Just say, "Hey, I want to vote for candidate A, could you mark that down for me?" That would actually be safer. It wouldn't be private, but neither is internet voting.

That's our guest, Pam Smith. She's the CEO of Verified Voting. And today she's going to be joining us to explain how digital technologies can help secure elections but we are also going to talk about how we need to keep a clear separation between our actual votes and the internet. 

Pam's going to shed some light and tell us how we can protect the entire process, from voter registration to vote verification through to a risk-limiting audit. She'll tell us how to build a system that lets everyone feel comfortable that the candidate with the most votes was actually the one chosen.

I'm Cindy Cohn, the executive director of the Electronic Frontier Foundation.

And I'm Danny O'Brien, special advisor to the EFF. Welcome to How to Fix the Internet, the podcast where we explain some of the biggest problems in tech policy and examine the solutions that'll make our lives better.

Hi, Pam. You and I go way back and I currently serve on the board of advisors of Verified Voting.  And I'm so excited to have you here today so we can dig into these things. 

It's great to be with you again.

So we find ourselves in a very strange situation, you and me and others who care about election integrity, where some of the arguments that we have been using for many, many years to try to make our elections more secure are being picked up and used by people who I would say don't have that same goal. 

Well I think people legitimately want to know that elections are righteous, why wouldn't they? But I think the undermining of the public's ability to trust and to know how to trust in elections is really one of the more severe dangers to democracy today. As long as there have been elections, there have been problems, issues, challenges, and even tampering with elections, that's not new. Those issues are different at different points in history. Starts out with who gets the vote and who doesn't. But also back in the day, communities used hand count votes with the whole public watching. And it was very transparent, it was low tech, no problems, but it was also not private, not secret, and there were very few voters. 

Now elections are carried out with software and computerized systems in most aspects of elections and things can be hacked and tampered with and can have failures and bugs and glitches. People need to understand technology touches their elections in many places. How do we know that it's secure? So what we do is look at what are the basics in securing elections. It’s the same as securing anything computerized, it's keeping systems up and running, it's protecting data from both malfeasance and malfunction, and it's being able to recover when something goes wrong, having that resilience.

Could you give us an example of one of the things that people were very worried about, that election officials could easily explain? 

Well, probably the biggest one, and this was anticipated, was the fact that not all the votes are going to be done being counted on election night, they're just not. And especially in 2020, where you add one more layer of complexity called a pandemic. So it made a lot of things different. When the ballots come in, if they came in before election day, my county prepares them for counting and runs a tally. First thing after the polls are closed, they can report out those absentee ballots. But those are just the ones they've already gotten in, that's not the polling place ballots, that's not the ones we allow to arrive late as long as they were postmarked on time.

So there's many more ballots to be added into that count, that's just the initial count. I think people don't know that the initial count is not the official count, and that's important to know. It takes a while for all of the ballots to be processed and counted, even to make sure that they were legitimate ballots and included properly in the count. And that end part is called certification of the election. When we certify in each jurisdiction, that's the final count.

And this is the difference between elections in the United States in elections in a lot of places around the world, we vote on a lot of things.

It's true.

And we have complicated ballots that might change across the street depending on what precinct or whatever that you're in. Even in a place where people live very close together, there are different kinds of ballots because people are voting for their very local representative as well as all the way up to the federal level. And elections are generally governed as a legal matter locally as well. So the US constitution guarantees your right to vote, but how that happens varies a lot. One of the things that Verified Voting created a long time ago, but which I still think is a tremendously useful tool, is something called The Verifier, which is a website that you can go to and type in where you live and it will tell you exactly what counting technologies are used. 

And I think this touches on the key point here, how technology can complicate or even undermine people's trust in what is already a very complicated system. Again, a lot of the conversations in the last election were about, has this been hacked? And how do we prove whether it has or it hasn't been hacked? I know Verified Voting and EFF were very involved in the early effort to require paper records, a paper trail of digital voting technology, what we call voter verified paper records back in the 2000s. So can you just talk a little bit about where the role paper, of all things, plays in a more high tech voting system?

It's interesting to note when we got started back in 2004, there were only about eight states with a requirement to use paper and only about three had a requirement to check the paper later with an audit.

And when you say paper here, it's literally a printout. You vote and then there's a paper record somewhere that you voted in a certain way.

It's a physical record that you get to check to make sure it was marked the way you intended it.

Got it.

You may be using an interface, a machine that prints that out, but you may be marking a physical ballot by hand as well. And it's that physical record of your intent that is the evidence for the election. 

So here's the thing about paper, you need to know that you can cast an effective ballot and that means you're getting the right ballot, that it's complete, there's no missing candidates or contests on it, it's feasible to mark. If you have to use an interface, that that interface is working, up and running, and that you have a way to check that physical ballot and cast it safely and privately. Then that ballot gets counted along with all the other ballots and you need a way to know it was counted correctly.

And that you can demonstrate that fact to the public, to the satisfaction of those who are on the side of the losing candidate or issue – and that's the key. If you have that... This is what was said about the 2020 elections: Chris Krebs, who led the cybersecurity agency at DHS that works on elections, called the 2020 election the most secure in American history. The leg he had to stand on for that was the fact that almost all jurisdictions were using paper, and almost all jurisdictions were doing some audit to check after the fact. And that's why it matters – you have to have that record.

I know that some of the work that's come out of what you've been doing then has been this idea of risk limiting audits.  I'm addressing this to both of you, because I know you both worked on this, but the risk limiting audits and how they work.

Audits get done in a variety of industries – there are audits in banking, the IRS might audit you. It's not always seen as such an attractive word. But in elections, it's really important. What it means is you're doing a hand-to-eye count: you're visually looking at those paper ballots and comparing a count of a portion of those ballots with the machine count. Software can go wrong, it can be badly programmed, it could have been tampered with. But if you have that physical record, you can count a portion of it, check that it's matching up, and if it's not, figure out where the problem is. That's what makes the system resilient.

A risk limiting audit is one that relies on the margin of victory to determine how much you have to count in order to have a strong sense of confidence that you're not seating the wrong person in office. So it's a little bit like polling. If you poll on a particular topic, you want to know how the public feels about something, you don't have to ask every single person, you just ask a percentage of them. You make sure it's a good cross section, you make sure it's a well randomized sample. And all other things being equal, you're going to know how people feel about that topic without having to ask every single person. And with risk limiting audits, it's the same kind of science, it's using a statistical method to determine how much to count.
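
The polling analogy above can be made concrete with a little arithmetic. A commonly cited rule of thumb for ballot-polling risk-limiting audits is that the expected number of ballots to inspect grows with the inverse square of the margin of victory. The sketch below is a toy approximation only – real risk-limiting audits (such as the BRAVO method) use sequential statistical tests rather than a fixed sample size, and the constant here is a simplification assumed for illustration:

```python
import math

def approx_ballot_polling_sample(margin: float, risk_limit: float = 0.05) -> int:
    """Rough expected sample size for a ballot-polling risk-limiting audit.

    margin: winner's share minus runner-up's share, as a fraction (e.g. 0.05).
    risk_limit: the chance the audit fails to catch a wrong outcome.

    Toy rule of thumb: sample size scales with ln(1/risk) / margin^2.
    Real audits sample sequentially and stop when the evidence is strong.
    """
    if not 0 < margin < 1:
        raise ValueError("margin must be a fraction between 0 and 1")
    return math.ceil(2 * math.log(1 / risk_limit) / margin ** 2)

# Wide margins need few ballots; razor-thin margins approach a full hand count.
for m in (0.20, 0.05, 0.01):
    print(f"margin {m:>5.0%}: ~{approx_ballot_polling_sample(m):,} ballots")
```

This reflects the behaviour Pam describes: a comfortable 20-point margin can be confirmed with a few hundred ballots, while a margin under one percent demands so many that a full recount may be the practical choice anyway.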

We worked really hard to try to make sure that there was paper. And then we realized that we had to work really hard to make sure that the paper played the role that it needed to play when there are concerns. If you only do this when you're worried that there's a problem, you're really not fixing the situation. It needs to be something that happens every time so people can build their trust in the things.

But also it needed to be lightweight enough that you could do it every single time and you don't end up with these crazy debacles, like we saw in Arizona.  Can you give us an update? How's it going trying to get risk limiting audits regularized in the law? I know this is an area where you guys do a lot of work.

Well, this extremely geeky term, risk limiting audits, is actually getting wide traction. So it's good news.


People I think are understanding it. And one of the things that we do is support election officials through the process. So maybe their state passes a law that says you'll do risk limiting audits, we help them understand how to do it and answer all the questions that might come up when they're doing it. They then use that to demonstrate to the public that it's working right and it's a tool that they are really adapting to and adopting well. There's more to do. And I think what's important to know is that really any audit is going to have some utility in telling you how your equipment's working. Risk limiting audits are a more robust form of auditing. And they will let you not do as much work if the margin is wide and they will call for more work if the margin is very narrow, but you want that anyway. You might go to a full recount in a very tight margin, talking about Florida 2000, that margin would probably necessitate that full hand recount anyway. But doing a risk limiting audit, you can get to that kind of confidence.

“How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.

Let me flip to where I love to go, which is: what does it look like if we get this right? What are the values? What does it look like if we have a world in which we have technology in the right places in our systems, but also one that we can trust?

I think that getting it right means that voters know the election was fair because it was conducted securely. And they know how to know that. That they know where the ground truth is and how to figure it out, that they're participating actively in watching, that they're not being hindered by failed technology at whatever point that intersects with the election. Whether it's registration or checking into the polling place or actually using a device to mark your ballot or the counting process, that nowhere along on the path they're being hindered in that process. And that means more people can participate who want to participate. This doesn't address things like voter suppression, that's a separate different issue. And it's an issue about security because elections really are only secure if everybody who wants to gets to participate and can cast an effective ballot.

Could you explain why we want to fix the internet, we want to make the world better, and why voting over the internet isn't on the list of things that we think would make a better world?

One of the things that we talked about is the importance of the paper, which the voter gets to check at the time they're voting to make sure it represents what they meant to vote for. When you use the internet to transmit votes, you lose that. What arrives at the election's office – if it arrives at the election's office – may or may not represent what the voter intended to vote for. And there's no real way to control for that right now. Maybe in some future, on a different internet that was designed for security and not just for open communication, it's possible to do. But you have all kinds of issues with internet voting that include things like voter authentication attacks, malware on the voter's devices (not just in the election's office), denial of service attacks, server penetration, spoofing – all kinds of things can go wrong.

And ballot privacy is tremendously important if you really want to make sure that people can vote freely for who they want. You don't want them subjected to either their boss or the other people who live with them or their community, being able to see how they vote. That's not a free vote, that can often be a coerced vote. So a secret ballot is just a piece of how elections work, not just in the US but in most places of the world for really good reason.

The internet has other ways in which it's hazardous to elections health. It can be used for attacks on election officials, which we're seeing a lot these days, attacks on votes, attacks on voters’ registration. We saw in 2016 state databases being tampered with from afar. And other kinds of information hacks. Just really by way of disinformation, attacks on democracy and understanding how to know what you need to know. If we're thinking of about what would the world look like if we got it right, election officials are protected, votes are secure, and voter registration is secure and there's ways for people to check and make sure of that. And fail safes in case something happens last minute. So all of those kinds of things are really important. Fighting disinformation is probably as important as the rest.

I thought it was very fascinating in the last couple of elections in the US – I was talking to the cybersecurity side of all of this, and it's very difficult to get to the bottom of these things. But one thing really stuck with me, which is that the officials I was talking to said, "Well, look, most people's model of this is someone is hacking to change the results to favor a particular person. But in fact, if you want to introduce instability into a country, the best thing you can do is just undermine faith in the system itself. You don't actually have to achieve a result, you just have to inject a sufficient amount of ambiguity into the result. Because once that trust is gone, then it doesn't matter what the result is, because the other side is going to assume something happened behind the scenes." So is part of this to make the whole system transparent in a way that the average person can understand what's going on?

We don't expect voters to have confidence; our mission has never been to make voters feel confident, it's not about that. It's about giving them justified confidence that the outcome was right. And that's different.

But let's just say I hear that there's a problem in a critical place. What do I ask myself? And what do I look for to be able to tell whether this is a real problem or perhaps not a real problem that's being overblown or just misunderstood?

Well, I think you want to know what the election official says. There are rare exceptions, but nearly all the election officials I know are simply heroes, frankly. They're working with minimal budgets and doing very long hours on very tight deadlines that are unforgiving. But what they do is really to address problems: anticipate problems, avoid them, and if they come up, address them. So you need to know what the election official is saying. If it's observable, go observe. If there's a count happening that you can watch, go watch that count. But you can't get your information from somebody's cousin on Facebook.

Give us an example of where there was a concern and we were able to put it to rest or there was a concern and it went forward.

One of the things we'd hear on election day at election protection was we'd get a call from somewhere and they'd say, "I've marked my ballot and I wanted to go cast it in this scanner like I usually do. But they told me not to and they put it in a separate bin." Why did they do that? Are they taking those ballots away? Are those not going to be counted? What's happening there? And we were able to tell them that there is actually a legitimate reason for that. What happens sometimes in a ballot scanner is that the bin gets full, the ballots don't fall in a straight line, and it may jam. And if it's jammed, you don't want the ballots to get destroyed by trying to keep feeding more and more in. That bin actually has a name, the auxiliary bin; it's the extra bin for when this happens, and it is attached to the ballot box. And what happens, once they clear that jam, which they may not be able to do in the middle of the busiest time of voting, is that they feed those ballots through.

All right.

That actually is a real simple problem with a simple resolution. But when you can tell people, "This is how that works" it puts their mind at rest.

Which brings me, I think, to something else that people often, both on the left and right, worry about, which is the companies behind these machines. How can we reassure people that there isn't something being underhand in the very design of the technologies?

We used to say that it shouldn't matter if the devil himself designed your voting system, as long as there's paper and you're doing a robust check on the paper, you should be able to solve for that. That's what makes it resilient and that's why we want to make sure every voter, not just 90% or more, but all of the voters are living in a jurisdiction where that paper record is there for them to check.

I just think overall, this is technology, it needs to be subjected to the same things we do in other technology to try to continue to make it better. And that includes a funding stream so that when new technology is available, local election officials can actually get it.

Elections are woefully underfunded. And there's a conference that happens in California every year that's called New Laws. This is a conference that election officials hold so that they can examine all the new laws that have been passed that affect how they run elections. It happens every year. So they are constantly and continuously having to update what they do and make changes to what they do. Oftentimes there are unfunded mandates that have to do with what they do. Asking them to do additional things is hard, especially if you're not going to pay for it. So it's really important that there is federal funding for elections that gets down through to the states and to the counties to support good technology. Yet things like internet voting, the most dangerous form of voting, don't have to go through any certification, because no one has quite been able yet to write standards for how you would do it securely.

Because you can't right now.

Because you can't.

With our current internet.

Not that we don't want to, you just can't.

I have one more thing to throw in, which people often say: "Oh, we should do it like this." I'd love to know your opinion on it because our community is often like, "Well, we need an open source voting machine or a voting system. And that would fix a set of problems." Certainly the idea is that would be more transparent and you would feel more confident about it. Do you think that's an answer or part of the answer?

I think it's a very good thing. It's what some people might call necessary but not sufficient. You're still going to need to do audits, you're still going to need paper, you still need a resilient system. But open source helps make sure that you can anticipate some of the issues right away because there are lots of eyes on the problem. With voting technology though, it gets tricky. It's not quite the same as other kinds of open source because who's responsible for what's the most current iteration? This isn't something that people can just keep applying fixes to randomly; there has to be a known version that's being used in a particular election. So there has to be an organization or entity that governs how that's being used.

Understanding how this technology works is tremendously important for all of our security. And it's the classic principle that security through obscurity doesn't work; our friend Adam Savage just reminded us of this. This is a whole other wing of secure elections, but the only way you know something is secure is that a bunch of smart people have tried to break it and they can't.

Don't leave weak spots if you can help it because if somebody's looking to tamper, they're going to find the weakest point. So it really is crucial to try and secure all parts of our elections. 

What's the endgame here? You're clearly deeply in the trenches trying to incrementally improve these systems. But do you ever have a dream where you envisage a world where maybe we do have a solution to voting on the internet or we do use a new technology to make things better?

Moving towards those options includes things like if you need to vote by mail, you can vote by mail. If you want to vote in person in a polling place, that's available to you. If you need an accessible device, one that's really, really accessible and usable, it's available to you. And it works and it was set up before you got there, so it's readily available. I think knowing that every jurisdiction is using a system that's resilient to any kind of failure, hurricane, power outage, anything; that there's a physical ballot to mark; that it's easy to check; that it's a usable ballot, not confusing, so that you don't end up missing a contest or anything like that. It's designed well; ballot design is really important. All of those small pieces are only possible if there's enough funding for elections. If we believe in our democracy and we believe in having good elections, then that means having good voting systems, good practices, and the resources to carry those out.

Right now, election officials really struggle to recruit enough poll workers for every election. Of course, that got a little harder with the pandemic going on. Many poll workers are of an older age cohort, so we need younger poll workers. And a lot of really smart programs have led to recruiting high school students to be poll workers and it's been magical. So I think really getting everyone engaged, getting everyone to understand where they can find the ground truth about elections, and feeling the confidence that they need to really happily participate and celebrate being part of this democracy, that's the most important thing. And that's what I envision for our future.

Thank you so much for taking the time to talk to us. This has been a fascinating conversation. There's so much talk about elections and election integrity right now. And it's great to have a sane, stable voice that's been here for a long time, which is you and Verified Voting on the case. So thanks.

Thank you, Cindy. And thank you, Danny. Thanks for doing this.

It's always good to talk to somebody like Pam, who has years of experience, especially when a topic is suddenly as controversial or in the public eye as election integrity. I did think given how controversial it is these days, Pam was reassuringly genial. She established that we need to get to a ground truth that everyone can agree on and we need to find ways, technological or not, to reassure voters that whatever the result, the rules were followed.

I especially appreciated the conversation about risk limiting audits as one of the tools that help us feel assured that the right person won the election, the right issue won the election. Especially that these need to be regularized. EFF is audited, lots of organizations are audited. That this is just somewhat built into the way we do elections so that the trust comes from the idea that we're not doing anything special here, we always do audits and we scale them up depending on how close the election is. And that's just one of the pieces of building trust that I think Verified Voting has really spearheaded.
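The idea that audit sample sizes scale with how close an election is can be sketched numerically. A minimal, hypothetical Python sketch follows; the `rho` constant and the `n ≈ rho / margin` rule of thumb are assumptions drawn loosely from the ballot-comparison risk-limiting-audit literature, not the exact procedure any jurisdiction uses:

```python
import math

# Illustrative sketch only: a rough initial sample size for a
# ballot-comparison risk-limiting audit. The rule of thumb
# n ~ rho / margin (with rho tied to the chosen risk limit)
# is an assumption for illustration, not an official formula.
def rla_initial_sample_size(margin: float, rho: float = 6.2) -> int:
    """Approximate ballots to audit for a given diluted margin.

    margin: winner's margin as a fraction of ballots (e.g. 0.05 for 5%).
    rho: constant associated with the risk limit (~6.2 is often
         quoted for a 10% risk limit; treated here as an assumption).
    """
    if not 0 < margin <= 1:
        raise ValueError("margin must be a fraction in (0, 1]")
    return math.ceil(rho / margin)

# A landslide needs only a small hand count; a squeaker needs far more.
for m in (0.20, 0.05, 0.01):
    print(f"margin {m:.0%}: audit roughly {rla_initial_sample_size(m)} ballots")
```

The point the sketch makes is the one Pam describes: the audit is always run, and only its size changes with the margin, so close contests automatically get more scrutiny.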

The other thing I really liked was the ways that she helped us think about what we need to do when we hear those rumors of terrible things happening in elections far away. I appreciated that you start with the people who are there: look first to the election officials and the organizations who are actually on the ground in the place where you hear the rumors, but also look to the election protection orgs, of which Verified Voting is one but not nearly the only one, that are really working year round and in a nonpartisan way on election integrity.

And another leg of the stool is transparency throughout all of this process. It's key for resolving the ambiguity. I do appreciate that she pointed out that while open source code is great for giving some element of transparency, it's necessary but not sufficient. You have to wrap it in a trusted system. You can't just solve this by waving the free software license wand all over it.

I also appreciate Pam lifting up the two sides of thinking about the internet's involvement in our elections. First of all, the things that it's good at: delivering information, making sure ballots get to people. But also what it's not good for, which is actual voting, and the fact that we can't get ground truth in internet voting right now. And part of the reason we can't, and what makes this different than doing your banking online, is the need for ballot secrecy, which has a tremendously long and important role in our elections.

But that said, I do think that ultimately there was a positive thread going through all of this. Many things in this area in the United States have gotten better. We have better machines, we have newer machines, we have less secrecy and proprietary barriers around those machines. Often when we ask people about their vision of the future, they get a little bit thrown, because it is hard to describe the positive side. But Pam was pretty specific, and also pointed out perhaps why it's such a challenge: she highlighted that what we want in our future is a diversity of solutions. And of course, you need the correct financial and social support in the rest of society to make that vision happen.

Thanks so much to Pam Smith for joining us and giving us, honestly, so much hope for the future of our democracy and our voting systems.

If you like what you heard, follow us on your favorite podcast player and check out our back catalog for more conversations on how to fix the internet. Music for the show was created for us by Reed Mathis and Nat Keefe of BeatMower. This podcast is licensed Creative Commons Attribution 4.0 International, and includes music licensed under Creative Commons Attribution 3.0 Unported by its creators. You can find those creators' names and links to the music in our episode notes or on our website. How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology. I'm Danny O'Brien.

And I'm Cindy Cohn. Thank you so much for joining us.

Josh Richman

ISPs Should Not Police Online Speech—No Matter How Awful It Is.

3 weeks 2 days ago

Entrusting our speech to multiple different corporate actors is always risky. Yet given how most of the internet is currently structured, our online expression largely depends on a set of private companies ranging from our direct Internet service providers and platforms, to upstream ISPs (sometimes called Tier 2 and 3), all the way up to Tier 1 ISPs (or the Internet backbone) that have no direct relationships with most users.

Tier 1 ISPs play a unique role in the internet “stack,” because numerous other service providers depend on Tier 1 companies to serve their customers. As a result, Tier 1 providers can be especially powerful chokepoints—given their reach, their content policies can affect large swaths of the web. At the same time given their distant relationship to speakers, Tier 1 ISPs have little if any context to make good decisions about their speech.

At EFF, we have long represented and assisted people from around the world—and across various political spectrums—facing censorship. That experience tells us that one of the most dangerous types of censorship happens at the site of a unique imbalance of power in the structures of the internet: when an internet service is both necessary for the web to function and simultaneously has no meaningful alternatives. That’s why EFF has long argued that we must “protect the stack” by saying no to infrastructure providers policing internet content. We’ve warned that endorsing censorship in one context can (and does) come back to bite us all when, inevitably, that same approach is used in another context. Pressure on basic infrastructure, as a tactic, will be re-used, inevitably, against unjustly marginalized speakers and forums. It already is.  

So we were concerned when we started hearing from multiple sources that Hurricane Electric, a Tier 1 ISP, is interfering with traffic. Confirmation of the details has been difficult, in part because Hurricane itself has refused to respond to our queries, but it appears that the company is partially denying service to a direct customer, a provider called Crunchbits, in order to disrupt traffic to a site that is several steps away in the stack. And it is justifying that action because activity on the site reportedly violates Hurricane’s “acceptable use policy”—even though Hurricane has no direct relationship with that site. Hurricane argues that the policy requires its direct customers to police their customers as well as themselves.

If the site in question were Reddit, or Planned Parenthood, or even EFF, the internet would be up in arms. It is not, and it’s not hard to see why. The affected site is an almost universally despised forum for hateful speech and planning vicious attacks on vulnerable people: Kiwi Farms. For many, the natural response is to declare good riddance to bad rubbish—and understandably so.

At EFF, our mission and history requires us to look at the bigger picture, and sound the alarm about the risks even when the facts are horrific.

That means we need to say it even if it’s not comfortable: Hurricane Electric is wrong here.  It gives us no joy to call Hurricane on this, not least because many will perceive it as an implicit defense of the KF site. It is not. A site that provides a forum for gamifying abuse and doxxing, whose users have celebrated on its pages the IRL deaths of the targets of their harassment campaigns, deserves no sympathy.  We fully support criminal and civil liability for those who abuse and harass others.

But just because there’s a serious problem doesn’t mean that every response is a good one.  And regardless of good intentions, Hurricane’s role as a Tier 1 ISP means that their interference is a dangerous step. Let us explain why.

For one thing, Tier 1 ISPs like Hurricane are often monopolies or near-monopolies, so users have few alternatives if they are blocked. Censorship is more powerful if you don’t have somewhere else to go.  To be clear, at time of writing, there are two mirrored instances of KF online: one on the clear web at a country code top-level domain, and the other an onion service on the Tor network. So right now this isn’t a “lights out” situation for KF, and generally the Tor network will prevent that from happening entirely. The so-called “dark web” has plenty of deserved ill repute, however, so although it is resistant to censorship by Tier 1 ISPs, it is not a meaningful option for many, much less an accessible one.

Which brings us to the second point: this approach is usually a one-way ratchet. Once an ISP indicates it’s willing to police content by blocking traffic, more pressure from other quarters will follow, and they won’t all share your views or values. For example, an ISP, under pressure from the attorney general of a state that bans abortions, might decide to interfere with traffic to a site that raises money to help people get abortions, or provides information about self-managed abortions. Having set a precedent in one context, it is very difficult for an ISP to deny it in another, especially when even considering the request takes skill and nuance.  We all know how lousy big user-facing platforms like Facebook are at content moderation—and that’s with significant resources. Tier 1 ISPs don’t have the ability or the incentive to build content evaluation teams that are even as effective as those of the giant platforms who know far more about their end users and yet still engage in harmful censorship. ISPs like Hurricane Electric are bound to be far worse than Facebook and its peers at sorting bad content from good, which is why they should not open this door.

Finally, site-blocking, whatever form it takes, almost inevitably cuts off legal as well as illegal speech. It cuts with a chain saw rather than a scalpel.

We know that many believe that KF is uniquely awful, so that it is justifiable to take measures against them that we wouldn’t condone against anyone else. The thing is, that argument doesn’t square with reality, online or offline.  Crossing the line to Tier 1 blocking won’t just happen once.

To put it even more simply: When a person uses a room in a house to engage in illegal or just terrible activity, we don’t call on the electric company to cut off the light and heat to the entire house, or the post office to stop delivering mail. We know that this will backfire in the long run. Instead, we go after the bad guys themselves and hold them accountable.

That’s what must happen here. The cops and the courts should be working to protect the victims of KF and go after the perpetrators with every legal tool at their disposal. We should be giving them the resources and societal mandate to do so. Solid enforcement of existing laws is something that has been sorely lacking for harassment and abuse online, and it’s one of the reasons people turn to censorship strategies. Finally, we should enact strong data privacy laws that target, among others, the data brokers whose services help enable doxxing.

In the meantime, Tier 1 ISPs like Hurricane should resist the temptation to step in where law enforcement and legislators have failed. The firmest, most consistent approach infrastructure chokepoints like ISPs can take is to simply refuse to be chokepoints at all. Ultimately, that’s also the best way to safeguard human rights. We do not need more corporate speech police, however well-meaning.

Electronic Frontier Foundation

Apple, Long a Critic of Right to Repair, Comes Out in Support of California Bill

3 weeks 3 days ago

Apple has announced a surprising stance in support of California’s Right to Repair Act (S.B. 244). This is a sign that the public’s strong support of the right to repair has forced Apple to change its position, and now is the time for you to help keep the pressure on lawmakers to get the Right to Repair Act passed in California.

Apple’s about-face came in a letter to the bill's sponsor, Senator Susan Eggman. Apple's letter marks a significant change from where Apple was on the issue in the past, when reporting in 2017 showed that lobbyists associated with Apple (and other tech companies) fought against the "Fair Repair Act" in New York, and again against the "Digital Fair Repair" Act in 2022. In a letter to New York Governor Hochul, Apple flat out denied the benefits of the bill for consumer choice, safety, and protection of the environment, while raising the specter of dire consequences if others were allowed to compete with them in the repair market. 


Support the "Right to Repair" Act

But in a major shift in policy, Apple says in its letter that it supports the California bill as it stands, as long as it still includes a requirement for repair shops to disclose the use of third party or used parts, and doesn't allow those shops to turn off certain remote locks. Apple has made small concessions to the repair movement with moves like its 2021 launch of its Self Service Repair program, which allows you to order repair parts directly from Apple, but direct support for a bill is a major change for the company.

S.B. 244 raises the bar from right to repair laws recently passed in Minnesota and New York. If passed, S.B. 244 goes further than previous laws by setting a time span requiring manufacturers to make repair parts, manuals, and diagnostic tools available to everyone for three years after the last date of manufacture for products between $50 and $99.99, and seven years for products over $100. It also allows a city, county, or state to bring a case in superior court, as opposed to other laws that can only be enforced by the state attorney general.

This week, supporters assembled a pile of nearly 250 pounds of e-waste—244 pounds, specifically, in honor of the bill—to show how much e-waste is generated every five seconds.

S.B. 244 is not perfect. It doesn't cover cars, farm equipment, medical devices, industrial equipment, or video game consoles, and there are good reasons why independent repair shops need to be able to work with devices’ security systems. If it does pass, there will still be work to do in the future.

The right to repair is an important part of the rights that a device owner should be able to exercise. In addition to repair, we have long fought for the rights of security researchers, consumer protection groups, and other device owners to be able to understand, control, and improve upon the technology they rely upon every day.

Apple's support is a big deal, but the fight is not over. If you're a Californian, you can help! The bill is before the Assembly Appropriations committee, which is its last hurdle before heading to the floor and, hopefully, to the governor's desk for a signature. Please take action to tell your Assemblymember that you support the "Right to Repair" Act today. 



Thorin Klosowski

The Protecting Kids on Social Media Act is A Terrible Alternative to KOSA

3 weeks 3 days ago

A new bill sponsored by Sen. Schatz (D-HI), Sen. Cotton (R-AR), Sen. Murphy (D-CT), and Sen. Britt (R-AL) would combine some of the worst elements of various social media bills aimed at “protecting the children” into a single law. It contains elements of the dangerous Kids Online Safety Act as well as several ideas pulled from state bills that have passed this year, such as Utah’s surveillance-heavy Social Media Regulations law. The authors of the Protecting Kids on Social Media Act (S.1291) may have good intentions. But ultimately, this legislation would lead to a second-class online experience for young people, mandated privacy-invasive age verification for all users, and in all likelihood, the creation of digital IDs for all U.S. citizens and residents. 

The Protecting Kids on Social Media Act has five major components: 

  • Mandate that social media companies verify the ages of all account holders, including adults 
  • Ban on children under age 13 using social media at all
  • Mandate that social media companies obtain parent or guardian consent before minors over 12 years old and under 18 years old may use social media
  • Ban on the data of minors (anyone over 12 years old and under 18 years old) being used to inform a social media platform’s content recommendation algorithm
  • Creation of a digital ID pilot program, instituted by the Department of Commerce, for citizens and legal residents, to verify ages and parent/guardian-minor relationships

All Age Verification Systems are Dangerous — Especially Governments’

The bill would make it illegal for anyone under 13 to join a social media platform, and require parental consent for anyone between the ages of 13 and 18 to do so. Thus the bill also requires platforms to develop systems to verify the ages of all users, as well as determine the parental or guardian status for minors. 

The problems inherent in age verification systems are well known. All age verification systems are identity verification systems and surveillance systems. All age verification systems also impact all users because it’s necessary to confirm the age of all people in order to keep out one select age group. This means that every social media user would be subjected to potentially privacy-invasive identity verification if they want to use social media.


As we’ve written before, research has shown that no age verification method is sufficiently reliable, covers the entire population, and protects data privacy and security. In short, every current age verification method has significant flaws. Just to point out a few of the methods and their problems: systems that require users to upload their government identification only work for people who have IDs; systems that use photo or video to guess the age of a person are inevitably inaccurate for some portion of the population; and systems that rely on third-party data, like credit agencies, have all of the problems that this third-party data often has, such as incorrect information. And of course, all systems could tie a user’s identity to the content that they wish to view. 

An Age Verification Digital ID “Pilot Program” is a Slippery Slope Towards a National Digital ID 

The bill’s authors may hope to bypass some of these age verification flaws by building a government-issued digital ID system for the (voluntary) use by all citizens and lawful residents of the U.S. to be able to verify their ages and parent/guardian-minor relationships on social media platforms (although this “pilot program” would likely not be completed before the age verification requirements went into effect). But this program risks falling down a slippery slope toward a national digital ID for all purposes. 

Under the bill, individuals would have to upload copies of government-issued and other forms of identification, or people’s asserted identities and ages would be cross-referenced with electronic records from state DMVs, the Internal Revenue Service, the Social Security Administration, state agencies responsible for vital records, “or other governmental or professional records that the Secretary [of Commerce] determines are able to reliably assist in the verification of identity information.” 

EFF and other civil liberties organizations have long been critical of digital ID systems and policies that would move us toward them. While private, commercial age verification systems come with particular concerns, government versions that rely on digital IDs are also dangerous. 


Mission creep is a serious concern. The intention of this ID system may only be to authorize social media access; the bill states that the pilot program credential “may not be used to establish eligibility for any government benefit or legal status.” But it’s unlikely that age and parental status verification would be its only use after its creation. Congress could easily change the law with future bills. Just look at the Social Security Number–once upon a time, it was only meant to allow Americans to participate in the federal retirement program. Even the Social Security Administration admits that the number “has come to be used as a nearly universal identifier.” Online government identity verification for accessing social media is already dystopian; who knows where the system would end up after it’s in place. Without very careful and thoughtful management and architecture, a digital ID system could lead to loss of privacy, loss of anonymous speech, and increased government surveillance. 

Data sharing concerns also don’t disappear because the government is involved—in fact, they may be more acute. In third-party age verification systems, a private company generally acts as a middle-man between a government and the requesting site or platform. In fact, the bill contemplates the use of “private identity verification technology providers" as part of the pilot program. The third party may collect a user’s documentation and compare that to a government database, or compare a user’s biometric information with government records. This creates the opportunity, without more protection via regulation or other means, for the third party to collect an individual’s personal data and use it for their own commercial purposes, including by selling the data or sharing it with others. The data is also at risk of being accessed by unknown and innumerable nefarious individuals and entities through a data breach.

Additionally, current and past practices of government data sharing should make anyone leery of uploading their private information to the government as well, even to an agency that theoretically already has it. All age verification systems are surveillance systems as much as they are identity verification systems. Government agencies sharing data with one another is already a danger—as of 2020, the FBI could search or request data from driver’s license and ID databases in at least 27 states. The total number of DMVs with facial recognition at the time was at least 43, with only four of those limiting data sharing entirely. That puts two-thirds of the population of the U.S. at risk of misidentification. 

From a practical perspective, it’s unclear how effective or accurate such a system would be: it may sound simple to compare a person’s uploaded record with one that’s on file, but people without IDs, those whose names have changed, and anyone who has ever experienced a snafu in government document processing know better. As an example, in 2022, the IRS backed away from a decision to use a third-party identity verification system—specifically because it forced people to use flawed facial recognition and endure four-hour waits to be verified.

Parental Consent for Older Minors Is the Wrong Approach to Safety Online

Under this law, anyone age 13 to just under 18 will be required to obtain parental consent before accessing social media. We are against such laws.

First, requiring parental consent for teens’ use of these platforms would infringe on teens’ free speech, access to information, and autonomy—which also must include, for older teens, privacy vis-à-vis their parents. The Supreme Court has repeatedly recognized that young people enjoy First Amendment protections for expressing themselves and accessing information. The Court has stated, for example, that speech generally “cannot be suppressed solely to protect the young from ideas or images that a legislative body thinks unsuitable for them.” 

The world envisioned by the authors of this bill is one where everyone has less power to speak out and access information online

Access to private spaces online for research, communication, and community is vitally important for young people. Many young people, unfortunately, encounter hostility from their parents to a variety of content—such as information about sexual health, gender, or sexual identity. (Research has shown that a large majority of young people have used the internet for health-related research.) The law would endanger that access to information for teenagers until they are 18. 

Also, it is unfortunate but true that some parents do not have their children’s best interests in mind, or are unable to make appropriate decisions for them. Those young people—some of whom are old enough to work a full-time job, drive a car, and apply to college entirely on their own—will not be able to use some of the largest and most popular websites without parental consent. It goes without saying that those most harmed by this law will be those who see social media as a lifeline—those with fewer resources to begin with.

Second, Congress should not remove parents’ ability to decide for themselves what they will allow their child to access online, the vast majority of which is legal speech, by assuming that parents don’t want their children to use social media without parental consent. Parents should be allowed to make that decision without governmental interference, by using already available filtering tools. 

Worse, not only would minors between 13 and 18 be required to gain parental consent, but under this law anyone below the age of 13 would be banned from social media entirely—even if their parents approve. This outright ban is a massive overreach that goes far beyond current laws like COPPA, which prohibits social media and other online companies from collecting data for commercial purposes from children under age 13 without parental consent. Under this law, children would be banned even from social media platforms that are designed specifically for kids—again, whether parents approve of its use or not.

Third, verification mechanisms will invariably stumble when dealing with a variety of non-traditional families. It’s unclear how age verification and parent/guardian consent will function for children with different last names than a parent, those in foster care, and those whose guardians are other relatives. Children who, unfortunately, don’t have an obvious caregiver to act as a parent in the first place will likely be forced off these important spaces entirely. Though it’s not explicit in the bill, if a person violates the law by misrepresenting their identity—say, if you’re a minor pretending to be a parent because you don’t have an obvious caregiver—you could be charged with a federal criminal offense, a charge that is otherwise rare against children. The end result of these complex requirements is that a huge number of young people—particularly the most vulnerable—would likely lose access to social media platforms, which can play a critical role for young people in accessing resources and support in a wide variety of circumstances.

The Protecting Kids on Social Media Act is a Bad Alternative

While this bill is technically an alternative to the Kids Online Safety Act, it is a bad one. As we’ve said before, no one should have to hand over their driver’s license just to access free websites. Having to hand over that driver’s license to a government program doesn’t solve the problem. The world envisioned by the authors of this bill is one where everyone has less power to speak out and access information online, and we must oppose it.  



Jason Kelley

Tornado Cash Civil Decision Limits the Reach of the Treasury Department’s Actions while Skirting a Full First Amendment Analysis

3 weeks 6 days ago

A District Court recently considered a civil claim that the Treasury Department overstepped when it listed Tornado Cash on the U.S. sanctions list. The decision took some steps, if not enough, to address EFF’s concerns about coders’ rights.  

In the case, Van Loon v. Department of the Treasury, EFF argued in an amicus brief that the government needed to do more to ensure that coders’ First Amendment rights were protected when it took the unprecedented step of placing an open-source project on the Specially Designated Nationals (SDN) sanctions list. That led GitHub to temporarily take down the project and essentially halt all additional work on it. While the government later clarified in an FAQ—issued after EFF and others had publicly complained—that it did not intend to prohibit “discussing, teaching about or including open source code in publications,” we argued that this didn’t go far enough to protect coders. We urged the Court to require the Treasury Department to follow the strict limits of the First Amendment and to be more clear and careful in its actions.

The District Court did not agree with us that the government overstepped the First Amendment here, and dismissed the case overall. But, in interpreting the government’s actions, it made even clearer that the scope of the sanction does not include coders developing the code. The Court said:   

Similarly, amicus curiae Electronic Frontier Foundation argues that OFAC’s designation has had a chilling effect on certain code developers. However, OFAC’s designation blocks only transactions in property in which Tornado Cash holds an interest, such as the smart contracts. It does not restrict interaction with the open-source code unless these interactions amount to a transaction. . . . Developers may, for example, lawfully analyze the code and use it to teach cryptocurrency concepts. They simply cannot execute it and use it to conduct cryptocurrency transactions.

While we are disappointed that the Court did not conduct a full First Amendment analysis and directly require the Treasury Department to take more care, both here and in any future situations where open source projects interact with federal sanctions laws, the Court’s analysis should give anxious coders some relief.  The Court clearly draws a sharp line between actually using the code to conduct transactions and the role of coders in developing and using the code outside of actual transactions. EFF will continue to monitor this case and others where coders are put at risk.

In other Tornado Cash news, the government recently issued indictments against two of the key developers of Tornado Cash for money laundering. This is another area we will be watching closely, to ensure that any prosecution is not based merely upon coders developing tools, but is instead targeted at actual illegal activity and transactions.  

Cindy Cohn

Fourth Circuit Decision in Marriott Data Breach Case Kicks the Can Down the Road

4 weeks ago

When a company that collected your personal data negligently fails to secure it, you should have accountability and relief—including standing to sue. 

EFF and our friends at the Electronic Privacy Information Center filed an amicus brief in late November pointing this out to the U.S. Court of Appeals for the Fourth Circuit in a case arising from the 130 million consumer records stolen from Marriott in 2018. We detailed the science and evidence demonstrating that people impacted by such data breaches run the risk of identity theft, ransomware attacks, and increased spam, along with corresponding increased anxiety, depression, and other psychological injuries. 

The Fourth Circuit’s decision last week didn’t address our arguments; instead it just kicked the can down the road. The appeals court found that the trial court had not properly considered whether consumers had waived their rights to bring a class action by joining Marriott’s loyalty programs—those programs that advertise huge benefits to loyal customers but bury the costs you pay (like a decreased ability to sue) in fine print that no one reads. 

We strongly disagree with the suggestion that any Marriott customer meaningfully agreed to waive a class action here. Few if any customers read a hotel loyalty program’s fine-print terms and conditions, much less knowingly waive their right to bring a class action if the company negligently lets their data fall into the hands of thieves. We hope that on remand, the trial court will reject Marriott’s poorly taken waiver argument, and we can get back to trying to ensure that consumers have real accountability when companies fail to protect the data they increasingly extract from us.  

This decision highlights one of EFF’s criticisms of the American Data Privacy and Protection Act proposed last year. One of the reasons we did not support the bill was that it failed to override bogus waivers such as this. Privacy laws need to be strong, not full of holes that leave us unprotected because of a single click or some tiny fine print that no one reads. We need a strong data privacy law that prohibits waivers and mandatory arbitration requirements that let companies sidestep users’ basic legal rights.  

We’ll keep watching this important case and standing up for your rights both in the courts and in Congress.  

Cindy Cohn

Proposed UN Cybercrime Treaty Threatens to be an Expansive Global Surveillance Pact

4 weeks 2 days ago

This is Part V in EFF’s ongoing series about the proposed UN Cybercrime Convention. Read Part I for a quick snapshot of the ins and outs of the zero draft; Part II for a deep dive on Chapter IV dealing with domestic surveillance powers; Part III for a deep dive on Chapter V regarding international cooperation: the historical context, the zero draft's approach, scope of cooperation, and protection of personal data, and Part IV, which deals with the criminalization of security research.

In the heart of New York City, a watershed moment for protecting users against unfettered government surveillance is unfolding at the sixth session of negotiations to formulate the UN Cybercrime Convention. Delegates from Member States have convened at UN Headquarters for talks this week and next that will shape the digital and fair trial rights of billions. EFF and our allies will be actively engaged throughout the talks, participating in lobbying efforts and delivering presentations. Despite repeated civil society objections, the zero draft of the Convention is looking less like a cybercrime treaty and more like an expansive global surveillance pact.

Over the next 10 days, more than 145 representatives of Member States of the United Nations will invest 60 hours in deliberations, aiming for consensus on most provisions. Focused parallel meetings, coined “informals,” will tackle the most contentious issues. These meetings are often closed to civil society and other multi-stakeholders, sidestepping important input from human and digital rights defenders about crucial interpretations of the draft treaty text. The outcome of these discussions could potentially shape the most controversial treaty powers and definitions, underscoring the urgency for multi-stakeholder observation. It is critical that states allow external observers to participate in these informals over the next two weeks.

The following articles in the zero draft, released in June, are the focus of our main concerns about Chapter V, which deals with cross-border surveillance and the extent to which Member States must assist each other and collaborate in surveillance on each other's behalf. We will also address other articles of the proposed treaty (Articles 24 and 17) where they are relevant to the international cooperation chapter.

Article 24: Conditions and safeguards should be consistently applied throughout the international cooperation chapter. An earlier draft recognized the importance of conditions and safeguards across both criminal procedural measures and international cooperation chapters.  While Article 24, which requires human rights safeguards such as respect for the principle of proportionality and the need for judicial review, could be bolstered, it's an important provision. But the zero draft curiously restricts the scope of Article 24 to just criminal procedural measures, meaning that international cooperation is not subject to its important conditions and safeguards at all. Article 36, which deals with protection of personal data, imposes some additional restrictions on the processing of personal data, but does not include these central requirements.

This is particularly problematic when States’ existing domestic laws and practices are inconsistent with international human rights law, as is too often the case. Given the sensitivity of international cooperation for surveillance and the looming risks of human rights abuses, this lack of safeguards is perplexing. It's rather ironic: States are bound to uphold Article 24 at home, yet there's hesitancy to ensure the same minimum level of protections in international collaborations. Surely, for full cooperation among Member States, robust minimum safeguards in the cross border spying chapter should be non-negotiable.

Article 2: Definitions matter. States must prioritize clarity in their definitions. The zero draft of the convention uses broad terms for the kinds of information States can disclose or field requests for on each other's behalf. Though Article 2 (“Definitions”) elaborates on some categories like “traffic data,” “content,” “subscriber information,” and “personal data,” it is silent on key terms such as “data” and “information”—potentially paving the way for misuse of sensitive data or indiscriminate access to massive databases (see Article 19’s analysis).[1] Without explicit clarification, there's room to interpret these terms as including personal data, leading to the use of sensitive personal data without safeguards.

For the sake of clarity and to preclude post-treaty disputes, it's imperative the draft convention stop using broad language when referring to “data” or “information” in its provisions. It needs unambiguous definitions for the key terms it uses, and should ensure that it does not authorize any processing of personal data masked as “information” or “data” without adequate safeguards.

Interestingly, while “personal data” has a clear definition in Article 2, the international policy arena still grapples with how to categorize and protect inferences drawn from personal data. The ambiguity looms: could “data” or “information” cover inferences arising from biometric sources, traffic data patterns, or even direct content from communications held in large databases? And what level of protection would such data receive?

These intricate nuances haven't made their way into the plenary discussions. And it’s not clear whether such pivotal dialogues will take place behind closed doors, sidestepping public scrutiny, or simply be settled among a few States, leaving the others in the dark. The onus is on the drafters to inject clarity into these definitions. Whether the omission is by error or design, bringing these terms into the public discourse for comprehensive definition is not only timely but a matter of urgency.

Article 35: Still no clear consensus over the scope of the international cooperation chapter. The zero draft appears primarily focused on the cybercrimes outlined in Articles 6-16. Yet Article 35 broadens the scope of international cooperation to encompass electronic evidence relating to any current or future serious crime. To ensure this Convention remains focused on investigations of cybercrimes and does not become a vehicle for investigating any and all offenses, Article 35 should be limited to global cooperation on the offenses set out in Articles 6-16.

Article 35: Mandatory dual criminality must be the rule for cross-border cooperation. Article 35 treats the principle of dual criminality—where an offense must be a crime in both cooperating nations before investigative cooperation is required—as optional. This principle is vital to safeguard freedoms and ensure countries are not compelled to carry out intrusive investigations of activities that are not even crimes in their jurisdiction. The zero draft should make dual criminality mandatory to uphold international human rights standards.

Article 35: Authorizing bulk cross-border surveillance? Because Article 35 is not limited to “specific” investigations or proceedings, it also opens the door to indiscriminate or bulk information sharing for data-mining purposes. Limiting Article 35 to “specific” investigations[2] would ensure that police powers are used only in individual cases concerning particular suspects rather than authorizing generalized information sharing.

Article 36(1): Protection of personal data. Article 36(1) of the zero draft details conditions for international data transfers. Its current wording is ambiguous, suggesting compliance with "applicable international law." This should be explicitly refined to "international human rights law" to emphasize the importance of human rights-based data protection standards. Protection of personal data is a human right and it is sometimes wrongly addressed more permissively in trade law. In addition, together with Privacy International, we called for this article to be further amended to ensure that the principles of lawful and fair processing, transparency, purpose limitation, data minimization, accuracy, storage limitation, integrity and confidentiality, and accountability be included. Such data protection principles should be aligned with existing international human rights standards.

Article 40(1): Narrow the scope of mutual legal assistance (MLA). Article 40(1) of the zero draft clearly encompasses the core cybercrimes defined in Articles 6-16 of the treaty. However, it extends beyond that to all serious crimes—defined as those punishable by a maximum deprivation of liberty of at least four years. While Article 40(1) calls on State Parties to afford one another “the widest measure of assistance,” it's imperative that the extent of this cooperation not include mandated cooperation with investigations under draconian domestic provisions.

Articles 40(3) and (4): Overarching power in MLA. Article 40(3) permits States to engage in a series of intrusive surveillance actions for one another, including real-time traffic data collection, content data interception, and actions related to “data” in a computer system, including searching, seizing, and disclosure. Even if “information” is defined to exclude, say, personal data, requests for mutual legal assistance under Article 40(3)(h) may encompass purposes such as providing information or “evidentiary items” (a category at least as intrusive as “information,” however the latter is defined).

Moreover, Article 40(4) allows a State's competent authority to proactively share information linked to criminal matters without an initial request if it's perceived beneficial for another state's criminal processes. Such “information” (which is still undefined) likely includes personal data and could encompass outputs from wiretaps, device scans, or even indirect investigative leads. Nothing in the provision limits the scope or specifies targets for the information shared. The breadth of what can be shared, merely on the grounds of its potential utility to another country's law enforcement, demands stringent guidelines and checks to ensure the power isn't abused or misused.

Article 40(6) and (7): Why the MLA principles could be appealing to some States. While Article 40(6) respects existing MLAT obligations, States can opt for this draft treaty over an existing one per Article 40(7). This is concerning, as the provisions of this Convention provide fewer safeguards than many MLATs. Where States have explicitly chosen not to enter into MLA treaties with each other, Article 40(7) imposes the mechanisms set out in this Convention, effectively forcing those states to cooperate.

Article 40(8): Dual criminality in MLA should be mandatory. While Article 40(8) grants states the ability to decline assistance based on the absence of dual criminality, it simultaneously offers unchecked discretionary power for States to provide assistance, even if the activity in question isn't criminal within their own jurisdiction. This latitude not only undermines the dual criminality tenet but also risks States succumbing to external pressures, potentially assisting in investigations that conflict with their own legal and ethical standards. To ensure genuine international cooperation, the article should unequivocally mandate dual criminality and reduce subjective discretion in the provision of assistance.

Article 40(21): Grounds for refusal of MLA should be strengthened. Article 40(21) of the treaty prudently allows states to decline MLA under specific circumstances: if the request does not conform to the article's guidelines, if there's a potential compromise to the requested state's sovereignty, security, public order, or other primary interests, if domestic prohibitions hinder the execution of the requested action for similar offenses, or if accepting the request goes against the state's MLA principles. While these provisions are a step in the right direction, we strongly advocate for Article 40 to further empower states with the discretion to refuse assistance in cases involving "a political offense or an offense connected with a political offense." Additionally, refusal should be mandatory where executing the request might adversely affect "the protection of human rights or fundamental freedoms."

Article 41: Overreach and ambiguity of the expanded scope of the 24/7 network. Article 41's 24/7 network, aimed at providing “immediate assistance” for the core cybercrimes outlined in Articles 6 to 16, casts too wide a net by also allowing collection of evidence for any serious crime. This expansive scope raises concerns about States’ adherence to the dual criminality principle. Moreover, by broadly allowing for the collection and sharing of electronic evidence across a wide range of offenses, Article 41 bypasses the safeguards and procedures specified in Article 24 and the general MLA safeguards of Article 40.

This could lead to situations where the central authority's designated oversight and control are compromised or completely bypassed. Where central authorities are relied upon to process expedited requests, this could overburden State Parties' resources, heightening the risk of misuse. Finally, for the treaty’s efficacy and the confidence of State Parties, it's essential to refine the article’s scope by delineating the designated contact's duties more clearly, limiting them to providing technical advice, assisting in identifying potential offenses, facilitating swift responses to ongoing crimes, and bolstering encryption and authentication measures to ward off potential threats. The 24/7 network should not handle the collection, preservation, or sharing of evidence or any personal data; such exchanges should proceed in accordance with Article 40 and be subject to the safeguards in Article 24.

Articles 45-46: Remove collection of traffic data and interception of content from MLA. We called for the deletion of Articles 45 and 46, on real-time collection of traffic data and interception of content. These are some of the most intrusive surveillance powers, and it’s especially troubling to make them available to a foreign government on demand without mandating equally robust safeguards. We’ve called for similar powers to be removed from the domestic spying chapter (Articles 29 and 30) unless significant safeguards are applied, including prior judicial authorization, specificity, time limits, proportionality, transparency, oversight, and effective redress. We similarly think these powers should not be made available in response to foreign government requests without comparably strong safeguards. 

Article 47: Lawless law enforcement cooperation. Article 47(1) ostensibly emphasizes close cooperation among State Parties, with the intention of enhancing law enforcement actions against the offenses specified within the treaty. By casting too wide a net, it allows cooperation on “offenses covered by this Convention” (which includes the infamous Article 17), and bypasses the need to apply the conditions and safeguards under Articles 23(1) and 24. Limiting Article 47(1) so that it applies only to cooperation on offenses set out in the Convention, and ensuring that critical safeguards and limitations apply, is a crucial condition for any law enforcement cooperation.

Articles 47(1)(b) and (f): Delete provisions that bypass safeguards embedded in the MLA. Together with Privacy International, we’ve called for the deletion of Articles 47(1)(b) and (f), aiming to prevent State Parties from sharing personal data in ways that bypass the safeguards embedded in the MLA. States should not leverage the treaty to authorize or require personal-information sharing outside the bounds of existing MLA safeguards and the established MLA vetting mechanism: the central authority. Such safeguards should not be removed without providing comparable protections and limitations.

Removing these safeguards invites misuse of the MLA framework for transnational repression. Article 24 does not apply to the international cooperation chapter, and the current wording of Article 36 does not specify any minimum data protection principles, so the protection afforded to sharing of personal data under this article is insufficient. Moreover, the data in question could reveal the location of an asylum seeker or political dissident, inviting misuse of the criminal MLA framework.

Article 47(1)(c): Delete artificial intelligence, inferences, databases—fuzzy terms have far-reaching implications. Article 47(1)(c) requires State Parties to engage in close cooperation, specifically the provision of "necessary items or data for analytical or investigative purposes" when deemed suitable. Notably, as explained in our analysis of Chapter IV, this provision lacks precision: once again, it isn't linked to specific investigations or law enforcement proceedings. Additionally, nothing in this provision excludes the sharing of "personal data," including biometric data, "traffic data," or other categories such as location data, which could lead to sharing intrusive data without a specific assistance request.

Moreover, the provision's complete lack of scope limitation or target specification can serve as an authorization for cross-border law enforcement sharing of massive biometric databases or artificial intelligence training datasets, as our ally Article 19 pointed out. The potential human rights implications of such unchecked data-sharing are enormous. Biometric data and facial and voice recognition systems have been used in various countries to identify, surveil, and persecute protesters, minorities, migrants, human rights defenders, journalists, and opposition leaders. The Convention should not become an opportunity to extend these dangerous patterns beyond borders. Article 47(1)(c) therefore raises concerns similar to Article 40(4), granting a State the ability to share "information relating to criminal matters" without necessitating a formal request.

Conclusion: Broadly scoped, ambiguous, and nonspecific international cooperation measures with few conditions and safeguards are simply a recipe for disaster that puts basic privacy and free expression rights at risk. As it stands, the treaty’s international cooperation chapter—or, as we call it, the cross-border spying chapter—sorely lacks the robust safeguards and personal data protections needed to fill the big holes in the text that can easily be exploited when governments want to go after journalists, human rights defenders, and dissidents.

[1] See Evidence, personal data, “data stored by means of a computer system” [40(3)(d)], “information” [40(3)(h), 47], “expert evaluations” [40(3)(h)], “information relating to criminal matters” [40(4)], “government records, documents, or information” [40(30)(b)].
[2] Budapest Convention, explanatory report, para 182: "As the powers and procedures in this Section are for the purpose of specific criminal investigations or proceedings (Article 14), production orders are to be used in individual cases concerning, usually, particular subscribers. For example, on the basis of the provision of a particular name mentioned in the production order, a particular associated telephone number or e-mail address may be requested. On the basis of a particular telephone number or e-mail address, the name and address of the subscriber concerned may be ordered. The provision does not authorize Parties to issue a legal order to disclose indiscriminate amounts of the service provider’s subscriber information about groups of subscribers e.g. for the purpose of data-mining."


Katitza Rodriguez

EFF Benefit Poker Tournament at DEF CON 31

4 weeks 2 days ago

August marked the return of DEF CON, the world’s largest computer hacking conference. That means it was also the return of the EFF Benefit Poker Tournament, an official DC31 Contest hosted by security expert and EFF advisory board member Tarah Wheeler.

Fifty-one EFF supporters and friends played in the charity tournament on Friday, August 11 in the Horseshoe Poker Room at the heart of the Las Vegas Strip.

Before the tournament, Tarah and her father, professional poker player Mike Wheeler, hosted a poker clinic to teach basic strategy to those new to the game. Rookie players learned how to raise preflop, not go all-in on a draw, and many more tips that helped them throughout the tournament.

Emcee Ohm-I kicked off this year’s tournament. The Seattle hacker and hip hop artist thanked everyone for coming, shared his experience playing poker on the N64, and announced that it was time to “Shuffle up and deal!"

Special guest, and last year’s emcee, Jen Easterly, dropped by to wish everyone good luck.

Celebrity players included not only Tarah and Mike Wheeler, but MalwareJake and Deviant Ollam. Each played with a bounty on their head – a special prize to go to the player that knocked them out of the tournament.

After the first hour of play, Brandon Perrodin knocked out Deviant, winning a Flipper Zero.

In the second hour, Kyle Chamberlin took the last of MalwareJake’s chips, winning a hat with a scrolling sign reporting “I pwned MalwareJake for charity,” and Erick Hammersmark had the honor of knocking out Tarah. In hour three, just before the final table, Matricii took down Mike Wheeler, winning a prize from Tarah: a donation from her to EFF made in his name.

After four hours of play, three players remained: Matricii, Matt Williams, and Patrick Ecord. Ecord, as the short stack, seemed destined for third place, but “a chip and a chair” became the refrain as he doubled up twice, outlasting Williams.

After trading blinds back and forth, Matricii and Ecord both went all-in. Ecord’s pair of sixes couldn’t hold up against Matricii’s queen-ten when the flop brought a ten and the river a queen.

Every player received a bronze challenge coin crafted by Tarah herself. Matricii, the tournament champion, also took home a solid silver challenge coin as well as the now-traditional jelly bean jar trophy.

It was an exciting afternoon of competition raising over $17,000 to support civil liberties and human rights online. We hope you join us next year as we continue to grow the tournament. Follow Tarah to make sure we have a chip and a chair for you at DEF CON 32.

More Poker Pics!

Thanks to the players for making this a fantastic tournament.

Daniel de Zeeuw

Vulnerability in Tencent’s Sogou Chinese Keyboard Can Leak Text Input in Real-Time

1 month ago

Security researchers at Citizen Lab discovered a number of cryptographic vulnerabilities in the Sogou Input Method keyboard software made by Tencent, the most popular input method in China. These vulnerabilities allow adversaries with a privileged network position (such as an ISP or anyone with access to upstream routers) to read the text a user inputs on a device in real-time as it's being typed. Users of the Sogou Keyboard are highly encouraged to upgrade to patched versions that fix this vulnerability:

  • Windows >= version 13.7
  • Android >= version 11.26
  • iOS >= version 11.25

The report shows the Windows and Android implementations were vulnerable to eavesdropping, while the iOS version wasn’t. Of particular note, Sogou Input Method has around 450 million monthly active users worldwide. It is used not only in China but also has a large userbase in the United States, Japan, and Taiwan. It is not known whether this vulnerability was previously discovered or exploited. However, given the level of network access and broad latitude afforded to state authorities within China, it’s possible that users of the keyboard (especially those located within China) may have had their private communications leaked to the Chinese state.

Home-rolled Cryptography Strikes Again

The researchers found that this vulnerability was due to the use of custom cryptography vulnerable to a padding oracle attack. Implementing cryptographic algorithms is an extremely precarious and exacting effort. Even when done relatively well, a side-channel attack can undo the basic guarantees these algorithms are meant to provide. Best practice dictates using well-vetted cryptographic libraries made available by the system, rather than code rolled on one’s own, to avoid these attacks and ensure the latest protections against known weaknesses. The flaws in this particular implementation had already been fixed in TLS implementations as of 2003.
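To make the failure mode concrete, here is a toy sketch of how a padding oracle leaks plaintext one byte at a time. The "cipher" below is a deliberately insecure stand-in (modular multiplication in CBC mode), and every name and constant is illustrative; none of this reflects Sogou's actual scheme.

```python
# Toy demonstration (NOT real crypto): a CBC construction over an
# insecure stand-in block cipher, used only to show how a padding
# oracle leaks plaintext byte by byte. All names are illustrative.
BLOCK = 16
MOD = 1 << (8 * BLOCK)
KEY = 0x2545F4914F6CDD1D | 1          # any odd value is invertible mod 2**128
KEY_INV = pow(KEY, -1, MOD)

def enc_block(b):
    return ((int.from_bytes(b, "big") * KEY) % MOD).to_bytes(BLOCK, "big")

def dec_block(b):
    return ((int.from_bytes(b, "big") * KEY_INV) % MOD).to_bytes(BLOCK, "big")

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def pad(m):                            # PKCS#7-style padding
    n = BLOCK - len(m) % BLOCK
    return m + bytes([n]) * n

def cbc_encrypt(m, iv):
    out, prev = [], iv
    for i in range(0, len(m), BLOCK):
        prev = enc_block(xor(m[i:i + BLOCK], prev))
        out.append(prev)
    return iv + b"".join(out)

def padding_oracle(ct):
    """The vulnerable server: reveals only whether the padding checked out."""
    iv, body = ct[:BLOCK], ct[BLOCK:]
    prev, pt = iv, b""
    for i in range(0, len(body), BLOCK):
        blk = body[i:i + BLOCK]
        pt += xor(dec_block(blk), prev)
        prev = blk
    n = pt[-1]
    return 1 <= n <= BLOCK and pt.endswith(bytes([n]) * n)

secret = pad(b"SENSITIVE INPUT!")      # what the user typed
ct = cbc_encrypt(secret, iv=b"\x00" * BLOCK)

# Attacker: tamper with the IV and watch the oracle's yes/no answer to
# recover the last byte of the first plaintext block.
iv, c1 = ct[:BLOCK], ct[BLOCK:2 * BLOCK]
recovered = None
for guess in range(256):
    forged = iv[:-1] + bytes([iv[-1] ^ guess ^ 0x01])
    if padding_oracle(forged + c1):
        recovered = bytes([guess])
        break
print("recovered plaintext byte:", recovered)  # b'!'
```

The attacker never sees a key or any plaintext directly; at most 256 yes/no answers per byte suffice, which is why mature TLS implementations went to great lengths to make padding failures indistinguishable.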

We applaud the meticulous cryptanalysis and reverse-engineering work done by the security researchers Jeffrey Knockel, Zoë Reichert, and Mona Wang (who formerly worked at EFF). By bringing these vulnerabilities to light, public-interest analysts serve as a bulwark against authorities secretly hoarding vulnerabilities and deploying them as spying tools that invade the privacy of us all. Only by responsibly disclosing and publicizing these flaws can they be fixed, and can the general public make informed decisions about what software they wish to use in the future.

Bill Budington

Media Briefing: As UN Cybercrime Treaty Negotiations Enter Final Phase, Time is Running Short to Bolster Human Rights Protections

1 month ago
Draft Text Enhances Government Surveillance Across Borders but Offers Weak Checks and Balances

New York—On Wednesday, August 23, at 1:30 pm Eastern Time (10:30 am Pacific Time) experts from Electronic Frontier Foundation (EFF), Human Rights Watch, and four international allies will brief reporters about critical flaws in the draft UN Cybercrime Treaty that threaten human rights.

The treaty, under negotiation by UN Member States for more than a year, is intended to foster international cooperation against cybercrime. It will facilitate the rewriting of criminal laws around the world, potentially expanding the criminalization of online speech and cross-border surveillance by law enforcement.

Without strong human rights safeguards, the draft treaty could severely undermine the privacy, freedom of expression, and other fundamental rights of millions of people, in particular journalists, activists, and persons and groups facing discrimination and marginalization.

Speakers at the briefing include experts on human and digital rights who are participating in treaty negotiations as observers. They will highlight key concerns emerging in the first week of the treaty’s sixth negotiating session, where the draft text will be reviewed. The session is scheduled at the UN in New York from August 21 through September 1.

The briefing will be livestreamed from New York. Reporters are invited to attend in person or participate online to ask questions. Registration details for online and in-person participation are below.

Media Briefing on critical human rights issues at stake in the draft UN Cybercrime Treaty.
To join the news conference remotely, please register from the following link to receive the webinar ID and password:
Media accreditation for members of the press attending in person can be obtained here: 

Deborah Brown, Senior Researcher and Advocate on Technology and Rights, Human Rights Watch
Raman Jit Singh Chima, Global Cybersecurity Lead and Senior International Counsel, Access Now
Victor Kapiyo, Lawyer and Human Rights Defender, Kenya ICT Action Network
Katitza Rodriguez, Policy Director for Global Privacy, Electronic Frontier Foundation
Carey Shenkman, Human Rights Attorney, Article 19
Ioannis Kouvakas, Senior Legal Officer and Assistant General Counsel, Privacy International

Wednesday, August 23, at 1:30 pm EDT / 10:30 am PDT / 17:30 GMT

United Nations Correspondents Association (UNCA) Briefing Room
United Nations Secretariat Building
405 East 42nd Street
Room S-308
New York, NY 10017

For a UN Cybercrime Treaty timeline:

For more about the UN Cybercrime Treaty:

Contact:
Katitza Rodriguez, Policy Director for Global Privacy, Electronic Frontier Foundation
Deborah Brown, Senior Researcher and Advocate on Technology and Rights, Human Rights Watch
Natasha Schmidt, Article 19
Karen Gullo

Digital Rights Updates with EFFector 35.10

1 month ago

Need to catch up on the latest in the digital freedoms movement? EFF has you covered with our EFFector newsletter, featuring updates, upcoming events, and more! Our newest issue covers work around various censorship bills like KOSA, the illegal spying law Section 702, and features our thoughts about surveillance and self-driving cars, and much more.

Learn more about the latest happenings by reading the full newsletter here, or you can listen to the audio version below!

Listen on YouTube

EFFector 35.10 | Shielding Us All From Prying Eyes

Make sure you never miss an issue by signing up by email to receive EFFector as soon as it's posted! Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero

Portland's TA3M: Expanding the Scope of Their Work in PDX

1 month ago

Techno-Activism 3rd Mondays (TA3M) is an informal meet-up designed to connect software creators and activists who are interested in issues like censorship, surveillance, and open technology. Portland’s TA3M continues to focus on educational events and recently expanded that focus to include privacy, security, and sometimes other tech-related topics. Here, EFF speaks with Electronic Frontier Alliance member Portland’s TA3M about their work, lessons learned, and what’s on the horizon.

What have been some of the issues you've concentrated on and what were some of your early successes?

Over the years, we’ve tried to do a mix of topics, helping people to understand local policies that might affect them, ways to protect their personal information, and issues related to new technologies that were coming into wider use at the time. We’ve hosted events about face recognition, gunshot detection technologies, surveillance technologies, ordinances, consumer privacy, artificial intelligence, and how to request public records. We’ve also brought in government representatives to talk about their privacy-related work.

In the BeforeTimes, we had many great privacy-related talks, including a few with privacy all-stars, like Clare Garvie and Kashmir Hill, and our biggest event ever was an in-person talk by Cyrus Farivar back in November 2019. He spoke about several legal cases and how they’ve impacted our privacy rights. One of our more frequent guests, though, is Hector Dominguez of Smart City PDX. He periodically shares his work at the City of Portland with us, detailing their efforts to increase privacy protections and transparency in the city.

Can you tell us about some of your current projects?

Most recently, we’ve had events concerning police accountability, public banking, and the City of Portland’s effort to create a surveillance technology inventory.

And, within the past year, we’ve also hosted events related to gunshot detection systems, online voting, and [international] trade agreements. Trade agreements, in particular, probably seem not very privacy-related, but the international data-sharing policies established in such pacts can result in a lowest-common-denominator outcome, where our personal information is subject to minimal standards set by the countries with the weakest privacy protections in place.

What has your group learned in your popular education work?

I’ve heard EFF staff stress a number of times that having regular meetings is important for keeping people engaged, and I think that’s true! Conveniently, our name—Techno-Activism 3rd Mondays—advertises our event dates automatically. While, due to scheduling, weather, or unforeseen issues, we occasionally have events on other days, we generally try to set our events for the 3rd Monday, so people can plan for them.

We’ve also learned the importance of meeting people where they are. Even though we’ve done more educational awareness types of events than official training sessions thus far, we still have attendees with different experiences and understandings of the various topics, so we try to make all of our events accessible and welcoming to people at all levels of knowledge.

What connects you personally to the work?

I really care about privacy and what happens when our privacy is lost. The fact that more and more of our personal data is collected, shared, and monetized without our consent, or even our knowledge, is very troubling to me. I don’t want to live in a world where every action and thought we have is collected and stored and can then be used in ways that align with someone else’s interests, rather than our own—whether those interests belong to governments, corporations, or malicious actors who wish to take advantage of our personal preferences and quirks and our human limitations.

I don’t want the world to be a rigged game controlled by the rich and powerful who use our personal information as an opportunity for their further enrichment.

What are the technological challenges for Portland’s TA3M?

Hosting meetings has been very challenging at times! NW Academy had been hosting our in-person meetings at the school for at least a few years, but when the pandemic arrived, we had to switch to online meetings. The school continued to host us through 2021, but we had to find a new option the following year, both because the school was not renewing their Zoom subscription and because we wanted a more privacy-focused option.

These days, we’re using BigBlueButton, and it seems to be working pretty well for us.

What's next on the horizon for Portland’s TA3M? Are there things that your group has wished to prioritize in the past and you're now putting back on the agenda?

We would love to do some hands-on workshops! We had been planning to do one before the pandemic arrived, and we’re still hoping to once we start meeting in person again. We’d like to give people a chance to interact with various technologies rather than only have events where attendees mostly listen. To be sure, we’ve had many interesting and very knowledgeable speakers; we just want to mix things up a bit with some hands-on events as well. 

We’ve also not had any in-person meetings since March 2020. We had been looking to at least get people together for a social event, and we’re hoping to finally do that this fall with an outdoor privacy happy hour. While I try to get ideas for topics and speakers from the group, I’m also always on the look-out for people doing work related to our areas of focus.

Has the work of Portland’s TA3M led to other work?

Portland’s TA3M has mainly focused on education, but we’re very fortunate to have a few other EFA groups here in Portland as well—Personal Telco Project, Encode Justice Oregon, and PDX Privacy—because, by collaborating, we can cover more issue areas than we could just working alone.

A while back, Russell Senior, from Personal Telco Project, spoke with us about their municipal broadband initiative, and we’re now engaging with Encode Justice Oregon about doing a presentation at one of our meetings this fall. We’ve also co-hosted several events with PDX Privacy. They’re more active in the policy-related privacy happenings in the city, and they’ve worked with us to put together panels and other events related to technologies under consideration by the city, such as face recognition and gunshot detection systems.

How would someone contact you and do opportunities exist for people to get involved? 

Yes! We’re always looking for ideas, speakers, and help in organizing our events.

Most of our events so far have been topics that I found to be interesting, urgent, or concerning, and I thought (and hoped!) other people might also care. So, we’ve sought out people who could share knowledge about those topics. If someone has privacy-related work they want to share, we’re interested in learning more and potentially having them share their work with our group. We don’t want to promote for-profit products or anything like that, but topics that illustrate how tracking and surveillance work, or tips for circumventing unwanted data collection—things like that—we’d love for people to connect with us.

To get in touch with us please email and our meetup page is:

Christopher Vines

The Industry Discussion About Standards For Bluetooth-Enabled Physical Trackers Is Finally Getting Started

1 month 1 week ago

Bluetooth-enabled location trackers such as Tiles and AirTags aren’t just a helpful way to find missing luggage or a misplaced wallet—they can also be easily slipped surreptitiously into a bag or car, allowing stalkers and abusers unprecedented access to a person’s location without their knowledge. At EFF, we have been sounding the alarm about this threat to people, especially survivors of domestic abuse, for a long time.  

Now, there’s finally an industry discussion happening about the best methods of preventing unwanted trackers. The most effective way to prevent physical trackers from being used as stalking devices against most people is through tracking alerts. If a physical tracker is out of range of the phone that it is paired to, and it’s moving with you, you should get an alert about it. 
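That alerting logic can be sketched as a simple heuristic. Everything below is a hypothetical illustration (the record layout, thresholds, and function names are our own inventions), not the actual detection algorithm shipped on iOS or Android:

```python
# A simplified sketch of the "is this tracker following me?" heuristic.
# The Sighting record, thresholds, and logic are hypothetical.
from dataclasses import dataclass

@dataclass
class Sighting:
    tracker_id: str          # the identifier the tag broadcast
    timestamp: float         # seconds since the tag was first seen
    location: tuple          # coarse (lat, lon)

def should_alert(sightings, min_duration=30 * 60, min_distinct_places=3):
    """Alert when one unfamiliar identifier has traveled with the user:
    seen for at least min_duration seconds across several distinct places."""
    if len(sightings) < 2:
        return False
    span = sightings[-1].timestamp - sightings[0].timestamp
    places = {s.location for s in sightings}
    return span >= min_duration and len(places) >= min_distinct_places

# Four sightings of the same identifier, ten minutes apart, in four places.
trail = [Sighting("d4:3a:2e", i * 600.0, loc)
         for i, loc in enumerate([(45.52, -122.68), (45.53, -122.66),
                                  (45.55, -122.65), (45.57, -122.63)])]
print(should_alert(trail))  # True
```

Real implementations have to balance such thresholds carefully: too sensitive, and users drown in false alarms from fellow commuters’ trackers; too lax, and a stalker’s tag goes unnoticed.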

Apple rolled out AirTags with some rudimentary anti-stalking mitigations: a tracking alert that worked for iPhone users, and a beep from the AirTag that was worryingly easy to muffle or disable and that did not go off until the AirTag had been out of range of its paired phone for three days. Since then, Apple has improved its mitigations by cutting down the time until the beep goes off and by putting out an Android app that can be used to scan for unwanted AirTags in the vicinity. In the meantime, Tile took one step forward by adding tracker detection to its app, and then one step back by creating an “anti-theft mode” that turned that detection off. As of right now, none of the other physical trackers on the market have any anti-stalking mitigations at all.

Recently, Google announced that it was rolling out Bluetooth tracking detection for Android. The new capability only detects AirTags at the moment, but it’s still a major step forward for people who may be followed by physical trackers. Android users will no longer have to download an app and run a scan to detect unwanted AirTags—it will all happen in the background.  

Detecting AirTags is just the beginning. What about every other Bluetooth-enabled physical tracker on the market? Google and Apple have proposed a solution: a standard for all physical tracker manufacturers to agree on, which would make their products detectable by default on iOS and Android phones. This standard could be great news, resulting in increased safety for an untold number of vulnerable people. But the details matter. There are hard questions to answer, and the companies’ new joint industry specification, which dictates how Bluetooth tracker detection should work consistently across platforms, needs refinement. That is the purpose of the Internet Engineering Task Force (IETF) draft on Detection of Unwanted Location Trackers (DULT).

IETF Event 

The Internet Engineering Task Force (IETF) is a body that discusses, drafts, and publishes protocols that largely dictate how the internet functions. In July, the IETF convened for a week, and among the many discussions was the creation of the “Detection of Unwanted Location Trackers” (DULT) draft. The event brought together phone and device manufacturers, EFF, and other technologists who had weighed in on the IETF mailing list.

You can get the full meeting transcript here. There are a few points that we think are particularly important to keep in mind for the future of this proposed standard:

Privacy & Protection of People over Property 

It is impossible to make an anti-theft device that does not alert the thief that they are being tracked without also making a perfect tool for stalking. Apple is careful not to advertise the AirTag as an anti-theft device for this reason, but other makers of physical trackers such as Tile explicitly bring up anti-theft as a use case for their product. If physical trackers are going to have effective anti-stalking mitigations, then manufacturers need to give up on the anti-theft use case predicated on unknowingly tracking the thief and sneaking up on them. You cannot have both. EFF believes that people are more important than property, and we hope that the companies will come to agree.  In any use cases that get defined in the specification, the security of those who do not want to be unknowingly tracked should be prioritized over the ability to track the location of stolen items.  

Additionally, those who deploy unwanted physical trackers should be held accountable. Manufacturers should store the bare minimum of information about the phone or account that the tracker is paired to, and they should store it for a time and in a manner consistent with their data retention policies. The information should be made available to others only in response to a valid court order.

Any standard should also protect the privacy of the owners of physical trackers. We are concerned that having physical trackers rotate their identifiers only once a day will provide insufficient defense against a sophisticated tracking network.  Weak privacy protections for a device that is frequently attached to keys and wallets could be used for location tracking by unscrupulous governments, law enforcement, and private actors.   
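A toy calculation shows why rotation speed matters: sightings that share an identifier can be linked into a single trail, so the rotation period bounds how long a passive observer network can follow one tag. The scheme and numbers here are illustrative assumptions, not any vendor’s actual design:

```python
# Hypothetical illustration: how many identifier "epochs" one day of
# sightings spans. Fewer epochs means longer linkable movement trails
# for a network of passive Bluetooth receivers.
def epochs_covered(sighting_hours, rotation_hours):
    """Count distinct rotation epochs touched by a list of sighting times."""
    return len({int(h // rotation_hours) for h in sighting_hours})

day_of_sightings = [0.5, 3.2, 8.0, 13.7, 19.9, 23.5]   # hours into the day

print(epochs_covered(day_of_sightings, 24))  # 1: daily rotation links the whole day
print(epochs_covered(day_of_sightings, 2))   # 6: faster rotation fragments the trail
```

With once-a-day rotation, every receiver that hears the tag that day sees the same identifier, so a sufficiently dense network can reconstruct the owner’s full daily movements.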

Fair Doesn’t Mean Free 

Apple has listed several patent disclosures that the company claims apply to this specification. That’s a way of notifying competitors, and the public, that Apple believes it owns patents that cover the use of this technology. That means Apple could, in the future, choose to charge patent royalties to anyone using this technology, or file a patent infringement lawsuit against them.  

The decision to assert patents over this specification is unnecessary and unfortunate. The public will suffer a significant loss if Apple asserts that it has patent rights to what should be an open, free repository of information meant to help companies and everyday people prevent stalking and malicious tracking.  Apple could threaten or sue people who use agreed-upon technology to prevent unwanted tracking.  

Apple stands alone in its insistence that it may use intellectual property rights to threaten people with patent lawsuits, or demand fees, for using privacy-protecting technology. The IETF convening included Samsung, Google, Mozilla, and many other patent-owning entities, all of whom chose not to engage in this type of threatening behavior.  

Apple’s decision to bring patent rights into this conversation is disappointing. The company should withdraw its patent disclosures and make a public statement that it won’t make intellectual property claims against companies or users who don’t want to be surreptitiously tracked.  

The technology required for Detecting Unwanted Location Trackers can, and should, be free to all.  

Encryption in the Light 

One of the most curious parts of the specification is the portion that addresses the “proprietary payload.” Since this involves Bluetooth, a short-range wireless technology, the draft addresses methods of communication between the Bluetooth location trackers and the networks they are attached to. Communication and interoperability of location trackers are left to individual company implementation in the proprietary payload. For example, both Apple and Google have proprietary ways to accomplish secure, encrypted communication between their network and location trackers. However, we would like to see a more open consensus on how this is accomplished and avoid industry fracture on something like secure communication for location trackers. 

As we participate in the shaping of this draft into a standard, we hope to see more thoughtful discussions like this occur before new products get introduced that could endanger people. While we can’t turn back time, everyone involved in the location tracker business has a responsibility to safeguard people, not just their lost keys.


Alexis Hancock

Jordan's King Should Reject the Country's Draft Cybercrime Law

1 month 1 week ago

Jordan’s parliament recently passed a new cybercrime law that will severely restrict individual human rights across the country. With the law now heading to Jordan’s Senate and Jordan’s King Abdullah II for final approval, EFF and 18 other civil society organizations have written to the King of Jordan urging the rejection of the country’s draft 2023 Cybercrime Law. 

The law was passed hastily and without sufficient examination of its legal aspects, social implications, and impact on human rights. More specifically, it imposes penalties of imprisonment and hefty fines for vague and unspecified crimes such as ‘character assassination,’ ‘spreading false news,’ and ‘blasphemy.’

The draft cybercrime law also grants unrestricted authority to the public prosecutor and the executive authority to block social media platforms and issue orders to control their content without the need for a judicial decision—limiting access to specific platforms in Jordan. Additionally, the law imposes restrictions on encryption and anonymity in digital communications, preventing individuals from safeguarding their right to freedom of opinion and expression and their right to privacy.

We urge the King of Jordan to reject the 2023 Cybercrime Law until there is sufficient consultation on its provisions with individuals, civil society, and political parties to ensure its compliance with human rights and address the existing shortcomings.

The letter continues:

The law in its current form—with its loosely-defined and open to interpretation terminology—will inevitably become a tool for prosecuting innocent individuals for their online speech.

We believe that many provisions of the law allow for unjust or unnecessary pre-trial detention, and provide no guarantees for the rights of the affected individuals. This constitutes a violation of Article 9 of the International Covenant on Civil and Political Rights, which states that “Anyone who has been the victim of unlawful arrest or detention shall have an enforceable right to compensation.”

Paige Collings

Dissecting the UN Cybercrime Convention’s Threat to Coders’ Rights at DEFCON

1 month 1 week ago

This is Part IV in EFF’s ongoing series about the proposed UN Cybercrime Convention. Read Part I for a quick snapshot of the ins and outs of the zero draft; Part II for a deep dive on Chapter IV dealing with domestic surveillance powers; and Part III for a deep dive on Chapter V regarding international cooperation: the historical context, the zero draft's approach, scope of cooperation, and protection of personal data.

The proposed UN Cybercrime Convention could shatter security and harm political and social activists, journalists, security researchers, whistleblowers, and millions more around the world for decades to come, we told a packed house at DEFCON in Las Vegas on Thursday, but it’s not too late to stop this bad treaty from being adopted.

Delegations from Member States as well as observers from civil society will convene August 21 at UN Headquarters in New York City for a two-week negotiation session on the convention’s “zero draft.” The zero draft is the first full text, the result of State-led negotiations that began in February 2022. EFF will be there again this month to lobby Member States and provide expert opinion to ensure the protection of your rights.  If the Member States can’t reach total consensus on the text, it could go to a vote by the Member State governments in which a two-thirds majority would be required for adoption. A concluding session is scheduled for early next year in New York City.

At DEFCON, we highlighted the foremost dangers posed by the zero draft, and the direction in which negotiations seem to be headed. The proposed treaty features five chapters: criminalization, the categorization of acts deemed crimes under the treaty; domestic and cross-border spying powers, that is, the powers and limits of surveillance conducted both within a state’s borders and across international boundaries; and two additional chapters on technical cooperation and proactive measures.

Our DEFCON talk focused on the computer crimes that could potentially affect security researchers: those programmers and developers engaged in cutting-edge exploration of technology. Security and encryption researchers help build a safer future for all of us using digital technologies, but too many legitimate researchers face serious legal challenges that inhibit their work or prevent it entirely. EFF has long been fighting for coders’ rights in courtrooms, Congress, and global policy venues. It’s a cause close to our heart.

The section on criminalization, for example, is extremely worrisome. It references a list of specific crimes, borrowing language from the flawed Budapest Convention. If the final text gets consensus approval, it could obligate 194 Member States to incorporate these crimes into their domestic legislation. This would pave the way for nations to harmonize these core cybercrimes across the world and to easily assist each other in surveillance of targets related to these crimes. While these core cybercrimes have been debated for years in the U.S., leading to significant advancements, those advances can’t be automatically applied universally. Organizations like EFF have spent years advocating for these legal reforms, yet the capacity to influence a country's legal system varies widely among nations: in some places it’s impossible, in others litigation can be riskier or costlier. This is why our aim is to incorporate these safeguards into the draft treaty, so every country abiding by it must include them in its domestic legislation.

EFF and other organizations have urged Member States that this treaty’s scope be limited only to “core cybercrimes,” such as specific, technical attacks against computers, devices, and communications systems. But the zero draft is a veritable Swiss cheese of loopholes that would make a cybercrime of any crime that is committed with technology and is covered by any other treaty the country has ever acceded to (think drug trafficking, for example). This could potentially extend to even more obscure treaties or any treaty adopted in the future. Essentially, Article 17 could compel states to recast traditional crimes as cybercrimes. Applying physical-world legal frameworks to digital conduct is bad legislative practice that could create more harm than good. States may miss the nuance that’s needed to distinguish between digital and real-world crimes. Together with Article 19 and others, we are fighting to remove Article 17 from the proposed treaty.

The zero draft’s Article 22, regarding jurisdiction, is also concerning. It would let a nation claim authority over any of these core cybercrimes if they occur within its territory or aboard its vessels or aircraft, but also if the offense involves its nationals either as perpetrators or victims or is committed against the state itself. It’s a jurisdictional nightmare: once claimed against a security researcher, it could easily be twisted to repressive political ends by undemocratic governments, or force the application of domestic laws to corporations in ways that are disproportionate and arbitrary. We don't think the proposed treaty is the place to deal with jurisdiction.

Another concerning provision is Article 6. It mandates each nation to legislate and implement measures ensuring that unauthorized access (or access “without right”) to either a computer system or information and communication technologies—whichever term gets adopted—is a criminal act when done with intent. While the text grants nations the flexibility to decide when to criminalize unauthorized access, such as in cases of breached security measures or dishonest intentions, these specific conditions are left largely to individual states' judgment and definitions. The text fails to require that acts under Articles 6 and 10 cause serious harm or damage to qualify for action under the treaty. There's also no stipulation that the breached security measure must be effective. We strongly argue that the breach of an effective security measure should be a mandatory criterion for criminalization. This request is consistent with EFF’s domestic advocacy on the Computer Fraud and Abuse Act—avoiding arguments that bypassing an IP block is unauthorized access, for example—and with the several complaints EFF and our allies have made in our oral and written interventions during the negotiations in Vienna. The text also lacks any kind of public interest exception to protect whistleblowers, journalists, or security researchers.

Also, the vague concept of doing things “without right” could threaten to elevate private business disputes—based on rules and terms written by providers, not legislatures—to criminal activity. Again, this concern is consistent with EFF’s domestic advocacy. In the Supreme Court's Van Buren case, for example, the Justice Department argued that a police officer who used a law enforcement database for an unauthorized purpose engaged in unauthorized access because his use was not allowed under the applicable use policy. This is arguably "without right," as would be any case where the owner of the computer argues the user "should have known” their use was unauthorized, such as when the owner fails to protect an area of a website that is obviously supposed to be private and instead makes it publicly accessible. Consistent with our arguments, the Court rejected such an assumption and adopted a “gates up or down" approach: either you are entitled to access the information, or you are not. Absent similar safeguards, that rejected interpretation could criminalize journalism that involves using obscure but publicly available information online; the treaty must include safeguards against this.

The draft also makes a mess of dealing with tools and data used in security research or for other non-criminal, everyday purposes. For example, Article 10 of the zero draft discusses “misuse of security tools” that could conceivably apply when your mom shares her Netflix password with you: It’s a breach of the terms of service, which is access without right. So again, the treaty could be turning private disputes into criminal liability.

Perhaps worst of all, the zero draft threatens security itself.

The treaty’s vaguely-written Article 28, containing an expanded version of a Budapest Convention provision on compelled assistance, could be interpreted to order people who have knowledge or skills in breaking security systems to help law enforcement break those systems. This must be removed, lest the power be interpreted to include compelled disclosure of vulnerabilities and private keys. Security is hard enough; government mandates to help break security won’t make things better.

To be honest, many people around the world don’t spend a lot of time worrying about what the United Nations is up to. In this case, however, they definitely should: Treaties are binding upon signatory countries, who are obliged to comply. They become part of international law, and in the United States, treaties have the same force as federal law. Bad treaties are an end run around thoughtful, democratic domestic political processes.

We were gratified to see that thousands of DEFCON attendees got the message that the futures of hacking, cybersecurity, and human rights are at risk. With negotiations re-convening in just a few short days, it’s crucial that everyone lift their voices to ensure this proposed treaty doesn’t set human rights and tech law back by decades.

Katitza Rodriguez

Federal Judge Upholds Arizonans’ Right to Record the Police

1 month 1 week ago

The Arizona legislature last year passed a law (H.B. 2319 codified at A.R.S. § 13-3732) banning the video recording of police activity within eight feet of officers, making doing so a class 3 misdemeanor (which would allow for up to 30 days in jail). The law included some exceptions, such as for “a person who is the subject of police contact.”

A coalition of news organizations and the ACLU of Arizona sued state and county government officials in federal court arguing that the law was unconstitutional. EFF filed an amicus brief in support of the plaintiffs in the district court.  

We are happy to report that the court in the case, Arizona Broadcasters Association v. Mayes, recently entered a stipulated permanent injunction in favor of the plaintiffs, pursuant to a settlement between the parties. The order prevents Arizona government officials from enforcing the law.

The court’s order includes strong language in favor of the right to record the police. The order declares that the law violates the First Amendment because “there is a clearly established right to record law enforcement officers engaged in the exercise of their official duties in public places.” This conclusion reflects precedent from the Ninth Circuit (which includes Arizona), as well as that of several other circuits.

Importantly, the court also applied strict scrutiny, a high standard for government officials to meet if they want to regulate speech. The court’s order concluded that “the statute does not survive strict scrutiny because it is not narrowly tailored or necessary to prevent interference with police officers given other Arizona laws in effect.”

The court’s order reflects the reality that citizen recordings have been critical to police accountability, most notably in the case of George Floyd who was murdered by Minneapolis police officers in 2020. Unfortunately, other state legislatures are pursuing similar laws—which we urge governors to veto. We must preserve a critical tool for citizens to hold those with the ultimate power—the authority to engage in lethal force—accountable.

Sophia Cope

The U.S. Government Wants To Control Online Speech to “Protect Kids”

1 month 1 week ago

The Kids Online Safety Act (KOSA), a bill that allows for a wide range of government penalties for online speech, could soon be passed by Congress. If that happens, the access we have to information may be forever changed. KOSA will make state prosecutors and federal bureaucrats the final arbiters of online content moderation in the U.S. 

KOSA is fundamentally a censorship bill. Politicians are justifying it by harping on something we all know—that there’s content online that’s inappropriate for kids. But instead of letting tricky questions about what online content is appropriate at what age be decided by parents and families, politicians are stepping in to override us. 


say no to state-controlled internet in the U.S. 

The U.S. Government Will Ban “Depressing” Content

The heart of the KOSA bill is a “Duty of Care” that the government forces on every website, app, social network, message forum, and video game. (It’s Section 2 in the bill text.) KOSA will compel even the smallest online forums to take action against content that politicians believe will cause minors “anxiety” or “depression,” or encourage substance abuse, among other harms. 

Of course, almost any content could easily fit into these categories—in particular, truthful news about what’s going on in the world, including wars, gun violence, and climate change. Kids don’t need to fall into a complex wormhole of internet content to get anxious; they could see a newspaper on the breakfast table. 

Bad feelings are also not exclusive to internet media. For many decades, newspaper and magazine style and advertising sections have promoted unrealistic or unattainable visions of what we should own, what experiences we should have, and what our bodies should look like. 

Coping with this isn’t easy on anyone’s mental health, whether minors or adults. But we don’t expect news organizations to “prevent and mitigate” depression and anxiety, and we wouldn’t stand for the government suing newspapers for depressing kids. People have a right to access information—both news and opinion— in an open and democratic society. To “prevent and mitigate” self-destructive behaviors we have to look beyond the media, to systems that allow all humans to have self-respect, a healthy environment, and healthy relationships. 

KOSA Throws Out Good Speech With “Bad” 

KOSA will punish people for having online conversations. It empowers every state’s attorney general as well as the Federal Trade Commission (FTC) to file lawsuits against websites or apps that the government believes are failing to “prevent or mitigate” the list of bad things that could influence kids online. 

But it’s impossible to filter out this type of “harmful” content, and anyone who tries will be trapped. The news might depress us or make us anxious; but discussing it might also lead to positive solutions. Talking about depression—in an online forum, or with a therapist—might make a person more depressed. But it’s also a road toward healing. Similarly with discussions of substance abuse. We can’t fight against what we can’t talk about. 

People can have legitimate disagreements about what speech is good, bad, or “harmful.” And we do. What we don’t allow, under the First Amendment, is for the government to haul people into court for having difficult conversations. 

KOSA allows exactly this. The censorship in the bill is so obvious that it relies entirely on (justifiable) anger towards big tech companies to propel it forward. It exempts a considerable list of websites that don’t qualify as “covered platforms,” including schools, libraries, news organizations, and nonprofits. But you shouldn’t have to hide in a library to speak freely. 

There’s Only One Internet, and KOSA Will Censor All of It 

KOSA’s promise to leave the “adult” internet alone is an utterly empty one. There’s no real way to apply these rules only to minors without creating a special “kids site”—and even then, a website operator will have to worry about government action, because any site is likely to see some teenagers who lie about their age, or just stay quiet about it. EFF opposes mandatory age verification, which is a bad idea for many reasons, including the fact that it takes away adults’ right to talk to each other anonymously. 

Realistically, under KOSA, there’s no way to not censor. Websites that want to host serious discussions about mental health issues, sexuality, gender identity, substance abuse, or a host of other issues, will all have to beg minors to leave. If one kid gets through or just ignores the rules, the U.S. speech police will come knocking. 

KOSA Isn’t Different From Removing “Depressing” Books From The Library 

We all know there’s content online that’s harmful, and inappropriate for kids. Ideally, parents and families should decide what online content is appropriate for what age, and what is off-limits. Every day, parents and kids (and all adults) make decisions about what to view, and whether and how much to limit screen time. These personal and family decisions are incredibly important, but we’ve never allowed the government to set rules and punishments around them—until now, if KOSA passes. 

KOSA is also a direct attack on minors who want to learn about their world on their own terms, and to speak out about it. The right to youth speech and youth activism has been strongly protected in the U.S. for more than 50 years. Youth can talk to each other and adults in ways that make us uncomfortable, or some see as inappropriate. The Supreme Court protected a 14-year-old student’s right to free speech in 2021 when it allowed her to denigrate a school athletic team (a case that EFF weighed in on). These are the same rights that were protected in 1969, when the Supreme Court said that two kids, aged 16 and 13, couldn’t be punished for protesting the Vietnam War with black armbands. 

Now we have a group of lawmakers harnessing fears over kids’ safety to suggest that the internet is a completely different world. They propose a world in which depressing or socially difficult conversations—the curse words, the black armbands, the wars—will be whitewashed, in the name of kids’ mental health. 

Members of Congress aren’t qualified to tell people what to read—kids or adults, online or offline. We wouldn’t let attorneys general remove books from a school library because they could be depressing or promote substance abuse. We shouldn’t let them have such censorial power over the internet, either.  


TELL CONGRESS you won't accept internet censorship

A Senate committee recently passed amendments to the bill, which do not resolve the issues we’ve laid out. It isn’t too late to stop KOSA, and both adults and young people have been speaking out and will continue to do so. The bill hasn’t been voted on by the full Senate or considered in the House of Representatives. 

It’s time for Congress to listen to the vast majority of people who use a free and open internet to make their lives better for themselves, their kids, and their families. 

Some elected officials are clearly starting to get it. Rep. Maxwell Frost (D-FL), the youngest member of the House of Representatives at age 26, has said in an email to his constituents that he opposes the bill, and that it could be used to censor LGBTQ+ content or HIV prevention information. He adds: 

Proposals that involve filtering or identification requirements on sites, like the Kids Online Safety Act (KOSA), would have unintended consequences that undermine our goal of an enriching and educational Internet experience and far outweigh their benefits. They jeopardize kids’ privacy through increased data collection and promote inappropriate parental surveillance which can keep children experiencing domestic abuse from seeking help. 

We hope more members of Congress will understand that KOSA is a censorship bill that will put kids in danger, not help them. 


OPPOSE internet censorship

Joe Mullin

The Proposed Cybercrime Treaty's Approach to Cross-Border Spying

1 month 1 week ago

This is Part III of EFF’s ongoing series about the proposed UN Cybercrime Convention. Read Part I for a quick snapshot of the ins and outs of the zero draft. This post begins a deep dive on Chapter V regarding international cooperation: the historical context, the zero draft's approach, scope of cooperation, and protection of personal data. Part IV deals with the criminalization of security research.

The United Nations Headquarters in New York City is poised to become the epicenter of one of the most concerning global debates affecting human rights in the digital age. Starting August 21, delegates from around the world will gather for an intense two-week session to scrutinize the highly controversial “zero draft” of a UN Cybercrime Convention that could compel states to redefine their own criminal and surveillance laws on a global scale. 

Though the “zero draft,” the first negotiated text of the proposed convention, is deeply flawed, the principle that “nothing is agreed until everything is agreed” applies here. EFF will be attending the sixth session in New York to participate in those discussions as an observer. 

In previous discussions, we addressed concerns over ambiguous surveillance powers and inadequate safeguards. Now we will delve into the heavily debated chapter on international cooperation. For clarity and depth, our analysis of this chapter will span two posts. This first post covers the historical context of international cooperation mechanisms, the zero draft's approach in Chapter V, the scope of cooperation, and protection of personal data. In the next post, we'll continue our analysis of Chapter V of the zero draft, addressing the broad demands of mutual legal assistance, the pitfalls of unchecked and lawless data-sharing, and the challenges of rapid-response mechanisms and jurisdiction for human rights.

What’s going to happen in New York?

EFF holds observer status in these talks, a notable advancement in transparency compared to other treaty negotiations. This participation affords us significant access to the main discussions and numerous opportunities to interact with delegations. Yet asserting influence remains a challenge. With each country having one vote, achieving consensus among more than 140 countries is daunting. As observers, we can also voice our concerns weekly in front of all Member States. However, particularly contentious topics are moved to "informals"—sessions exclusive to Member States, excluding us and other stakeholders. Though delegations have the option to consult with us outside these sessions, the real-time exclusion remains a serious concern.

Member States aim for total consensus on the draft convention's text. If they don't reach it, the matter could go to a vote, where two-thirds of governments must agree for the treaty to be adopted. A consensus would demonstrate broader support for the convention's implementation. It remains uncertain whether a final agreement can be reached by the end of January or whether discussions will extend beyond that. A timeline of the proposed convention can be found here.

Historical context: A look at international cooperation mechanisms

Historically, Mutual Legal Assistance Treaties (MLATs) have served as the backbone for cross-border criminal investigations. This system allows police who need data stored abroad to obtain the data through the assistance of the nation that hosts the data. As we have repeatedly said, the MLAT system encourages international cooperation. It also advances privacy. When foreign police seek data stored in the U.S., the MLAT system requires them to adhere to the Fourth Amendment’s warrant requirements. And when U.S. police seek data stored abroad, it requires them to follow the privacy rules where the data is stored, which may include important “necessary and proportionate” standards. MLATs, which are often bilateral, have faced criticism for prolonged response times to data requests. Such delays usually stem from one nation’s lack of familiarity with another’s data access laws. Some nations might not have MLATs at all, with reluctance often rooted in concerns about inadequate human rights protections. While law enforcement complains that the MLAT system has become too slow, those concerns should be addressed with improved resources, training, and streamlining. Now, nations are looking to write new MLAT rules into the upcoming UN convention. Such rules will mostly affect nations that do not yet have an MLAT in place.

Some states have embraced other international agreements. The Council of Europe’s Second Additional Protocol to the Budapest Convention, which opened for signatures in 2022, offers streamlined cross-border investigative tools at the cost of weakening human rights safeguards. The Protocol’s tendency to sidestep traditional legal safeguards has drawn our criticism. We have many concerns with the Protocol (read here, here, here, here, including our proposed amendments), particularly how it lets any competent authority—including the police themselves—directly request subscriber information from a foreign company, bypassing any involvement of the other country’s government (or, for the most part, its legal system, which provides various conditions and safeguards). We have criticized the Protocol for undermining rights and safeguards through its direct cooperation mechanism; for its flawed understanding of subscriber information and its imbalance between mandatory law enforcement powers and dispensable, optional human rights safeguards; and for data protection safeguards that are weaker than other settled international standards. One major concern we have raised within this debate is that, when seeking to harmonize safeguards, there is a race to the bottom of human rights protections. As the UN Security Council's Counter-Terrorism Committee Executive Directorate (CTED) recently noted:

"Agreeing on a common standard across States will almost certainly lead to a lower standard than one that would be achieved by identifying a high universal standard and asking States to ‘level up.’ The concern is that, in order to address law enforcement’s jurisdictional problems, the substantive law will become weakened, giving law enforcement too-quick access with too-little due process. The trend towards universalization, in other words, could lead to a lowest common denominator in terms of due process."

Other countries have implemented local laws—such as the CLOUD Act in the U.S. and the e-Evidence package in the European Union—each with its own human rights problems (read here and here). Both expanded American and European law enforcement’s ability, respectively, to target and access people’s data across international borders. 

Another, more forceful approach that some countries have adopted to law enforcement cross-border access to data requires certain service providers to submit to the physical local jurisdiction of countries where they have a substantial number of users. While taking jurisdiction based on a significant presence of users could be seen as justified in other contexts, it becomes controversial when companies are forced to comply with arbitrary or disproportionate data demands, and even get penalized for resisting jurisdiction on human rights grounds, and when international rules for cross-border access already exist. Many laws have been passed requiring companies to “take all necessary measures” to accept local jurisdiction and comply with local laws that are inconsistent with international human rights law (read here about Turkey, India, Indonesia, and many others). This can involve forcing providers to physically store data about a nation’s residents within that country or to open a local office, making it easier for local authorities to access data or pressure staffers into complying with arbitrary or disproportionate requests. Draconian penalties can be imposed for noncompliance, with online services potentially interfered with or banned entirely if companies don’t obey. EFF opposes these draconian measures on human rights grounds, particularly when companies are forcibly subjected to jurisdiction, and thereby compelled to comply with such laws, without any avenue to challenge demands on human rights grounds.

Civil society has been warning that existing international law enforcement cooperation mechanisms are being abused or twisted to allow political repression, even beyond forceful data localization mandates that seek to bypass international cooperation rules. INTERPOL, for instance, is an intergovernmental organization of 193 countries that facilitates worldwide police cooperation. But Human Rights Watch has documented numerous allegations of how China, Bahrain, and other countries have abused INTERPOL’s Red Notice system, an international “most wanted” list, to locate peaceful critics of government policies for minor offenses—but really, for political gain.

The zero draft’s approach to international cooperation (Chapter 5)

The zero draft also includes a whole chapter on international cooperation in law enforcement investigations. Countries that accede to the convention would promise to empower their own law enforcement in new ways, but also to allow new kinds of cooperation with foreign government agencies, with shockingly little responsibility to ensure that such cooperation isn’t abused. 

Now this draft convention lays groundwork for police cooperation between any two countries, for sharing or collecting evidence with few checks and balances, without any requirement for human rights review or independent oversight. Private data about dissidents could be turned over to brutal regimes simply because they allege those dissidents are cybercriminals.

Scope of international cooperation (Chapter I, Article 5; Chapter II, Article 17; Chapter IV, Article 24; Chapter V, Article 35)
One might think this convention applies only to investigations of specific cybercrimes, based on the list of offenses at the beginning (Articles 6-16). But then Article 35, in the background rules for international cooperation, opens the door to other crimes, including (via Article 17) those covered by any other international treaty; this includes treaties that already exist (like drug trafficking conventions or trade agreements) as well as those that could become “applicable” in the future. 

Limiting this to “serious crime,” as the draft does, isn’t enough. This convention’s powers should apply only to the serious offenses in Articles 6 to 16 of the convention: core cybercrimes that target computers and communication systems. 

This convention’s scope of international cooperation should focus only on specific and targeted criminal investigations and proceedings. Some of the language on the scope of criminal procedural measures and international cooperation was taken from the Budapest Convention, but one important word—”specific”—was somehow dropped. Without it, the draft convention fails to prevent states from authorizing mass surveillance or fishing expeditions. And while these practices should also be prevented by the proportionality principle under Article 24’s conditions and safeguards, the zero draft has removed that article’s application to the international cooperation chapter.

Mandatory dual criminality must be the rule for cross-border cooperation (Chapter V, Article 35)
Dual criminality—the principle that an act must be regarded as a crime in both cooperating countries—should be a cornerstone, yet Article 35 currently treats the dual criminality rule as optional. This rule not only safeguards free expression and dissent but also prevents countries from imposing their laws universally. We strongly advocate for making dual criminality a mandatory provision. Free, democratic nations must demand this so that they aren’t forced to adopt other, repressive nations’ definitions of crime, particularly where blasphemy or criticizing public figures are deemed crimes—definitions inconsistent with human rights law.

While democratic nations may trust their own commitment to human rights and their own enforcement of the dual criminality principle, it's essential to reflect on the broader repercussions. If the draft is accepted in its present form without stringent safeguards and a defined scope, it could provide a legal foundation for international collaboration in the investigation and prosecution of content-related crimes, and other crimes that are inconsistent with international human rights law. This could unintentionally strengthen authoritarian regimes, giving them tools for transnational repression when silencing dissent under the guise of lèse-majesté or criminal defamation laws. While Articles 5 and 24 offer certain protections, their wording must be refined further to limit the draft convention's scope to crimes consistent with international human rights law, and to ensure its consistent application in international cooperation.

Protection of personal data (Article 36 (1))
Article 36(1) describes conditions under which governments may transfer personal data as part of international cooperation on investigations. The current wording requires governments to comply with their own domestic law and more generally with “applicable international law.” Since this topic sometimes is wrongly addressed more permissively in trade law, we urge including a more precise reference to “international human rights law.” Such a change would underscore the need for human rights-based data protection standards. 

Also, we support ARTICLE 19’s suggestion to strike the word “applicable,” as international human rights standards are universal and binding, not subjective. In our joint submission with Privacy International, we propose an amendment to Article 36(1) to integrate minimum human rights-based data protection standards, such as the principles of lawful and fair processing, transparency, purpose limitation, data minimization, accuracy, storage limitation, integrity and confidentiality, and accountability. Data protection principles rooted in existing international human rights law have gained acknowledgment in the Human Rights Committee’s General Comment on Article 17 of the ICCPR and the report of the UN High Commissioner for Human Rights on the right to privacy in the digital age. Subsequent resolutions by the General Assembly on the right to privacy in the digital age have advocated for data protection legislation aligned with international human rights law.


The proposed UN Cybercrime Convention’s zero draft raises too many alarm bells. While its intent, to foster international cooperation against cybercrime, is seemingly noble, its implications could be catastrophic. The draft in its current state offers vast opportunities for misuse, from political repression to sidestepping legal safeguards. The debates in New York City this month aren't just procedural: they will determine whether the convention serves as a tool for true justice or becomes a weapon against it. 

Rigorous scrutiny and oversight of the process by civil society and other stakeholders, and Member States’ unwavering commitment to human rights, are both essential. This is a legally binding treaty: It can compel the reform of national laws all around the world, encouraging expansive and lawless surveillance powers and a race to the bottom of privacy protection. We should fight back while we still can.

Katitza Rodriguez

Congress Amended KOSA, But It's Still A Censorship Bill

1 month 1 week ago

A key Senate committee voted to move forward one of the most dangerous bills we’ve seen in years: the Kids Online Safety Act (KOSA). EFF has opposed the Kids Online Safety Act, S. 1409, because it’s a danger to the rights of all users, both minors and adults. The bill requires all websites, apps, and online platforms to filter and block legal speech. It empowers state attorneys general, who are mostly elected politicians, to file lawsuits based on content they believe will be harmful to young people. 

These fundamental flaws remain in the bill, and EFF and many others continue to oppose it. We urge anyone who cares about free speech and privacy online to send a message to Congress voicing your opposition. 


the "kids online safety act" isn't safe for kids or adults

Before the Senate Commerce Committee voted to move forward the bill on July 27, it incorporated a number of amendments. While none of them change the fundamental problems with KOSA, or our opposition to the bill, we analyze them here. 

The Bill’s Knowledge Standard Has Changed

The first change to the bill is that the knowledge standard has been tightened, so that websites and apps can only be held liable if they actually know there’s a young person using their service. The previous version of the bill regulated any online platform that was used by minors, or was “reasonably likely to be used” by a minor. 

The previous version applied to a huge swath of the internet, since the view of what sites are “reasonably likely to be used” by a minor would be up to attorneys general. Other than sites that took big steps, like requiring age verification, almost any site could be “reasonably likely” to be used by a minor. 

Requiring actual knowledge of minors is an improvement, but the protective effect is small. A site that was told, for instance, that a certain proportion of its users were minors—even if those minors were lying to get access—could be sued by the state. The site might be held liable even over a single minor user it knew about, perhaps one it had repeatedly kicked off. 

The bill still effectively regulates the entire internet that isn’t age-gated. KOSA is fundamentally a censorship bill, so we’re concerned about its effects on any website or service—whether they’re meant to serve solely adults, solely kids, or both. 

Pushing A Chronological Feed Won’t Help 

Another significant change to the bill is a longer amendment from Sen. John Thune (R-SD), who railed against “filter bubbles” during the markup hearing. Thune’s amendment requires larger platforms to provide an algorithm that doesn’t use any user data whatsoever. The amendment would prevent websites and apps from using even basic information, like what city a person lives in, to decide what kind of information to prioritize. 

The Thune amendment is meant to push users towards a chronological feed, which Sen. Thune called during the hearing a “normal chronological feed.” There’s nothing wrong with online information being presented chronologically for those who want it. But just as we wouldn’t let politicians rearrange a newspaper in a particular order, we shouldn’t let them rearrange blogs or other websites. It’s a heavy-handed move to stifle the editorial independence of web publishers.  

There’s also no evidence that chronological feeds make for better or healthier content consumption. A recently published major study on Facebook data specifically studied the effects of a chronological feed, and found that a chronological feed “increased the share of content from designated untrustworthy sources by more than two-thirds relative to the Algorithmic Feed.” 

KOSA Could Be Replaced With An Actually Good Bill On Targeted Ads

A small part of KOSA deals with targeted advertising. It would require disclosures about things like “why the minor is being targeted with a particular advertisement.” 

This part of the bill is actually a positive step—protecting users’ privacy, rather than imposing censorship on the content they can access. But as the only privacy-protective part of the bill, it’s pathetically small. 

At this point in the internet’s history, we need more than mild disclosure requirements and more studies about behavioral ads. They should be banned altogether. And there’s no reason to limit the ban to minors; behavioral ads are tracking the mouse clicks and browsing history of users of all ages. 

A bill that worked to protect internet users by limiting tracking and protecting privacy would be great. That’s not KOSA, which barely even gestures at privacy protections, while offering politicians and police a comprehensive suite of censorship options. 

Other KOSA Amendments Are Minor

  • The amendment marked Lummis 1 specifically exempts a VPN. 
  • The Lummis 2 amendment slightly expands a required government study to look at effects on small businesses. 
  • The Cruz 1 amendment specifies that a wireless messaging exemption includes SMS and MMS messages. 
  • The Cruz 2 amendment changes the word “gender” to “sex.” 
  • The Lujan 1 amendment changes a part of the bill that allows “geolocation” data to be used in certain ways to only allow the use of city-level data. 
  • The Lujan 2 amendment expands the government study portion of the bill to include non-English language users. 

Overall, these small changes to a flawed bill don’t change the basic fact that KOSA is a censorship bill that will harm the rights of both adult and minor users. We oppose it, and urge you to contact your congressperson about it today. 


TELL CONGRESS to oppose kosa

Joe Mullin