🤫 Meta's Secret Spying Scheme | EFFector 37.7

1 day 8 hours ago

Keeping up on the latest digital rights news has never been easier. With a new look, EFF's EFFector newsletter covers the latest details on our work defending your rights to privacy and free expression online.

EFFector 37.7 covers some of the very sneaky tactics that Meta has been using to track you online, and how you can mitigate some of this tracking. In this issue, we're also explaining the legal processes police use to obtain your private online data, and providing an update on the NO FAKES Act—a U.S. Senate bill that takes a flawed approach to concerns about AI-generated "replicas." 

And, in case you missed it in the previous newsletter, we're debuting a new audio companion to EFFector as well! This time, Lena Cohen breaks down the ways that Meta tracks you online and what you—and lawmakers—can do to prevent that tracking. You can listen now on YouTube or the Internet Archive.

LISTEN TO EFFECTOR

EFFECTOR 37.7 - META'S SECRET SPYING SCHEME

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero

Podcast Episode: Cryptography Makes a Post-Quantum Leap

1 day 18 hours ago

The cryptography that protects our privacy and security online relies on the fact that even the strongest computers will take essentially forever to do certain tasks, like factoring very large numbers and finding discrete logarithms – the problems that underpin RSA encryption, Diffie-Hellman key exchange, and elliptic curve cryptography. But what happens when those problems are no longer infeasible for computers to solve – and the cryptography they underpin is no longer safe? Will our online defenses collapse?


(You can also find this episode on the Internet Archive and on YouTube.) 

Not if Deirdre Connolly can help it. As a cutting-edge thinker in post-quantum cryptography, Connolly is making sure that the next giant leap forward in computing – quantum machines that use principles of subatomic mechanics to ignore some constraints of classical mathematics and solve complex problems much faster – doesn't reduce our digital walls to rubble. Connolly joins EFF's Cindy Cohn and Jason Kelley to discuss not only how post-quantum cryptography can shore up those existing walls, but also how it can help us find entirely new methods of protecting our information. 

In this episode you’ll learn about: 

  • Why we're not yet sure exactly what quantum computing can do, and why that's exactly the reason to think about post-quantum cryptography now 
  • What a “Harvest Now, Decrypt Later” attack is, and what’s happening today to defend against it
  • How cryptographic collaboration, competition, and community are key to exploring a variety of paths to post-quantum resilience
  • Why preparing for post-quantum cryptography is and isn’t like fixing the Y2K bug
  • Why the best impact that end users can hope for from post-quantum cryptography is no visible impact at all
  • Don’t worry—you won’t have to know, or learn, any math for this episode!  

Deirdre Connolly is a research and applied cryptographer at Sandbox AQ with particular expertise in post-quantum encryption. She also co-hosts the “Security Cryptography Whatever” podcast about modern computer security and cryptography, with a focus on engineering and real-world experiences. Earlier, she was an engineer at the Zcash Foundation – a nonprofit that builds financial privacy infrastructure for the public good – as well as at Brightcove, Akamai, and HubSpot.

Resources: 

What do you think of “How to Fix the Internet?” Share your feedback here.

Transcript

DEIRDRE CONNOLLY: I only got into cryptography, and especially post-quantum cryptography quickly after that, further into my professional life. I was a software engineer for a while, and the Snowden leaks happened, and phone records get leaked. All of Verizon's phone records get leaked. And then PRISM, and more leaks and more leaks. And as an engineer first, I felt like everything that I was building and we were building and telling people to use was vulnerable.
I wanted to learn more about how to do things securely. I went further and further and further down the rabbit hole of cryptography. And then, I think I saw a talk which was basically like, oh, elliptic curves are vulnerable to a quantum attack. And I was like, well, I, I really like these things. They're very elegant mathematical objects, it's very beautiful. I was sad that they were fundamentally broken, and, I think it was, Dan Bernstein who was like, well, there's this new thing that uses elliptic curves, but is supposed to be post quantum secure.
But the math is very difficult and no one understands it. I was like, well, I want to understand it if it preserves my beautiful elliptic curves. That's how I just went, just running, screaming downhill into post quantum cryptography.

CINDY COHN: That's Deirdre Connolly talking about how her love of beautiful math, and her anger at the Snowden revelations about how the government was undermining security, led her to the world of post-quantum cryptography.
I'm Cindy Cohn, the executive director of the Electronic Frontier Foundation.

JASON KELLEY: And I'm Jason Kelley, EFF's activism director. You're listening to How to Fix the Internet.

CINDY COHN: On this show we talk to tech leaders, policy-makers, thinkers, artists and engineers about what the future could look like if we get things right online.

JASON KELLEY: Our guest today is at the forefront of the future of digital security. And just a heads up that this is one of the more technical episodes that we've recorded -- you'll hear quite a bit of cryptography jargon, so we've written up some of the terms that come up in the show notes, so take a look there if you hear a term you don't recognize.

CINDY COHN: Deirdre Connolly is a research engineer and applied cryptographer at Sandbox AQ, with particular expertise in post-quantum encryption. She also co-hosts the Security, Cryptography, Whatever podcast, so she's something of a cryptography influencer too. When we asked our tech team here at EFF who we should be speaking with on this episode about quantum cryptography and quantum computers more generally, everyone agreed that Deirdre was the person. So we're very glad to have you here. Welcome, Deirdre.

DEIRDRE CONNOLLY: Thank you very much for having me. Hi.

CINDY COHN: Now we obviously work with a lot of technologists here and, and certainly personally cryptography is near and dear to my heart, but we are not technologists, neither Jason nor I. So can you just give us a baseline of what post-quantum cryptography is and why people are talking about it?

DEIRDRE CONNOLLY: Sure. So a lot of the cryptography that we have deployed in the real world relies on a lot of math and security assumptions on that math based on things like abstract groups, Diffie-Hellman, elliptic curves, finite fields, and factoring products of large primes, such as in, uh, systems like RSA.
All of these, constructions and problems, mathematical problems, have served us very well in the last 40-ish years of cryptography. They've let us build very useful, efficient, small cryptography that we've deployed in the real world. It turns out that they are all also vulnerable in the same way to advanced cryptographic attacks that are only possible and only efficient when run on a quantum computer, and this is a class of computation, a whole new class of computation versus digital computers, which is the main computing paradigm that we've been used to for the last 75 years plus.
Quantum computers allow these new classes of attacks, especially variants of Shor's algorithm – named after Dr. Peter Shor – that basically, when run on a sufficiently large, cryptographically relevant quantum computer, make all of the asymmetric cryptography based on these problems that we've deployed very, very vulnerable.
So post-quantum cryptography is trying to take that class of attack into consideration and building cryptography to both replace what we've already deployed and make it resilient to this kind of attack, and trying to see what else we can do with these fundamentally different mathematical and cryptographic assumptions when building cryptography.
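To make the stakes concrete, here is a toy editorial sketch (not anything discussed in the episode, and using deliberately tiny, insecure numbers) of why efficient factoring breaks RSA: anyone who can factor the public modulus can rebuild the private key and decrypt at will.

```python
# Toy RSA with deliberately tiny numbers, for illustration only.
# Real moduli are thousands of bits long; the point is that recovering the
# private key reduces to factoring the public modulus n.

p, q = 61, 53                 # secret primes (tiny and insecure on purpose)
n = p * q                     # public modulus
e = 17                        # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)           # private exponent (Python 3.8+ modular inverse)

message = 42
ciphertext = pow(message, e, n)          # encrypt with the public key
assert pow(ciphertext, d, n) == message  # decrypt with the private key

# An attacker who can factor n (for example with Shor's algorithm on a large
# enough quantum computer) recovers p and q and, from them, the private key.
def factor(n):
    """Brute-force trial division; stands in for a quantum factoring attack."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    raise ValueError("n is prime")

p2, q2 = factor(n)
d_recovered = pow(e, -1, (p2 - 1) * (q2 - 1))
assert pow(ciphertext, d_recovered, n) == message  # decrypted without the key
```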

CINDY COHN: So we've kind of, we've secured our stuff behind a whole lot of walls, and we're slowly building a bulldozer. This is a particular piece of the world where the speed at which computers can do things has been part of our protection, and so we have to rethink that.

DEIRDRE CONNOLLY: Yeah, quantum computing is a fundamentally new paradigm of how we process data that promises to have very interesting, uh, applications beyond what we can envision right now. Like things like protein folding, chemical analysis, nuclear simulation, and cryptanalysis, or very strong attacks against cryptography.
But it is a field where it's such a fundamentally new computational paradigm that we don't even know what its applications fully would be yet, because like we didn't fully know what we were doing with digital computers in the forties and fifties. Like they were big calculators at one time.

JASON KELLEY: When it was suggested that we talk to you about this. I admit that I have not heard much about this field, and I realized quickly when looking into it that there's sort of a ton of hype around quantum computing and post-quantum cryptography and that kind of hype can make it hard to know whether or not something is like actually going to be a big thing or, whether this is something that's becoming like an investment cycle, like a lot of things do. And one of the things that quickly came up as an actual, like real danger is what's called sort of “save now decrypt later.”

DEIRDRE CONNOLLY: Oh yeah.

JASON KELLEY: Right? We have all these messages, for example, that have been encrypted with current encryption methods. And if someone holds onto those, they can decrypt them using quantum computers in the future. How serious is that danger?

DEIRDRE CONNOLLY: It’s definitely a concern, and it's the number one driver, I would say, of post-quantum crypto adoption in broad industry right now: mitigating the threat of a Store Now/Decrypt Later attack, also known as Harvest Now/Decrypt Later – a bunch of names that all mean the same thing.
And fundamentally, it's, uh – especially if you're doing any kind of key agreement over a public channel. And doing key agreement over a public channel is part of the whole purpose: you want to be able to talk to someone who you've never really touched base with before, and you both know some public parameters that even your adversary knows. And based on just the fact that you can send messages to each other, and some public parameters, and some secret values that only you know and only the other party knows, you can establish a shared secret, and then you can start encrypting traffic between you to communicate. And this is what you do in your web browser when you have an HTTPS connection – that's over TLS.
This is what you do with Signal or WhatsApp or, you know, Facebook Messenger with the encrypted communications. They're using Diffie-Hellman as part of the protocol to set up a shared secret, and then they use that to encrypt the message bodies that you're sending back and forth between you.
But if you can just store all those communications over that public channel, and the adversary knows the public parameters 'cause they're freely published – that's part of Kerckhoffs' principle about good cryptography: the only thing that the adversary shouldn't know about your crypto system is the secret key values that you're actually using. It should be secure against an adversary that knows everything that you know, except the secret key material.
And you can just record all those public messages and all the public key exchange messages, and you just store them in a big database somewhere. And then when you have your large cryptographically relevant quantum computer, you can rifle through your files and say, hmm, let's point it at this.
And that's the threat that's live now to the stuff that we have already deployed and the stuff that we're continuing to do communications on now that is protected by elliptic curve Diffie Hellman, or Finite Field Diffie Hellman, or RSA. They can just record that and just theoretically point an attack at it at a later date when that attack comes online.
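As a toy sketch of the situation Connolly describes (with deliberately small, insecure parameters invented for illustration, not taken from any real deployment): everything a Harvest Now/Decrypt Later adversary needs to record travels over the public channel, and only the hardness of the discrete-log problem, the thing Shor's algorithm removes, keeps the shared secret out of reach.

```python
# Toy finite-field Diffie-Hellman. Everything marked "public" is what a
# Harvest Now / Decrypt Later adversary can record today; only the exponents
# a and b (and the derived shared secret) stay private.
import secrets

p = 0xFFFFFFFB                     # public prime modulus (toy size; real groups are 2048+ bits)
g = 5                              # public base

a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent

A = pow(g, a, p)                   # public: sent over the wire, recordable
B = pow(g, b, p)                   # public: sent over the wire, recordable

shared_alice = pow(B, a, p)        # both sides derive the same secret value
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob

# The recorded transcript is (p, g, A, B) plus the encrypted traffic.
# A classical attacker must solve a discrete logarithm (find a given g, p, A),
# which is infeasible at real parameter sizes. A large quantum computer running
# Shor's algorithm could solve it later and decrypt the stored traffic
# retroactively.
```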
So like in TLS, there's a lot of browsers and servers and infrastructure providers that have updated to post-quantum resilient solutions for TLS. So they're using a combination of the classic elliptic curve Diffie-Hellman and a post-quantum KEM, uh, called ML-KEM, that was standardized by the United States based on a public design that's been, you know, an international collaboration to help do this design.
I think that's been deployed in Chrome, and I think it's deployed by Cloudflare, and it's getting deployed – I think it's now become the default option in the latest version of OpenSSL, and a lot of other open source projects. So that's TLS. Similar approaches are being adopted in OpenSSH, the most popular SSH implementation in the world. Signal, the service, has updated their key exchange to also include a post-quantum KEM in their updated key establishment. So when you start a new conversation with someone, or reset a conversation with someone, on the latest version of Signal, it is now protected against that sort of attack.
That is definitely happening and it's happening the most rapidly because of that Store now/Decrypt later attack, which is considered live. Everything that we're doing now can just be recorded and then later when the attack comes online, they can attack us retroactively. So that's definitely a big driver of things changing in the wild right now.
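A minimal sketch of the hybrid idea Connolly describes, assuming the two shared secrets have already been produced by a real handshake (the placeholder byte strings below stand in for the X25519 and ML-KEM outputs): both secrets feed the key derivation, so an attacker who later breaks only the classical half still cannot recover the session key.

```python
# Sketch of a hybrid key derivation: the session key depends on BOTH a
# classical ECDH shared secret and a post-quantum KEM shared secret.
# The two inputs are placeholders; in a real handshake they would come from
# X25519 and ML-KEM respectively.
import hashlib
import hmac

ecdh_shared_secret = b"\x01" * 32    # placeholder for the X25519 output
pq_kem_shared_secret = b"\x02" * 32  # placeholder for the ML-KEM output

def hkdf_sha256(ikm: bytes, info: bytes, length: int = 32) -> bytes:
    """Minimal HKDF (RFC 5869) with an all-zero salt; enough for a sketch."""
    prk = hmac.new(b"\x00" * 32, ikm, hashlib.sha256).digest()      # extract
    okm, block, counter = b"", b"", 1
    while len(okm) < length:                                        # expand
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# Concatenating both secrets before key derivation means breaking only the
# classical half (say, with a future quantum computer) is no longer enough.
session_key = hkdf_sha256(ecdh_shared_secret + pq_kem_shared_secret,
                          info=b"toy hybrid handshake")
print(session_key.hex())
```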

JASON KELLEY: Okay. I'm going to throw out two parallels for my very limited knowledge to make sure I understand. This reminds me a little bit of sort of the work that had to be done before Y2K in, in the sense of like, now people think nothing went wrong and nothing was ever gonna go wrong, but all of us working anywhere near the field know actually it took a ton of work to make sure that nothing blew up or stopped working.
And the other is that in, I think it was 1998, EFF was involved in something we called Deep Crack, where we made, that's a, I'm realizing now that's a terrible name. But anyway, the DES cracker, um, we basically wanted to show that DES was capable of being cracked, right? And that this was a - correct me if I'm wrong - it was some sort of cryptographic standard that the government was using and people wanted to show that it wasn't sufficient.

DEIRDRE CONNOLLY: Yes – I think it was the original Data Encryption Standard. And then after its vulnerability was shown, they, they tripled it up to, to make it useful. And that's why Triple DES is still used in a lot of places and is actually considered okay. And then later came the Advanced Encryption Standard, AES, which we prefer today.

JASON KELLEY: Okay, so we've learned the lesson, or we are learning the lesson, it sounds like.

DEIRDRE CONNOLLY: Uh huh.

CINDY COHN: Yeah, I think that that's, that's right. I mean, EFF built the DES cracker because in the nineties the government was insisting on something that everybody knew was really, really insecure, and was going to only get worse as computers got stronger and strong computers got into more people's hands – um, to basically show that the emperor had no clothes, um, that this wasn't very good.
And I think with the NIST standards and what's happening with post-quantum is really, you know, the hopeful version is we learned that lesson and we're not seeing government trying to pretend like there isn't a risk in order to preserve old standards, but instead leading the way with new ones. Is that fair?

DEIRDRE CONNOLLY: That is very fair. NIST ran this post-quantum competition over almost 10 years, and it had over 80 submissions in the first round from all over the world, from industry, academia, and a mix of everything in between, and then it narrowed it down to the ones that are – they're not all out yet, but there's the key agreement one, called ML-KEM, and three signatures. And there's a mix of cryptographic problems that they're based on, but there were multiple rounds, lots of feedback, lots of things got broken.
This competition has absolutely led the way for the world of getting ready for post-quantum cryptography. There are some competitions that have happened in Korea, and I think there's some work happening in China for their, you know, for their area.
There are other open standards and there are standards happening in other standards bodies, but the NIST competition has led the way. And because it's all open – all these standards are open, and all of the work and the cryptanalysis that has gone in for the whole stretch has been public, and all these standards and drafts and analysis and attacks have been public – it's able to benefit everyone in the world.

CINDY COHN: I got started in the crypto wars in the nineties, where the government was kind of the problem – and they still are. And I do wanna ask you about whether you're seeing any role of the kinda national security, FBI infrastructure, which has traditionally tried to put a thumb on the scales and make things less secure so that they could have access – if you're seeing any of that there.
But on the NIST side, I think this provides a nice counter example of how government can help facilitate building a better world sometimes, as opposed to being the thing we have to drag kicking and screaming into it.
But let me circle around to the question I embedded in that, which is, you know, one of the problems that we know happened in the nineties around DES, and then of course some of the Snowden revelations indicated some mucking about in security as well behind the scenes by the NSA. Are you seeing anything like that, and what should we be on the lookout for?

DEIRDRE CONNOLLY: Not in the PQC stuff. Uh, there, like there have been a lot of people that were paying very close attention to what these independent teams were proposing and then what was getting turned into a standard or a proposed standard and every little change, because I, I was closely following the key establishment stuff.
Um, every little change, people were trying to be like, did you tweak? Why did you tweak that? Like, is there a good reason? And, like, running down basically all of those things. And including trying to get into the nitty gritty of, like, okay, we think this is approximately this many bits of security using these parameters, and talking about, I dunno, 123 versus 128 bits, and really paying attention to all of that stuff.
And I don't think there was any evidence of anything like that. And that cuts both ways, because there was – I don't remember which crypto scheme it was, but there was definitely an improvement made very quietly back in the day by, I think, some of the folks at NSA to, I think it was the S-boxes, and I don't remember if it was DES or AES or whatever it was.
But people didn't understand at the time, because it was related to advanced – uh, I think it was differential cryptanalysis attacks that folks inside there knew about, and people in outside academia didn't quite know about yet. And then after the fact they were like, oh, they've made this better. Um, we're not even seeing any evidence of anything of that character either.
It's just sort of like, it's very open. Like, if everything's proceeding well and the products of these post-quantum standards are going well, you know, leave it alone. And so everything looks good. And, especially for NSA – uh, National Security Systems in the, in the United States – they have updated their own targets to migrate to post-quantum, and they are relying fully on the highest security level of these new standards.
So like they are eating their own dog food. They're protecting the highest classified systems and saying these need to be fully migrated to fully post-quantum key agreement – uh, and I think signatures at different times – but it has to be by, like, 2035. So if they were doing anything to kind of twiddle with those standards, they'd be, you know, hurting themselves and shooting themselves in the foot.

CINDY COHN: Well fingers crossed.

DEIRDRE CONNOLLY: Yes.

CINDY COHN: Because I wanna build a better internet, and a better internet means that they aren't secretly messing around with our security. And so this is, you know, cautiously good news.

JASON KELLEY: Let's take a quick moment to thank our sponsor.
“How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.
We also want to thank EFF members and donors. EFF has been fighting for digital rights for 35 years, and that fight is bigger than ever, so please, if you like what we do, go to eff.org/pod to donate. Also, we’d love for you to join us at this year’s EFF awards, where we celebrate the people working towards the better digital future that we all care so much about. Those are coming up on September 12th in San Francisco. You can find more information about that at eff.org/awards.
We also wanted to share that our friend Cory Doctorow has a new podcast. Listen to this.  [Who Broke the Internet trailer]

JASON KELLEY: And now, back to our conversation with Deirdre Connolly.

CINDY COHN: I think the thing that's fascinating about this is kind of seeing this cat and mouse game about the ability to break codes, and the ability to build codes and systems that are resistant to the breaking, kind of playing out here in the context of building better computers for everyone.
And I think it's really fascinating. And I think also, for people – you know, this is a pretty technical conversation, um, even, you know, uh, for our audience. But this is the stuff that goes on under the hood of how we keep journalists safe, how we keep activists safe, how we keep us all safe, whether it's our bank accounts or our, you know – people are talking about mobile IDs now and other, you know, all sorts of sensitive documents that are going to not be in physical form anymore, but are gonna be in digital form.
And unless we get this lock part right, we're really creating problems for people. And you know, what I really appreciate about you and the other people kind of in the midst of this fight is it's very unsung, right? It's kind of under the radar for the rest of us, but yet it's the, it's the ground that we need to stand on to, to be safe moving forward.

DEIRDRE CONNOLLY: Yeah, and there's a lot of assumptions, uh, from even the low-level theoretical cryptographers, to the people implementing their stuff in software, to the people trying to deploy it – a lot of assumptions that have been baked into what we've built that, to a degree, don't quite fit some of the things we've been able to build in a post-quantum secure way, or the way we think is a post-quantum secure way.
Um, we're gonna need to change some stuff, and we think we know how to change some stuff to make it work. But we are hoping that we don't accidentally introduce any vulnerabilities or gaps.
We're trying, but also we're not a hundred percent sure that we're not missing something, 'cause these things are new. And so we're trying, and we're also trying to make sure we don't break things as we change them because we're trying to change them to be post quantum resilient. But you know, once you change something, if there's a possibility, you, you just didn't understand it completely. And you don't wanna break something that was working well in one direction because you wanna improve it in another direction.

CINDY COHN: And that's why I think it's important to continue to have a robust community of people who are the breakers, right? Who are hackers, who are attacking. And that is a, you know, that's a mindset, right? That's a way of thinking about stuff that is important to protect and nurture, um, because, you know, there's an old quote from Bruce Schneier: anyone can build a crypto system that they themselves cannot break. Right? It takes a community of people trying to really pound away at something to figure out where the holes are.
And you know, a lot of the work that EFF does around coders' rights and other kinds of things is to make sure that there's space for that. And I think it's gonna be as needed in a quantum world as it was in a kind of classical computer world.

DEIRDRE CONNOLLY: Absolutely. I'm confident that we will learn a lot more from the breakers about this new cryptography, because, like, we've tried to be robust through this, you know, NIST competition, and a lot of the things that we learn apply to other constructions as they come out. But, like, there's a whole area of people who are going to be encountering this kind of newish cryptography for the first time, and they kind of look at it and they're like, oh, uh, I think I might be able to do something interesting with this. And we'll all learn more, and we'll try to patch and update as quickly as possible.

JASON KELLEY: And this is why we have competitions to figure out what the best options are and why some people might favor one algorithm over another for different, different processes and things like that.

DEIRDRE CONNOLLY: And that's why we're probably gonna have a lot of different flavors of post-quantum cryptography getting deployed in the world, because it's not just, ah, you know, I don't love NIST, I'm gonna do my own thing in my own country over here – or have different requirements. There is that at play, but also you're trying to not put all your eggs in one basket as well.

CINDY COHN: Yeah, so we want a menu of things so that people can really pick from, you know, vetted but different strategies. So I wanna ask the kind of core question for the podcast, which is, um, what does it look like if we get this right – if we get quantum computing and, you know, post-quantum crypto, right?
How does the world look different? Or does it just look the same? How, what, what does it look like if we do this well?

DEIRDRE CONNOLLY: Hopefully to a person just using their phone or using their computer to talk to somebody on the other side of the world, hopefully they don't notice. Hopefully to them, if they're, you know, deploying a website and they're like, ah, I need to get a Let’s Encrypt certificate or whatever.
Hopefully Let's Encrypt – just, you know, insert bot – just kind of does everything right by default and they don't have to worry about it.
Um, for the builders, it should be, we have a good recommended menu of cryptography that you can use when you're deploying TLS, when you're deploying SSH, uh, when you're building cryptographic applications, especially.
So like if you are building something in Go or Java or, you know, whatever it might be, the crypto library in your language will have the updated recommended signature algorithm or key agreement algorithm, and they have code snippets to say, like, this is how you should use it, and they will deprecate the older stuff.
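For instance, here is a hedged sketch of what that looks like today with the third-party Python `cryptography` package, which exposes the classical X25519 half of such a recommended key agreement; whether the post-quantum KEM half (ML-KEM) is available depends on your library and version, so it appears only as a comment.

```python
# Sketch: classical X25519 key agreement via a mainstream crypto library
# (pip install cryptography). In a hybrid deployment, a post-quantum KEM such
# as ML-KEM would run alongside this and both outputs would feed the key
# derivation; that half is omitted here because library support varies.
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

alice_private = X25519PrivateKey.generate()
bob_private = X25519PrivateKey.generate()

# Each side sends only its public key over the wire.
alice_public = alice_private.public_key()
bob_public = bob_private.public_key()

# Both sides derive the same 32-byte shared secret.
alice_shared = alice_private.exchange(bob_public)
bob_shared = bob_private.exchange(alice_public)
assert alice_shared == bob_shared
```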
And, like, unfortunately there's gonna be a long time where there's gonna be a mix: the new post-quantum stuff that we know how to use and know how to deploy, and protect the most important, you know, stuff – like to mitigate Store Now/Decrypt Later and, you know, get those signatures on the most important, uh, protected stuff.
Uh, get those done. But there's a lot of stuff that we're not really clear about how we wanna do yet. And kind of going back to one of the things you mentioned earlier, uh, comparing this to Y2K: there was a lot of work that went into mitigating Y2K before, during, and immediately after.
Unfortunately, the comparison to the post-quantum migration kind of falls down, because after Y2K, if you hadn't fixed something, it would break. And you would notice, usually in an obvious way, and then you could go find it. You fix the most important stuff that, you know, if it broke, like, you would lose billions of dollars or, you know, whatever – you'd have an outage.
For cryptography, especially the stuff that's a little bit fancier, um, you might not know it's broken, because the adversary is not gonna – it's not gonna blow up.
And you have to, you know, reboot a server or patch something and then, you know, redeploy. If it's gonna fail, it's gonna fail quietly. And so we're trying to kind of find these things, or at least make the kind of longer tail of stuff, uh, find fixes for that upfront, you know, so that at least the option is available.
But for a regular person, hopefully they shouldn't notice. So everyone's trying really hard to make it so that the best security, in terms of the cryptography, is deployed without downgrading your experience. We're gonna keep trying to do that.
I don't wanna build crap and say “Go use it.” I want you to be able to just go about your life and use a tool that's supposed to be useful and helpful. And it's not accidentally leaking all your data to some third party service or just leaving a hole on your network for any, any actor who notices to walk through and you know, all that sort of stuff.
So whether it's like implementing things securely in software, or it's cryptography or you know, post-quantum weirdness, like for me, I just wanna build good stuff for people, that's not crap.

JASON KELLEY: Everyone listening to this agrees with you. We don't want to build crap. We want to build some beautiful things. Let's go out there and do it.

DEIRDRE CONNOLLY: Cool.

JASON KELLEY: Thank you so much, Deirdre.

DEIRDRE CONNOLLY: Thank you!

CINDY COHN: Thank you Deirdre. We really appreciate you coming and explaining all of this to, you know, uh, the lawyer and activist at EFF.

JASON KELLEY: Well, I think that was probably the most technical conversation we've had, but I followed along pretty well, and I feel like at first I was very nervous based on the save-and-decrypt concerns. But after we talked to Deirdre, I feel like the people working on this, just like for Y2K, are pretty much gonna keep us out of hot water. And I learned a lot more than I knew before we started the conversation. What about you, Cindy?

CINDY COHN: I learned a lot as well. I mean, cryptography and attacks on security – it's always, you know, it's a process, and it's a process by which we do the best we can, and then we also do the best we can to rip it apart and find all the holes, and then we iterate forward. And it's nice to hear that that model is still the model, even as we get into something like quantum computers, which, um, frankly are still hard to conceptualize.
But I agree. I think the good news outta this interview is, I feel like there's a lot of pieces in place to try to do this right – to have this tremendous shift in computing, which we don't know when it's coming, but I think the research indicates that it IS coming, be something that we can handle, um, rather than something that overwhelms us.
And I think that's really – it's good to hear that good people are trying to do the right thing here, since it's not inevitable.

JASON KELLEY: Yeah, and it is nice when someone's sort of best vision for what the future looks like is, hopefully, that your life will have no impacts from this because everything will be taken care of. That's always good.
I mean, it sounds like, you know, the main thing for EFF is, as you said, we have to make sure that security engineers, hackers have the resources that they need to protect us from these kinds of threats and, and other kinds of threats obviously.
But, you know, that's part of EFF's job, like you mentioned. Our job is to make sure that there are people able to do this work and be protected while doing it, so that when the solutions do come about, you know, they work and they're implemented, and the average person doesn't have to know anything and isn't vulnerable.

CINDY COHN: Yeah, I also think that, um, I appreciated her vision that this is a – you know, the future's gonna be not just one one-size-fits-all solution, but a menu of things that takes into account, you know, both what works better in terms of, you know, bandwidth and compute time, but also, you know, what people actually need.
And I think that's a piece that's kind of built into the way that this is happening that's also really hopeful. In the past – and I was around when EFF built the DES cracker – um, you know, we had a government that was saying, you know, everything's fine, everything's fine, when everybody knew that things weren't fine.
So it's also really hopeful that that's not the position that NIST is taking now, and that's not the position that people who may not even pick the NIST standards but pick other standards are really thinking through.

JASON KELLEY: Yeah, it's very helpful and positive and nice to hear when something has improved for the better. Right? And that's what happened here. We had this, this different attitude from, you know, government at large in the past and it's changed and that's partly thanks to EFF, which is amazing.

CINDY COHN: Yeah, I think that's right. And, um, you know, we'll see going forward, you know, the governments change and they go through different things, but this is, this is a hopeful moment and we're gonna push on through to this future.
I think there's a lot of, you know, there's a lot of worry about quantum computers and what they're gonna do in the world, and it's nice to have a little vision of, not only can we get it right, but there are forces in place that are getting it right. And of course it does my heart so, so good that, you know, someone like Deirdre was inspired by Snowden and dove deep and figured out how to be one of the people who was building the better world. We've talked to so many people like that, and this is a particular, you know, little geeky corner of the world. But, you know, those are our people and that makes me really happy.

JASON KELLEY: Thanks for joining us for this episode of How to Fix the Internet.
If you have feedback or suggestions, we'd love to hear from you. Visit EFF dot org slash podcast and click on listener feedback. While you're there, you can become a member, donate, maybe even pick up some merch and just see what's happening in digital rights this week and every week.
Our theme music is by Nat Keefe of BeatMower with Reed Mathis
How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology.
We’ll see you next time.
I’m Jason Kelley…

CINDY COHN: And I’m Cindy Cohn.

MUSIC CREDITS: This podcast is licensed creative commons attribution 4.0 international, and includes the following music licensed creative commons attribution 3.0 unported by its creators: Drops of H2O, The Filtered Water Treatment by Jay Lang. Sound design, additional music and theme remixes by Gaetan Harris.

Josh Richman

EFFecting Change: EFF Turns 35!

2 days 3 hours ago

We're wishing EFF a happy birthday on July 10! Since 1990, EFF's lawyers, activists, analysts, and technologists have used everything in their toolkit to ensure that technology supports freedom, justice, and innovation for all people of the world. They've seen it all and in this special edition of our EFFecting Change livestream series, leading experts at EFF will explore what's next for technology users.

EFFecting Change Livestream Series:
EFF Turns 35!
Thursday, July 10th
11:00 AM - 12:00 PM Pacific - Check Local Time
This event is LIVE and FREE!


Join EFF Executive Director Cindy Cohn, EFF Legislative Director Lee Tien, EFF Director of Cybersecurity Eva Galperin, and Professor / EFF Board Member Yoshi Kohno for this live Q&A. Learn what they have seen and how we can fuel the fight for privacy, free expression, and a future where digital freedoms are protected for everyone. 

We hope you and your friends can join us live! Be sure to spread the word, and share our past livestreams. Please note that all events will be recorded for later viewing on our YouTube page.

Want to make sure you don’t miss our next livestream? Here’s a link to sign up for updates about this series: eff.org/ECUpdates.

Aaron Jue

Flock Safety’s Feature Updates Cannot Make Automated License Plate Readers Safe

6 days 1 hour ago

Two recent statements from the surveillance company—one addressing Illinois privacy violations and another defending the company's national surveillance network—reveal a troubling pattern: when confronted by evidence of widespread abuse, Flock Safety has blamed users, downplayed harms, and doubled down on the very systems that enabled the violations in the first place.

Flock's aggressive public relations campaign to salvage its reputation comes as no surprise. Last month, we described how investigative reporting from 404 Media revealed that a sheriff's office in Texas searched data from more than 83,000 automated license plate reader (ALPR) cameras to track down a woman suspected of self-managing an abortion. (A scenario that may have been avoided, it's worth noting, had Flock taken action when they were first warned about this threat three years ago).

Flock calls the reporting on the Texas sheriff's office "purposefully misleading," claiming the woman was searched for as a missing person at her family's request rather than for her abortion. But that ignores the core issue: this officer used a nationwide surveillance dragnet (again: over 83,000 cameras) to track someone down, and used her suspected healthcare decisions as a reason to do so. Framing this as concern for her safety plays directly into anti-abortion narratives that depict abortion as dangerous and traumatic in order to justify increased policing, criminalization, control—and, ultimately, surveillance.

Flock Safety has blamed users, downplayed harms, and doubled down on the very systems that enabled the violations in the first place.

As if that weren't enough, the company has also come under fire for how its ALPR network data is being actively used to assist in mass deportation. Despite U.S. Immigration and Customs Enforcement (ICE) having no formal agreement with Flock Safety, public records revealed "more than 4,000 nation and statewide lookups by local and state police done either at the behest of the federal government or as an 'informal' favor to federal law enforcement, or with a potential immigration focus." The network audit data analyzed by 404 exposed an informal data-sharing environment that creates an end-run around oversight and accountability measures: federal agencies can access the surveillance network through local partnerships without the transparency and legal constraints that would apply to direct federal contracts.

Flock Safety is adamant this is "not Flock's decision," and by implication, not their fault. Instead, the responsibility lies with each individual local law enforcement agency. In the same breath, they insist that data sharing is essential, loudly claiming credit when the technology is involved in cross-jurisdictional investigations—but failing to show the same attitude when that data-sharing ecosystem is used to terrorize abortion seekers or immigrants. 

Flock Safety: The Surveillance Social Network

In growing from a 2017 startup to a $7.5 billion company "serving over 5,000 communities," Flock allowed individual agencies wide latitude to set and regulate their own policies. In effect, this approach offered cheap surveillance technology with minimal restrictions, leaving major decisions and actions in the hands of law enforcement while the company scaled rapidly.

And they have no intention of slowing down. Just this week, Flock launched its Business Network, facilitating unregulated data sharing amongst its private sector security clients. "For years, our law enforcement customers have used the power of a shared network to identify threats, connect cases, and reduce crime. Now, we're extending that same network effect to the private sector," Flock Safety's CEO announced.

Flock Safety wooing law enforcement officers at the 2023 International Chiefs of Police Conference.

The company is building out a new mass surveillance network using the exact template that ended with the company having to retrain thousands of officers in Illinois on how not to break state law—the same template that made it easy for officers to do so in the first place. Flock's continued integration of disparate surveillance networks across the public and private spheres—despite the harms that have already occurred—is owed in part to the one thing that it's gotten really good at over the past couple of years: facilitating a surveillance social network. 

Employing marketing phrases like "collaboration" and "force multiplier," Flock encourages as much sharing as possible, going as far as to claim that network effects can significantly improve case closure rates. They cultivate a sense of shared community and purpose among users so they opt into good faith sharing relationships with other law enforcement agencies across the country. But it's precisely that social layer that creates uncontrollable risk.

The possibility of human workarounds at every level undermines any technical safeguards Flock may claim. Search term blocking relies on officers accurately labeling search intent—a system easily defeated by entering vague reasons like "investigation" or incorrect justifications, made either intentionally or not. And, of course, words like "investigation" or "missing person" can mean virtually anything, offering no value to meaningful oversight of how and for what the system is being used. Moving forward, sheriff's offices looking to avoid negative press can surveil abortion seekers or immigrants with ease, so long as they use vague and innocuous-sounding reasons. 

The same can be said for case number requirements, which depend on manual entry. This can easily be circumvented by reusing legitimate case numbers for unauthorized searches. Audit logs only track inputs, not contextual legitimacy. Flock's proposed AI-driven audit alerts, something that may be able to flag suspicious activity after searches (and harm) have already occurred, rely on local agencies to self-monitor misuse—despite their demonstrated inability to do so.

Flock operates as a single point of failure that can compromise—and has compromised—the privacy of millions of Americans simultaneously.

And, of course, even the most restrictive department policy may not be enough. Austin, Texas, had implemented one of the most restrictive ALPR programs in the country, and the program still failed: the city's own audit revealed systematic compliance failures that rendered its guardrails meaningless. The company's continued appeal to "local policies" means nothing when Flock's data-sharing network does not account for how law enforcement policies, regulations, and accountability vary by jurisdiction. You may have a good relationship with your local police, who solicit your input on what their policy looks like; you don't have that same relationship with hundreds or thousands of other agencies with whom they share their data. So if an officer on the other side of the country violates your privacy, it’d be difficult to hold them accountable. 

ALPR surveillance systems are inherently vulnerable to both technical exploitation and human manipulation. These vulnerabilities are not theoretical—they represent real pathways for bad actors to access vast databases containing millions of Americans' location data. When surveillance databases are breached, the consequences extend far beyond typical data theft—this information can be used to harass, stalk, or even extort. The intimate details of people's daily routines, their associations, and their political activities may become available to anyone with malicious intent. Flock operates as a single point of failure that can compromise—and has compromised—the privacy of millions of Americans simultaneously.

Don't Stop de-Flocking

Rather than addressing legitimate concerns about privacy, security, and constitutional rights, Flock has only promised updates that fall short of meaningful reforms. These software tweaks and feature rollouts cannot assuage the fear engendered by the massive surveillance system it has built and continues to expand.

A typical specimen of Flock Safety's automated license plate readers.

Flock's insistence that what's happening with abortion criminalization and immigration enforcement has nothing to do with them—that these are just red-state problems or the fault of rogue officers—is concerning. Flock designed the network that is being used, and the public should hold them accountable for failing to build in protections from abuse that cannot be easily circumvented.

Thankfully, that's exactly what's happening: cities like Austin, San Marcos, Denver, Norfolk, and San Diego are pushing back. And it's not nearly as hard a choice as Flock would have you believe: Austinites are weighing the benefits of a surveillance system that generates a hit less than 0.02% of the time against the possibility that scanning 75 million license plates will result in an abortion seeker being tracked down by police, or an immigrant being flagged by ICE in a so-called "sanctuary city." These are not hypothetical risks. It is already happening.

Given how pervasive, sprawling, and ungovernable ALPR sharing networks have become, the only feature update we can truly rely on to protect people's rights and safety is no network at all. And we applaud the communities taking decisive action to dismantle this surveillance infrastructure.

Follow their lead: don't stop de-flocking.

Sarah Hamid

Today's Supreme Court Decision on Age Verification Tramples Free Speech and Undermines Privacy

6 days 5 hours ago

Today’s decision in Free Speech Coalition v. Paxton is a direct blow to the free speech rights of adults. The Court ruled that “no person—adult or child—has a First Amendment right to access speech that is obscene to minors without first submitting proof of age.” This ruling allows states to enact onerous age-verification rules that will block adults from accessing lawful speech, curtail their ability to be anonymous, and jeopardize their data security and privacy. These are real and immense burdens on adults, and the Court was wrong to ignore them in upholding Texas’ law.  

Importantly, the Court's reasoning applies only to age-verification rules for certain sexual material, and not to age limits in general. We will continue to fight against age restrictions on online access more broadly, such as on social media and specific online features.  

Still, the decision has immense consequences for internet users in Texas and in other states that have enacted similar laws. The Texas law forces adults to submit personal information over the internet to access entire websites that hold some amount of sexual material, not just pages or portions of sites that contain specific sexual materials. Many sites that cannot reasonably implement age verification measures for reasons such as cost or technical requirements will likely block users living in Texas and other states with similar laws wholesale.  

Importantly, the Court's reasoning applies only to age-verification rules for certain sexual material, and not to age limits in general. 

Many users will not be comfortable sharing private information to access sites that do implement age verification, for reasons of privacy or concern for data breaches. Many others do not have a driver’s license or photo ID to complete the age verification process. This decision will, ultimately, deter adult users from speaking and accessing lawful content, and will endanger the privacy of those who choose to go forward with verification. 

What the Court Said Today 

In the 6-3 decision, the Court ruled that Texas’ HB 1181 is constitutional. This law requires websites that Texas decides are composed of “one-third” or more of “sexual material harmful to minors” to confirm the age of users by collecting age-verifying personal information from all visitors—even to access the other two-thirds of material that is not adult content.   

In 1997, the Supreme Court struck down a federal online age-verification law in Reno v. American Civil Liberties Union. In that case the court ruled that many elements of the Communications Decency Act violated the First Amendment, including part of the law making it a crime for anyone to engage in online speech that is "indecent" or "patently offensive" if the speech could be viewed by a minor. Like HB 1181, that law would have resulted in many users being unable to view constitutionally protected speech, as many websites would have had to implement age verification, while others would have been forced to shut down.  

In Reno and in subsequent cases, the Supreme Court ruled that laws that burden adults’ access to lawful speech are subjected to the highest level of review under the First Amendment, known as strict scrutiny. This level of scrutiny requires a law to be very narrowly tailored or the least speech-restrictive means available to the government.  

That all changed with the Supreme Court’s decision today.  

The Court now says that laws that burden adults’ access to sexual materials that are obscene to minors are subject to less-searching First Amendment review, known as intermediate scrutiny. And under that lower standard, the Texas law does not violate the First Amendment. The Court did not have to respond to arguments that there are less speech-restrictive ways of reaching the same goal—for example, encouraging parents to install content-filtering software on their children’s devices.

The court reached this decision by incorrectly assuming that online age verification is functionally equivalent to flashing an ID at a brick-and-mortar store. As we explained in our amicus brief, this ignores the many ways in which verifying age online is significantly more burdensome and invasive than doing so in person. As we and many others have previously explained, unlike with in-person age-checks, the only viable way for a website to comply with an age verification requirement is to require all users to upload and submit—not just momentarily display—a data-rich government-issued ID or other document with personal identifying information.  

This leads to a host of serious anonymity, privacy, and security concerns—all of which the majority failed to address. A person who submits identifying information online can never be sure if websites will keep that information or how that information might be used or disclosed. This leaves users highly vulnerable to data breaches and other security harms. Age verification also undermines anonymous internet browsing, even though courts have consistently ruled that anonymity is an aspect of the freedom of speech protected by the First Amendment.    

This Supreme Court broke a fundamental agreement between internet users and the state that has existed since its inception

The Court sidestepped its previous online age verification decisions by claiming the internet has changed too much to follow the precedent from Reno that requires these laws to survive strict scrutiny. Writing for the minority, Justice Kagan rejected what she called “the majority’s claim—again mistaken—that the internet has changed too much to follow our precedents’ lead.”

But the majority argues that past precedent does not account for the dramatic expansion of the internet since the 1990s, which has led to easier and greater internet access and larger amounts of content available to teens online. The majority’s opinion entirely fails to address the obvious corollary: the internet’s expansion also has benefited adults. Age verification requirements now affect exponentially more adults than they did in the 1990s and burden vastly more constitutionally protected online speech. The majority's argument actually demonstrates that the burdens on adult speech have grown dramatically larger because of technological changes, yet the Court bizarrely interprets this expansion as justification for weaker constitutional protection. 

What It Means Going Forward 

This Supreme Court broke a fundamental agreement between internet users and the state that has existed since its inception: the government will not stand in the way of people accessing First Amendment-protected material. There is no question that multiple states will now introduce similar laws to Texas. Two dozen already have, though they are not all in effect. At least three of those states have no limit on the percentage of material required before the law applies—a sweeping restriction on every site that contains any material that the state believes the law includes. These laws will force U.S.-based adult websites to implement age-verification or block users in those states, as many have in the past when similar laws were in effect.  

Research has found that, rather than submit to verification, people will choose a variety of other paths: using VPNs to make it appear that they are outside of the state, or accessing similar sites that don’t comply with the law, often because the site is operating in a different country. While many users will simply not access the content as a result, others may accept the risk, at their peril.

We expect some states to push the envelope in terms of what content they consider “harmful to minors,” and to expand the type of websites that are covered by these laws, either through updated language or threats of litigation. Even if these attacks are struck down, operators of sites that involve sexual content of any type may be under threat, especially if that information is politically divisive. We worry that the point of some of these laws will be to deter queer folks and others from accessing lawful speech and finding community online by requiring them to identify themselves. We will continue to fight to protect against the disclosure of this critical information and for people to maintain their anonymity. 

EFF Will Continue to Fight for All Users’ Free Expression and Privacy 

That said, the ruling does not give states or Congress the green light to impose age-verification regulations on the broader internet. The majority’s decision rests on the fact that minors do not have a First Amendment right to access sexual material that would be obscene to them. In short, adults have a First Amendment right to access those sexual materials, while minors do not. Although it was wrong, the majority’s opinion ruled that because Texas is blocking minors from speech they have no constitutional right to access, the age-verification requirement only incidentally burdens adults’ First Amendment rights.

But the same rationale does not apply to general-audience sites and services, including social media. Minors and adults have coextensive rights to both speak and access the speech of other users on these sites because the vast majority of the speech is not sexual materials that would be obscene to minors. Lawmakers should be careful not to interpret this ruling to mean that broader restrictions on minors’ First Amendment rights, like those included in the Kids Online Safety Act, would be deemed constitutional.  

Free Speech Coalition v. Paxton will have an effect on nearly every U.S. adult internet user for the foreseeable future. It marks a worrying shift in the ways that governments can restrict access to speech online. But that only means we must work harder than ever to protect privacy, security, and free speech as central tenets of the internet.  

Aaron Mackey

Georgia Court Rules for Transparency over Private Police Foundation

6 days 10 hours ago

A Georgia court has decided that private non-profit Atlanta Police Foundation (APF) must comply with public records requests under the Georgia Open Records Act for some of its functions on behalf of the Atlanta Police Department. This is a major win for transparency in the state. 

The lawsuit was brought last year by the Atlanta Community Press Collective (ACPC) and Electronic Frontier Alliance member Lucy Parsons Labs (LPL). It concerns the APF's refusal to disclose records about its role as the leaseholder and manager of the site of so-called Cop City, the Atlanta Public Safety Training Center at the heart of a years-long battle that pitted local social and environmental movements against the APF. We've previously written about how APF and similar groups fund police surveillance technology, and how the Atlanta Police Department spied on the social media of activists opposed to Cop City.

This is a big win for transparency and for local communities who want to maintain their right to know what public agencies are doing. 

Police foundations often provide resources to police departments that help them avoid public oversight, and the Atlanta Police Foundation leads the way with its maintenance of the Loudermilk Video Integration Center and its role in Cop City, which will be used by public agencies including the Atlanta Police Department and other law enforcement agencies.

ACPC and LPL were represented by attorneys Joy Ramsingh, Luke Andrews, and Samantha Hamilton, who had won the release of some materials this past December. The plaintiffs had earlier been represented by the University of Georgia School of Law First Amendment Clinic.

The win comes at just the right time. Last summer, the Georgia Supreme Court ruled that private contractors working for public entities are subject to open records laws. The Georgia state legislature then passed a bill to make it harder to file public records requests against private entities. The Atlanta Police Foundation still has time to appeal this month's ruling, but failing that, it will have to begin complying with public records requests by the beginning of July.

We hope that this will help ensure transparency and accountability when government agencies farm out public functions to private entities, so that local activists and journalists will be able to uncover materials that should be available to the general public. 

José Martinez

Two Courts Rule On Generative AI and Fair Use — One Gets It Right

1 week ago

Things are speeding up in generative AI legal cases, with two judicial opinions just out on an issue that will shape the future of generative AI: whether training gen-AI models on copyrighted works is fair use. One gets it spot on; the other, not so much, but fortunately in a way that future courts can and should discount.

The core question in both cases was whether using copyrighted works to train Large Language Models (LLMs) used in AI chatbots is a lawful fair use. Under the US Copyright Act, answering that question requires courts to consider:

  1. whether the use was transformative;
  2. the nature of the works (Are they more creative than factual? Long since published?);
  3. how much of the original was used; and
  4. the harm to the market for the original work.

In both cases, the judges focused on factors (1) and (4).

The right approach

In Bartz v. Anthropic, three authors sued Anthropic for using their books to train its Claude chatbot. In his order deciding parts of the case, Judge William Alsup confirmed what EFF has said for years: fair use protects the use of copyrighted works for training because, among other things, training gen-AI is “transformative—spectacularly so” and any alleged harm to the market for the original is pure speculation. Just as copying books or images to create search engines is fair, the court held, copying books to create a new, “transformative” LLM and related technologies is also protected:

[U]sing copyrighted works to train LLMs to generate new text was quintessentially transformative. Like any reader aspiring to be a writer, Anthropic’s LLMs trained upon works not to race ahead and replicate or supplant them—but to turn a hard corner and create something different. If this training process reasonably required making copies within the LLM or otherwise, those copies were engaged in a transformative use.

Importantly, Bartz rejected the copyright holders’ attempts to claim that any model capable of generating new written material that might compete with existing works by emulating their “sweeping themes,” “substantive points,” or “grammar, composition, and style” was an infringement machine. As the court rightly recognized, building gen-AI models that create new works is beyond “anything that any copyright owner rightly could expect to control.”

There’s a lot more to like about the Bartz ruling, but just as we were digesting it Kadrey v. Meta Platforms came out. Sadly, this decision bungles the fair use analysis.

A fumble on fair use

Kadrey is another suit by authors against the developer of an AI model, in this case Meta and its ‘Llama’ large language model. The authors in Kadrey asked the court to rule that fair use did not apply.

Much of the Kadrey ruling by Judge Vince Chhabria is dicta—meaning, the opinion spends many paragraphs on what it thinks could justify ruling in favor of the author plaintiffs, if only they had managed to present different facts (rather than pure speculation). The court then rules in Meta’s favor because the plaintiffs only offered speculation. 

But it makes a number of errors along the way to the right outcome. At the top, the ruling broadly proclaims that training AI without buying a license to use each and every piece of copyrighted training material will be “illegal” in “most cases.” The court asserted that fair use usually won’t apply to AI training uses even though training is a “highly transformative” process, because of hypothetical “market dilution” scenarios in which competition from AI-generated works could reduce the value of the books used to train the AI model.

That theory, in turn, depends on three mistaken premises. First, that the most important factor for determining fair use is whether the use might cause market harm. That’s not correct. Since its seminal 1994 opinion in Campbell v. Acuff-Rose, the Supreme Court has been very clear that no single factor controls the fair use analysis.

Second, that an AI developer would typically seek to train a model entirely on a certain type of work, and then use that model to generate new works in the exact same genre, which would then compete with the works on which it was trained, such that the market for the original works is harmed. As the Kadrey ruling notes, there was no evidence that Llama was intended to do, or does, anything like that, nor will most LLMs, for the exact reasons discussed in Bartz.

Third, as a matter of law, copyright doesn't prevent “market dilution” unless the new works are otherwise infringing. In fact, the whole purpose of copyright is to be an engine for new expression. If that new expression competes with existing works, that’s a feature, not a bug.

Gen-AI is spurring the kind of tech panics we’ve seen before; then, as now, thoughtful fair use opinions helped ensure that copyright law served innovation and creativity. Gen-AI does raise a host of other serious concerns about fair labor practices and misinformation, but copyright wasn’t designed to address those problems. Trying to force copyright law to play those roles only hurts important and legal uses of this technology.

In keeping with that tradition, courts deciding fair use in other AI copyright cases should look to Bartz, not Kadrey.

Tori Noble

Ahead of Budapest Pride, EFF and 46 Organizations Call on European Commission to Defend Fundamental Rights in Hungary

1 week ago

This week, EFF joined EDRi and nearly 50 civil society organizations urging the European Commission’s President Ursula von der Leyen, Executive Vice President Henna Virkkunen, and Commissioners Michael McGrath and Hadja Lahbib to take immediate action and defend human rights in Hungary.

With Budapest Pride just two days away, Hungary has criminalized Pride marches and is planning to deploy real-time facial recognition technology to identify those participating in the event. This is a flagrant violation of fundamental rights, particularly the rights to free expression and assembly.

On April 15, a new amendment package went into effect in Hungary which authorizes the use of real-time facial recognition to identify protesters at ‘banned protests’ like LGBTQ+ events, and includes harsh penalties like excessive fines and imprisonment. This is prohibited by the EU Artificial Intelligence (AI) Act, which does not permit the use of real-time face recognition for these purposes.

This came on the back of members of Hungary’s Parliament rushing through three amendments in March to ban and criminalize Pride marches and their organizers, and permit the use of real-time facial recognition technologies for the identification of protestors. These amendments were passed without public consultation and are in express violation of the EU AI Act and Charter of Fundamental Rights. In response, civil society organizations urged the European Commission to put interim measures in place to rectify the violation of fundamental rights and values. The Commission has yet to respond—a real cause for concern.

This is an attack on LGBTQ+ individuals, as well as an attack on the rights of all people in Hungary. The letter urges the European Commission to take the following actions:

  • Open an infringement procedure against any new violations of EU law, in particular the violation of Article 5 of the AI Act
  • Adopt interim measures on the ongoing infringement against Hungary’s 2021 anti-LGBT law, which is used as a legal basis for the ban on LGBTQIA+-related public assemblies, including Budapest Pride.

There's no question that, when EU law is at stake, the European Commission has a responsibility to protect EU fundamental rights, including the rights of LGBTQ+ individuals in Hungary and across the Union. This includes ensuring that those organizing and marching at Pride in Budapest are safe and able to peacefully assemble and protest. If the EU Commission does not urgently act to ensure these rights, it risks hollowing out the values that the EU is built on.

Read our full letter to the Commission here.

Paige Collings

How Cops Can Get Your Private Online Data

1 week ago

Can the cops get your online data? In short, yes. A variety of US federal and state laws give law enforcement the power to obtain information that you provided to online services. But there are steps you, as a user and/or a service provider, can take to improve online privacy.

Law enforcement demanding access to your private online data goes back to the beginning of the internet. In fact, one of EFF’s first cases, Steve Jackson Games v. Secret Service, exemplified the now all-too-familiar story where unfounded claims about illegal behavior resulted in overbroad seizures of user messages. But it’s not the ’90s anymore; the internet has become an integral part of everyone’s life. Everyone now relies on organizations big and small to steward our data, from huge service providers like Google, Meta, or your ISP, to hobbyists hosting a blog or Mastodon server.

There is no “cloud,” just someone else’s computer—and when the cops come knocking on their door, these hosts need to be willing to stand up for privacy, and know how to do so to the fullest extent under the law. These legal limits are also important for users to know, not only to mitigate risks in their security plan when choosing where to share data, but to understand whether these hosts are going to bat for them. Taking action together, service hosts and users can curb law enforcement from getting more data than it is allowed, protecting not just themselves but targeted populations, present and future.

This is distinct from law enforcement’s methods of collecting public data, such as the information now being collected on student visa applicants. Cops may use social media monitoring tools and sock puppet accounts to collect what you share publicly, or even within “private” communities. Police may also obtain the contents of communication in other ways that do not require court authorization, such as monitoring network traffic passively to catch metadata and possibly using advanced tools to partially reveal encrypted information. They can even outright buy information from online data brokers. Unfortunately there are few restrictions or oversight for these practices—something EFF is fighting to change.

Below however is a general breakdown of the legal processes used by US law enforcement for accessing private data, and what categories of private data these processes can disclose. Because this is a generalized summary, it is neither exhaustive nor should be considered legal advice. Please seek legal help if you have specific data privacy and security needs.

  • Subscriber information
    • Process used: Subpoena
    • Challenge prior to disclosure? Yes
    • Proof needed: Relevant to an investigation
  • Non-content information, metadata
    • Process used: Court order; sometimes subpoena
    • Challenge prior to disclosure? Yes
    • Proof needed: Specific and articulable facts that info is relevant to an investigation
  • Stored content
    • Process used: Search warrant
    • Challenge prior to disclosure? No
    • Proof needed: Probable cause that info will provide evidence of a crime
  • Content in transit
    • Process used: Super warrant
    • Challenge prior to disclosure? No
    • Proof needed: Probable cause plus exhaustion and minimization

Types of Data that Can be Collected

The laws protecting private data online generally follow a pattern: the more sensitive the personal data is, the greater factual and legal burden police have to meet before they can obtain it. Although this is not exhaustive, here are a few categories of data you may be sharing with services, and why police might want to obtain it.

    • Subscriber Data: Information you provide in order to use the service. Think about ID or payment information, IP address location, email, phone number, and other information you provided when signing up. 
      • Law enforcement can learn who controls an anonymous account, and find other service providers to gather information from.
    • Non-content data, or "metadata": This is saved information about your interactions on the service; like when you used the service, for how long, and with whom. Analogous to what a postal worker can infer from a sealed letter with addressing information.
      • Law enforcement can use this information to infer a social graph, login history, and other information about a suspect’s behavior.
    • Stored content: This is the actual content you are sending and receiving, like your direct message history or saved drafts. This can cover any private information your service provider can access. 
      • This most sensitive data is collected to reveal criminal evidence. Overly broad requests also allow for retroactive searches, information on other users, and can take information out of its original context. 
    • Content in transit: This is the content of your communications as it is being communicated. This real-time access may also collect info which isn’t typically stored by a provider, like your voice during a phone call.
      • Law enforcement can compel providers to wiretap their own services for a particular user—which may also implicate the privacy of users they interact with.
    Legal Processes Used to Get Your Data

    When US law enforcement has identified a service that likely has this data, they have a few tools to legally compel that service to hand it over and prevent users from knowing information is being collected.

    Subpoena

    Subpoenas are demands from a prosecutor, law enforcement, or a grand jury that do not require the approval of a judge before being sent to a service. The only restriction is that the demand be relevant to an investigation. Often the only time a court reviews a subpoena is when a service or user challenges it in court.

    Due to the lack of direct court oversight in most cases, subpoenas are prone to abuse and overreach. Providers should scrutinize such requests carefully with a lawyer and push back before disclosure, particularly when law enforcement tries to use subpoenas to obtain more private data, such as the contents of communications.

    Court Order

    This is a demand similar to a subpoena, but it usually arises under a specific statute that requires a court to authorize the demand. Under the Stored Communications Act, for example, a court can issue an order for non-content information if police provide specific and articulable facts showing that the information being sought is relevant to an investigation.

    Like subpoenas, providers can usually challenge court orders before disclosure and inform the user(s) of the request, subject to law enforcement obtaining a gag order (more on this below). 

    Search Warrant

    A warrant is a demand issued by a judge to permit police to search specific places or persons. To obtain a warrant, police must submit an affidavit (a written statement made under oath) establishing that there is a fair probability (or “probable cause”) that evidence of a crime will be found at a particular place or on a particular person. 

    Typically services cannot challenge a warrant before disclosure, as these requests are already approved by a magistrate. Sometimes police request that judges also enter gag orders against the target of the warrant that prevent hosts from informing the public or the user that the warrant exists.

    Super Warrant

    Police seeking to intercept communications as they occur generally face the highest legal burden. Usually the affidavit needs to not only establish probable cause, but also make clear that other investigation methods are not viable (exhaustion) and that the collection avoids capturing irrelevant data (minimization). 

    Some laws also require high-level approval within law enforcement, such as leadership, to approve the request. Some laws also limit the types of crimes that law enforcement may use wiretaps in while they are investigating. The laws may also require law enforcement to periodically report back to the court about the wiretap, including whether they are minimizing collection of non-relevant communications. 

    Generally these demands cannot be challenged while wiretapping is occurring, and providers are prohibited from telling the targets about the wiretap. But some laws require disclosure to targets and those who were communicating with them after the wiretap has ended. 

    Gag orders

    Many of the legal authorities described above also permit law enforcement to simultaneously prohibit the service from telling the target of the legal process or the general public that the surveillance is occurring. These non-disclosure orders are prone to abuse and EFF has repeatedly fought them because they violate the First Amendment and prohibit public understanding about the breadth of law enforcement surveillance.

    How Services Can (and Should) Protect You

    This process isn't always clean-cut, and service providers must ultimately comply with lawful demands for users’ data, even when they challenge them and courts uphold the government’s demands.

    Service providers outside the US also aren’t totally in the clear, as they must often comply with US law enforcement demands. This is usually because they either have a legal presence in the US or because they can be compelled through mutual legal assistance treaties and other international legal mechanisms. 

    However, services can do a lot by following a few best practices to defend user privacy, thus limiting the impact of these requests and in some cases making their service a less appealing door for the cops to knock on.

    Put Cops through the Process

    Paramount is the service provider's willingness to stand up for their users. Carving out exceptions or volunteering information outside of the legal framework erodes everyone's right to privacy. Even in extenuating and urgent circumstances, the responsibility is not on you to decide what to share, but on the legal process. 

    Smaller hosts, like those of decentralized services, might be intimidated by these requests, but consulting legal counsel will ensure requests are challenged when necessary. Organizations like EFF can sometimes provide legal help directly or connect service providers with alternative counsel.

    Challenge Bad Requests

    It’s not uncommon for law enforcement to overreach or make burdensome requests. Before offering information, services can push back on an improper demand informally, and then continue to do so in court. If the demand is overly broad, violates a user's First or Fourth Amendment rights, or has other legal defects, a court may rule that it is invalid and prevent disclosure of the user’s information.

    Even if a court doesn’t invalidate the legal demand entirely, pushing back informally or in court can limit how much personal information is disclosed and mitigate privacy impacts.

    Provide Notice 

    Unless otherwise restricted, service providers should give notice about requests and disclosures as soon as they can. This notice is vital for users to seek legal support and prepare a defense.

    Be Clear With Users 

    It is important for users to understand if a host is committed to pushing back on data requests to the full extent permitted by law. Privacy policies with fuzzy thresholds like “when deemed appropriate” or “when requested” make it ambiguous whether a user’s right to privacy will be respected. The best practices for providers not only require clarity and a willingness to push back on law enforcement demands, but also a commitment to be transparent with the public about law enforcement’s demands, for example with regular transparency reports breaking down the countries and states making these data requests.

    Social media services should also consider clear guidelines for finding and removing sock puppet accounts operated by law enforcement on the platform, as these serve as a backdoor to government surveillance.

    Minimize Data Collection 

    You can't be compelled to disclose data you don’t have. If you collect lots of user data, law enforcement will eventually come demanding it. Operating a service typically requires some collection of user data, even if it’s just login information. But the problem is when information starts to be collected beyond what is strictly necessary. 

    This excess collection can be seen as convenient or useful for running the service, or often as potentially valuable like behavioral tracking used for advertising. However, the more that’s collected, the more the service becomes a target for both legal demands and illegal data breaches. 

    For data that enables desirable features for the user, design choices can make privacy the default and give users additional (preferably opt-in) sharing choices. 

    Shorter Retention

    As another minimization strategy, hosts should regularly and automatically delete information when it is no longer necessary. For example, deleting logs of user activity can limit the scope of law enforcement’s retrospective surveillance—maybe limiting a court order to the last 30 days instead of the lifetime of the account. 
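    As a concrete illustration, here is a minimal sketch of what an automated retention job could look like, assuming a hypothetical SQLite table named activity_logs with an ISO-formatted created_at column; the storage layer and schedule will differ for every service.

```python
# Minimal sketch of an automated retention job. Assumes a hypothetical
# SQLite table `activity_logs` with an ISO-8601 `created_at` column;
# adapt the storage layer and schedule to your own service.
import sqlite3
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 30  # keep only what the service actually needs

def purge_old_logs(db_path: str = "service.db") -> int:
    """Delete log rows older than the retention window; return rows removed."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)
    with sqlite3.connect(db_path) as conn:
        cur = conn.execute(
            "DELETE FROM activity_logs WHERE created_at < ?",
            (cutoff.isoformat(),),
        )
        return cur.rowcount

if __name__ == "__main__":
    # Run this from cron or another scheduler so deletion happens automatically.
    print(f"Purged {purge_old_logs()} log entries older than {RETENTION_DAYS} days")
```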

    Again, design choices, like giving users the ability to send disappearing messages and deleting them from the server once they’re downloaded, can also further limit the impact of future data requests. Furthermore, these design choices should have privacy-preserving defaults.

    Avoid Data Sharing 

    Depending on the service being hosted, there may be some need to rely on another service to make everything work for users. Third-party login or ad services are common examples, with some amount of tracking built in. Information shared with these third parties should also be minimized or avoided, as they may not have a strict commitment to user privacy. Most notoriously, data brokers who sell advertisement data can provide another legal work-around for law enforcement by letting them simply buy collected data across many apps. This extends to decisions about what information is made public by default, and thus accessible to many third parties, and whether that is clear to users.

    (True) End-to-End Encryption

    Now that HTTPS is actually everywhere, most traffic between a service and a user can be easily secured—for free. This limits what onlookers can collect on users of the service, since messages between the two are in a secure “envelope.” However, this doesn’t change the fact that the service is opening this envelope before passing it along to other users, or returning it to the same user. Each opened message is more information the service has to defend.

    Better still is end-to-end encryption (e2ee), which simply means providing users with secure envelopes that even the service provider cannot open. This is how a featureful messaging app like Signal can respond to requests with only three pieces of information: the account identifier (phone number), the date of creation, and the last date of access. Many services should follow suit and limit access through encryption.

    Note that while e2ee has become a popular marketing term, it is simply inaccurate for describing any encryption use designed to be broken or circumvented. Implementing “encryption backdoors” to break encryption when desired, or simply collecting information before or after the envelope is sealed on a user’s device (“client-side scanning”) is antithetical to encryption. Finally, note that e2ee does not protect against law enforcement obtaining the contents of communications should they gain access to any device used in the conversation, or if message history is stored on the server unencrypted.
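    For illustration only, here is a minimal sketch of the “sealed envelope” idea using the PyNaCl library. This is not how Signal or any other particular service implements e2ee (real messaging protocols add forward secrecy, identity verification, and much more), but it shows why a provider that only relays ciphertext has nothing readable to hand over.

```python
# Minimal sketch of end-to-end encryption using PyNaCl (pip install pynacl).
# Keys are generated on the users' own devices; the service only relays
# ciphertext it cannot open. Not a substitute for a real messaging protocol.
from nacl.public import PrivateKey, Box

# Each user generates a key pair on their own device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice seals a message for Bob with her private key and Bob's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"meet at the usual place")

# The server stores or forwards only `ciphertext`. Without a private key,
# there is nothing meaningful to disclose in response to a legal demand.

# Bob opens the envelope on his device with his private key and Alice's public key.
receiving_box = Box(bob_key, alice_key.public_key)
assert receiving_box.decrypt(ciphertext) == b"meet at the usual place"
```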

    Protecting Yourself and Your Community

    As outlined, often the security of your personal data depends on the service providers you choose to use. But as a user you do still have some options. EFF’s Surveillance Self-Defense is a maintained resource with many detailed steps you can take. In short, you need to assess your risks, limit the services you use to those you can trust (as much as you can), improve settings, and when all else fails, accessorize with tools that prevent data sharing in the first place—like EFF’s Privacy Badger browser extension.

    Remember that privacy is a team sport. It’s not enough to make these changes as an individual; it’s just as important to share and educate others, and to fight for better digital privacy policy at all levels of governance. Learn, get organized, and take action.

     

    Rory Mir

    California’s Corporate Cover-Up Act Is a Privacy Nightmare

    1 week 1 day ago

    California lawmakers are pushing one of the most dangerous privacy rollbacks we’ve seen in years. S.B. 690, what we’re calling the Corporate Cover-Up Act, is a brazen attempt to let corporations spy on us in secret, gutting long-standing protections without a shred of accountability.

    The Corporate Cover-Up Act is a massive carve-out that would gut California’s Invasion of Privacy Act (CIPA) and give Big Tech and data brokers a green light to spy on us without consent for just about any reason. If passed, S.B. 690 would let companies secretly record your clicks, calls, and behavior online—then share or sell that data with whomever they’d like, all under the banner of a “commercial business purpose.”

    Simply put, the Corporate Cover-Up Act (S.B. 690) is a blatant attack on digital privacy, and is written to eviscerate long-standing privacy laws and legal safeguards Californians rely on. If passed, it would:

    • Gut California’s Invasion of Privacy Act (CIPA)—a law that protects us from being secretly recorded or monitored
    • Legalize corporate wiretaps, allowing companies to intercept real-time clicks, calls, and communications
    • Authorize pen registers and trap-and-trace tools, which track who you talk to, when, and how—without consent
    • Let companies use all of this surveillance data for “commercial business purposes”—with zero notice and no legal consequences

    This isn’t a small fix. It’s a sweeping rollback of hard-won privacy protections—the kind that helped expose serious abuses by companies like Facebook, Google, and Oracle.

    TAKE ACTION

    You Can't Opt Out of Surveillance You Don't Know Is Happening

    Proponents of the Corporate Cover-Up Act claim it’s just a “clarification” to align CIPA with the California Consumer Privacy Act (CCPA). That’s misleading. The truth is, CIPA and CCPA don’t conflict. CIPA stops secret surveillance. The CCPA governs how data is used after it’s collected, such as through the right to opt out of your data being shared.

    You can't opt out of being spied on if you’re never told it’s happening in the first place. Once companies collect your data under S.B. 690, they can:

    • Sell it to data brokers
    • Share it with immigration enforcement or other government agencies
    • Use it against abortion seekers, LGBTQ+ people, workers, and protesters, and
    • Retain it indefinitely for profiling

    …with no consent, no transparency, and no recourse.

    The Communities Most at Risk

    This bill isn’t just a tech policy misstep. It’s a civil rights disaster. If passed, S.B. 690 will put the most vulnerable people in California directly in harm’s way:

    • Immigrants, who may be tracked and targeted by ICE
    • LGBTQ+ individuals, who could be outed or monitored without their knowledge
    • Abortion seekers, who could have location or communications data used against them
    • Protesters and workers, who rely on private conversations to organize safely

    The message this bill sends is clear: corporate profits come before your privacy.

    We Must Act Now

    S.B. 690 isn’t just a bad tech bill—it’s a dangerous precedent. It tells every corporation: Go ahead and spy on your consumers—we’ve got your back.

    Californians deserve better.

    If you live in California, now is the time to call your lawmakers and demand they vote NO on the Corporate Cover-Up Act.

    TAKE ACTION

    Spread the word, amplify the message, and help stop this attack on privacy before it becomes law.

    Rindala Alajaji

    FBI Warning on IoT Devices: How to Tell If You Are Impacted

    1 week 1 day ago

    On June 5th, the FBI released a PSA titled “Home Internet Connected Devices Facilitate Criminal Activity.” This PSA largely references devices impacted by the latest generation of BADBOX malware (as named by HUMAN’s Satori Threat Intelligence and Research team) that EFF researchers also encountered primarily on Android TV set-top boxes. However, the malware has impacted tablets, digital projectors, aftermarket vehicle infotainment units, picture frames, and other types of IoT devices. 

    One goal of this malware is to create a network proxy on the devices of unsuspecting buyers, potentially making them hubs for various criminal activities and putting the owners of these devices at risk from authorities. This malware is particularly insidious, coming pre-installed out of the box from major online retailers such as Amazon and AliExpress. If you search “Android TV Box” on Amazon right now, many of the same models that have been impacted are still listed for sale by sellers of opaque origin. The continued sale of these devices even led us to write an open letter to the FTC, urging them to take action on resellers.

    The FBI listed some indicators of compromise (IoCs) in the PSA so consumers can tell if they were impacted. But the average person isn’t running network detection infrastructure in their home, and cannot be expected to know which IoCs indicate that their devices generate “unexplained or suspicious Internet traffic.” Here, we attempt to give more comprehensive background information about these IoCs. If you find any of these on devices you own, we encourage you to follow through by contacting the FBI's Internet Crime Complaint Center (IC3) at www.ic3.gov.

    The FBI lists these IoCs:

    • The presence of suspicious marketplaces where apps are downloaded.
    • Requiring Google Play Protect settings to be disabled.
    • Generic TV streaming devices advertised as unlocked or capable of accessing free content.
    • IoT devices advertised from unrecognizable brands.
    • Android devices that are not Play Protect certified.
    • Unexplained or suspicious Internet traffic.

    The following adds context to the above, as well as some additional IoCs we have seen in our research.

    Play Protect Certified

    “Android devices that are not Play Protect certified” refers to any device brand or partner not listed here: https://www.android.com/certified/partners/. Google subjects devices to compatibility and security tests as criteria for inclusion in the Play Protect program, though those criteria are not made completely transparent outside of Google. This list does change, as we saw when the tablet brand we researched was de-listed. This IoC also encompasses “devices advertised from unrecognizable brands.” The list includes international brands and partners as well.

    Outdated Operating Systems

    Another issue we saw was severely outdated Android versions. For context, Android 16 has just started rolling out, yet Android 9-12 appeared to be the most common versions in routine use. This could be a result of “copied homework” from previous legitimate Android builds. These devices often come with their own update software, which can present a problem on its own by delivering second-stage payloads for device infection in addition to whatever it downloads and updates on the device.

    You can check which version of Android you have by going to Settings and searching “Android version”.
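    If you are comfortable with a command line, the same details, plus the brand and model useful for looking your device up, can be read over the Android Debug Bridge. A minimal sketch, assuming adb is installed and USB debugging is enabled on the device:

```python
# Minimal sketch: read identifying properties from a connected Android device
# via adb. Assumes adb is installed and USB debugging is enabled on the device.
import subprocess

PROPS = {
    "ro.build.version.release": "Android version",
    "ro.build.version.security_patch": "Last security patch",
    "ro.product.brand": "Brand",
    "ro.product.model": "Model",
}

for prop, label in PROPS.items():
    result = subprocess.run(
        ["adb", "shell", "getprop", prop],
        capture_output=True, text=True, check=True,
    )
    print(f"{label}: {result.stdout.strip()}")
```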

    Android App Marketplaces

    We’ve previously argued that the availability of different app marketplaces leads to greater consumer choice, where users can choose alternatives even more secure than the Google Play Store. While this is true, the FBI’s warning about suspicious marketplaces is also prudent. Avoiding “downloading apps from unofficial marketplaces advertising free streaming content” is sound (if somewhat vague) advice for set-top boxes, yet this recommendation comes without further guidelines on how to identify which marketplaces might be suspicious on other Android IoT platforms. Best practice is to investigate any app store used on an Android device separately, and to be aware that a suspicious Android device can come with preloaded app stores that mimic the functionality of legitimate ones but also contain unwanted or malicious code.

    Models Listed from the Badbox Report

    We also recommend looking up device names and models that were listed in the BADBOX 2.0 report. We investigated the T95 models along with other independent researchers who initially found this malware present. A lot of model names can be grouped into families sharing the same letters but different numbers. These operations iterate fast, but their naming conventions are often lazy in this respect. If you're not sure what model you own, you can usually find it listed on a sticker somewhere on the device. If that fails, you may be able to find it by pulling up the original receipt or looking through your order history.

    A Note from Satori Researchers:

    “Below is a list of device models known to be targeted by the threat actors. Not all devices of a given model are necessarily infected, but Satori researchers are confident that infections are present on some devices of the below device models:”

    List of Potentially Impacted Models

    Broader Picture: The Digital Divide

    Unfortunately, the only way to be sure that an Android device from an unknown brand is safe is not to buy it in the first place. Though initiatives like the U.S. Cyber Trust Mark are welcome developments intended to encourage demand-side trust in vetted products, recent shake-ups in federal regulatory bodies mean the future of this assurance mark is unknown. This means that those who face budget constraints and have trouble affording top-tier digital products for streaming content or other connected purposes may rely on cheaper imitation products that are not only rife with vulnerabilities, but may even come preloaded with malware out of the box. This puts these people disproportionately at legal risk when their home internet connections are used by these devices as proxies for nefarious or illegal purposes.

    Cybersecurity, and trust that the products we buy won’t be used against us, is essential: not just for those who can afford name-brand digital devices, but for everyone. While we welcome the IoCs that the FBI has listed in its PSA, more must be done to protect consumers from the myriad dangers their devices expose them to.

    Alexis Hancock

    Why Are Hundreds of Data Brokers Not Registering with States?

    1 week 2 days ago

    Written in collaboration with Privacy Rights Clearinghouse

    Hundreds of data brokers have not registered with state consumer protection agencies. These findings come as more states are passing data broker transparency laws that require brokers to provide information about their business and, in some cases, give consumers an easy way to opt out.

    In recent years, California, Texas, Oregon, and Vermont have passed data broker registration laws that require brokers to identify themselves to state regulators and the public. A new analysis by Privacy Rights Clearinghouse (PRC) and the Electronic Frontier Foundation (EFF) reveals that many data brokers registered in one state aren’t registered in others.

    Of the companies registered in at least one state, 291 did not register in California, 524 did not register in Texas, 475 did not register in Oregon, and 309 did not register in Vermont. These numbers come from data analyzed in early April 2025.
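    To give a sense of the kind of cross-referencing involved, here is a rough sketch assuming hypothetical CSV exports of each state’s registry with a “company” column. It is an illustration only, not the methodology PRC and EFF used; comparing real registries also requires normalizing company names and accounting for definitional differences between states.

```python
# Rough sketch of cross-referencing state data broker registries. Assumes
# hypothetical CSV exports (california.csv, texas.csv, ...) each containing
# a "company" column; this is not the actual PRC/EFF methodology.
import csv

def load_registry(path: str) -> set[str]:
    with open(path, newline="", encoding="utf-8") as f:
        return {row["company"].strip().lower() for row in csv.DictReader(f)}

states = {
    "California": load_registry("california.csv"),
    "Texas": load_registry("texas.csv"),
    "Oregon": load_registry("oregon.csv"),
    "Vermont": load_registry("vermont.csv"),
}

# Every broker registered in at least one state.
all_registered = set().union(*states.values())

for state, registered in states.items():
    missing = all_registered - registered
    print(f"{len(missing)} brokers registered elsewhere but not in {state}")
```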

    PRC and EFF sent letters to state enforcement agencies urging them to investigate these findings. More investigation by states is needed to determine whether these registration discrepancies reflect widespread noncompliance, gaps and definitional differences in the various state laws, or some other explanation.

    New data broker transparency laws are an essential first step to reining in the data broker industry. This is an ecosystem in which your personal data taken from apps and other web services can be bought and sold largely without your knowledge. The data can be highly sensitive like location information, and can be used to target you with ads, discriminate against you, and even enhance government surveillance. The widespread sharing of this data also makes it more susceptible to data breaches. And its easy availability allows personal data to be obtained by bad actors for phishing, harassment, or stalking.

    Consumers need robust deletion mechanisms to remove their data stored and sold by these companies. But the potential registration gaps we identified threaten to undermine such tools. California’s Delete Act will soon provide consumers with an easy tool to delete their data held by brokers—but it can only work if brokers register. California has already brought a handful of enforcement actions against brokers who failed to register under that law, and such compliance efforts are becoming even more critical as deletion mechanisms come online.

    It is important to understand the scope of our analysis.

    This analysis only includes companies that registered in at least one state. It does not capture data brokers that completely disregard state laws by failing to register in any state. A total of 750 data brokers have registered in at least one state. While harder to find, shady data brokers who have failed to register anywhere should remain a primary enforcement target.

    This analysis also does not claim or prove that any of the data brokers we found broke the law. While the definition of “data broker” is similar across states, there are variations that could require a company to register in one state and not another. To take one example, a data broker registered in Texas that only brokers the data of Texas residents would not be legally required to register in California. To take another, a data broker that registered with Vermont in 2020 but has since changed its business model and is no longer a broker would not be required to register in 2025. More detail on variations in data broker laws is outlined in our letters to regulators.

    States should investigate compliance with data broker registration requirements, enforce their laws, and plug any loopholes. Ultimately, consumers deserve protections regardless of where they reside, and Congress should also work to pass baseline federal data broker legislation that minimizes collection and includes strict use and disclosure limits, transparency obligations, and consumer rights.

    Read more here:

    California letter

    Texas Letter

    Oregon Letter

    Vermont Letter

    Spreadsheet of data brokers

    Mario Trujillo

    Major Setback for Intermediary Liability in Brazil: Risks and Blind Spots

    1 week 2 days ago

    This is the third post of a series about internet intermediary liability in Brazil. Our first post gives an overview of Brazil's current internet intermediary liability regime, set out in a law known as "Marco Civil da Internet," the context of its approval in 2014, and the beginning of the Supreme Court's judgment of such regime in November 2024. Our second post provides a bigger picture of the Brazilian context underlying the court's analysis and its most likely final decision. 

    The court’s examination of Marco Civil’s Article 19 began with Justice Dias Toffoli in November last year. We explained here about the cases under trial, the reach of the Supreme Court’s decision, and Article 19’s background related to Marco Civil’s approval in 2014. We also highlighted some aspects and risks of Justice Dias Toffoli’s vote, who considered the intermediary liability regime established in Article 19 unconstitutional.  

    Most of the justices have agreed to find this regime at least partially unconstitutional, but differ on the specifics. Relevant elements of their votes include: 

    • Notice-and-takedown is likely to become the general rule for platforms' liability for third-party content (based on Article 21 of Marco Civil). Justices still have to settle whether this applies to internet applications in general or if some distinctions are relevant, for example, applying only to those that curate or recommend content. Another open question refers to the type of content subject to liability under this rule: votes pointed to unlawful content/acts, manifestly criminal or clearly unlawful content, or opted to focus on crimes. Some justices didn’t explicitly qualify the nature of the restricted content under this rule.   

    • If partially valid, the need for a previous judicial order to hold intermediaries liable for user posts (Article 19 of Marco Civil) remains in force for certain types of content (or certain types of internet applications). For some justices, Article 19 should be the liability regime in the case of crimes against honor, such as defamation. Justice Luís Roberto Barroso also considered this rule should apply for any unlawful acts under civil law. Justice Cristiano Zanin has a different approach. For him, Article 19 should prevail for internet applications that don’t curate, recommend or boost content (what he called “neutral” applications) or when there’s reasonable doubt about whether the content is unlawful.

    • Platforms are considered liable for ads and boosted content that they deliver to users. This was the position held by most of the votes so far. Justices did so either by presuming platforms’ knowledge of the paid content they distribute, holding them strictly liable for paid posts, or by considering the delivery of paid content as platforms’ own act (rather than “third-party” conduct). Justice Dias Toffoli went further, including also non-paid recommended content. Some justices extended this regime to content posted by inauthentic or fake accounts, or when the non-identification of accounts hinders holding the content authors liable for their posts.   

    • Monitoring duty of specific types of harmful and/or criminal content. Most concerning is that different votes establish some kind of active monitoring and likely automated restriction duty for a list of contents, subject to internet applications' liability. Justices have either recognized a “monitoring duty” or considered platforms liable for these types of content regardless of a previous notification. Justices Luís Roberto Barroso, Cristiano Zanin, and Flávio Dino adopt a less problematic systemic flaw approach, by which applications’ liability would not derive from each piece of content individually, but from an analysis of whether platforms employ the proper means to tackle these types of content. The list of contents also varies. In most of the cases they are restricted to criminal offenses, such as crimes against the democratic state, racism, and crimes against children and adolescents; yet they may also include vaguer terms, like “any violence against women,” as in Justice Dias Toffoli’s vote. 

    • Complementary or procedural duties. Justices have also voted to establish complementary or procedural duties. These include providing a notification system that is easily accessible to users, a due process mechanism where users can appeal against content restrictions, and the release of periodic transparency reports. Justice Alexandre de Moraes also specifically mentioned algorithmic transparency measures. 

    • Oversight. Justices also discussed which entity or oversight model should be used to monitor compliance while Congress doesn’t approve a specific regulation. They raised different possibilities, including the National Council of Justice, the General Attorney’s Office, the National Data Protection Authority, a self-regulatory body, or a multistakeholder entity with government, companies, and civil society participation. 

    Three other justices have yet to present their votes to complete the judgment. As we pointed out, the ruling will both decide the individual cases that entered the Supreme Court through appeals and the “general repercussion” issues underlying these individual cases. For addressing such general repercussion issues, the Supreme Court approves a thesis that orients lower court decisions in similar cases. The final thesis will reflect the majority of the court's agreements around the topics we outlined above. 

    Justice Alexandre de Moraes argued that the final thesis should equate the liability regime of social media and private messaging applications to the one applied to traditional media outlets. This disregards important differences between both: even if social media platforms curate content, it involves a massive volume of third-party posts, mainly organized through algorithms. Although such curation reflects business choices, it does not equate to media outlets that directly create or individually purchase specific content from approved independent producers. This is even more complicated with messaging applications, seriously endangering privacy and end-to-end encryption. 

    Justice André Mendonça was the only one so far to preserve the full application of Article 19. His proposed thesis highlighted the necessity of safeguarding privacy, data protection, and the secrecy of communications in messaging applications, among other aspects. It also indicated that judicial takedown orders must provide specific reasoning and be made available to platforms, even if issued within a sealed proceeding. The platform must also have the ability to appeal the takedown order. These are all important points the final ruling should endorse. 

    Risks and Blind Spots 

    We have stressed the many problems entangled with broad notice-and-takedown mandates and expanded content monitoring obligations. Extensively relying on AI-based content moderation and tying it to intermediary liability for user content will likely exacerbate the detrimental effects of these systems’ limitations and flaws. The perils and concerns that grounded Article 19's approval remain valid and should have led to a position of the court preserving its regime.  

    However, given the judgment’s current stage, there are still some minimum safeguards that justices should consider or reinforce to reduce harm.

    It’s crucial to put in place guardrails against the abuse and weaponization of notification mechanisms. At a minimum, platforms shouldn’t be liable following an extrajudicial notification when there’s reasonable doubt concerning the content’s lawfulness. In addition, notification procedures should ensure that notices are sufficiently precise and properly substantiated, indicating the content’s specific location (e.g., a URL) and why the notifier considers it to be illegal. Internet applications must also provide reasoned justification and adequate appeal mechanisms for those who face content restrictions.

    On the other hand, holding intermediaries liable for individual pieces of user content regardless of notification, by massively relying on AI-based content flagging, is a recipe for over censorship. Adopting a systemic flaw approach could minimally mitigate this problem. Moreover, justices should clearly set apart private messaging applications, as mandated content-based restrictions would erode secure and end-to-end encrypted implementations. 

    Finally, we should note that justices generally didn’t distinguish large internet applications from other providers when detailing liability regimes and duties in their votes. This is one major blind spot, as it could significantly impact the feasibility of alternate and decentralized alternatives to Big Tech’s business models, entrenching platform concentration. Similarly, despite criticism of platforms’ business interests in monetizing and capturing user attention, court debates mainly failed to address the pervasive surveillance infrastructure lying underneath Big Tech’s power and abuses.   

    Indeed, while justices have called out Big Tech’s enormous power over the online flow of information – over what’s heard and seen, and by whom – the consequences of this decision can actually deepen this powerful position.

    It’s worth recalling a line from Aaron Swartz in the film “The Internet’s Own Boy,” comparing broadcasting and the internet: “[…] what you see now is not a question of who gets access to the airwaves, it’s a question of who gets control over the ways you find people.” As he puts it, today’s challenge is less about who gets to speak and more about who gets to be heard.

    There’s an undeniable source of power in operating the inner rules and structures by which the information flows within a platform with global reach and millions of users. The crucial interventions must aim at this source of power, putting a stop to behavioral surveillance ads, breaking Big Tech’s gatekeeper dominance, and redistributing the information flow.  

    That’s not to say that we shouldn’t care about how each platform organizes its online environment. We should, and we do. The EU Digital Services Act, for example, established rules in this sense, leaving the traditional liability regime largely intact. Rather than leveraging platforms as users’ speech watchdogs by potentially holding intermediaries liable for each piece of user content, platform accountability efforts should broadly look at platforms’ processes and business choices. Otherwise, we will end up focusing on monitoring users instead of targeting platforms’ abuses. 

    Veridiana Alimonti

    Major Setback for Intermediary Liability in Brazil: How Did We Get Here?

    1 week 2 days ago

    This is the second post of a series about intermediary liability in Brazil. Our first post gives an overview of Brazil's current intermediary liability regime, the context of its approval in 2014, and the beginning of the Supreme Court's analysis of such regime in November 2024. Our third post provides an outlook on justices' votes up until June 23, underscoring risks, mitigation measures, and blind spots of their potential decision.

    The Brazilian Supreme Court has formed a majority to overturn the country’s current online intermediary liability regime. With eight out of eleven justices having presented their opinions, the court has reached enough votes to mostly remove the need for a previous judicial order demanding content takedown to hold digital platforms liable for user posts, which is currently the general rule.  

    The judgment relates to Article 19 of Brazil’s Civil Rights Framework for the Internet (“Marco Civil da Internet,” Law n. 12.965/2014), wherein internet applications can only be held liable for third-party content if they fail to comply with a judicial decision ordering its removal. Article 19 aligns with the Manila Principles and reflects the important understanding that holding platforms liable for user content without a judicial analysis creates strong incentives for enforcement overreach and over censorship of protected speech.  

    Nonetheless, while Justice André Mendonça voted to preserve Article 19’s application, four other justices stated it should prevail only in specific cases, mainly for crimes against honor (such as defamation). The remaining three justices considered that Article 19 offers insufficient protection to constitutional guarantees, such as the integral protection of children and teenagers.  

    The judgment will resume on June 25th, with the three final justices completing the analysis by the plenary of the court. Whereas Article 19’s partial unconstitutionality (or its interpretation “in accordance with” the Constitution) seems to be the position the majority of the court will take, the details of each vote vary, indicating important agreements still to sew up and critical tweaks to make.   

    As we previously noted, the outcome of this ruling can seriously undermine free expression and privacy safeguards if they lead to general content monitoring obligations or broadly expand notice-and-takedown mandates. This trend could negatively shape developments globally in other courts, parliaments, or with respect to executive powers. Sadly, the votes so far have aggravated these concerns.  

    But before we get to them, let's look at some circumstances underlying the Supreme Court's analysis. 

    2014 vs. 2025: The Brazilian Techlash After Marco Civil's Approval 

    How did Article 19 end up (mostly) overturned a decade after Marco Civil’s much-celebrated approval in Brazil back in 2014?   

    In addition to the broader techlash following the impacts of an increasing concentration of power in the digital realm, developments in Brazil have leveraged a harsher approach towards internet intermediaries. Marco Civil became a scapegoat, especially Article 19, within regulatory approaches that largely diminished the importance of the free expression concerns that informed its approval. Rather than viewing the provision as a milestone to be complemented with new legislation, this context has reinforced the view that Article 19 should be left behind. 

     The tougher approach to internet intermediaries gained steam after former President Jair Bolsonaro’s election in 2018 and throughout the legislative debates around draft bill 2630, also known as the “Fake News bill.”  

    Specifically, though not exhaustively, concerns around the spread of disinformation, online-fueled discrimination, and political violence, as well as threats to election integrity, constitute an important piece of this scenario. This includes the use of social media by the far right amid the escalation of acts seeking to undermine the integrity of elections and ultimately overthrow the legitimately elected President Luiz Inácio Lula da Silva in January 2023. Investigations later unveiled that related plans included killing the new president, the vice-president, and Justice Alexandre de Moraes.

    Concerns over children’s and adolescents’ rights and safety are another part of the underlying context. Among other things, a wave of violent threats and actual attacks on schools in early 2023 was fueled by online content. Social media challenges have also led to injuries and deaths of young people.  

    Finally, political reactions to Big Tech’s alignment with far-right politicians and its feuds with Brazilian authorities complete this puzzle. These include reactions to Meta’s policy changes in January 2025 and the Trump administration’s decision to restrict visas for foreign officials on the grounds that they limit free speech online. That decision is viewed as an offensive against Brazil's Supreme Court by U.S. authorities acting in alliance with Bolsonaro’s supporters, including his son, who now lives in the U.S.

    Changes in the tech landscape, including concerns about attention-driven information flows, converged with geopolitical tensions in the Brazilian Supreme Court’s examination of Article 19. Hurdles in the legislative debate over draft bill 2630 turned attention to the internet intermediary liability cases pending in the Supreme Court as the main vehicles for providing “some” response. Yet the scope of such cases (explained here) determined the most likely outcome. Because they focus on assessing platform liability for user content and whether it entails a duty to monitor, these issues became the main vectors for analysis and potential change. Alternative approaches, such as improving transparency, ensuring due process, and fostering platform accountability through measures like risk assessments, were mainly sidelined.  

    Read our third post in this series to learn more about the analysis of the Supreme Court so far and its risks and blind spots. 

    Veridiana Alimonti

    Copyright Cases Should Not Threaten Chatbot Users’ Privacy

    1 week 2 days ago

    Like users of all technologies, ChatGPT users deserve the right to delete their personal data. Nineteen U.S. States, the European Union, and a host of other countries already protect users’ right to delete. For years, OpenAI gave users the option to delete their conversations with ChatGPT, rather than let their personal queries linger on corporate servers. Now, they can’t. A badly misguided court order in a copyright lawsuit requires OpenAI to store all consumer ChatGPT conversations indefinitely—even if a user tries to delete them. This sweeping order far outstrips the needs of the case and sets a dangerous precedent by disregarding millions of users’ privacy rights.

    The privacy harms here are significant. ChatGPT’s 300+ million users submit over 1 billion messages to its chatbots per day, often for personal purposes. Virtually any personal use of a chatbot—anything from planning family vacations and daily habits to creating social media posts and fantasy worlds for Dungeons and Dragons games—reveals personal details that, in aggregate, create a comprehensive portrait of a person’s entire life. Other uses risk revealing people’s most sensitive information. For example, tens of millions of Americans use ChatGPT to obtain medical and financial information. Notwithstanding other risks of these uses, people still deserve privacy rights like the right to delete their data. Eliminating protections for user-deleted data risks chilling beneficial uses by individuals who want to protect their privacy.

    This isn’t a new concept. Putting users in control of their data is a fundamental piece of privacy protection. Nineteen states, the European Union, and numerous other countries already protect the right to delete under their privacy laws. These rules exist for good reasons: retained data can be sold or given away, breached by hackers, disclosed to law enforcement, or even used to manipulate a user’s choices through online behavioral advertising.

    While appropriately tailored orders to preserve evidence are common in litigation, that’s not what happened here. The court disregarded the privacy rights of millions of ChatGPT users without any reasonable basis to believe that preserving their deleted conversations would yield relevant evidence. It granted the order based on unsupported assertions that users who delete their data are probably copyright infringers looking to “cover their tracks.” That assumption is simply false, and it sets a dangerous precedent for cases against generative AI developers and other companies that hold vast stores of user information. Unless courts limit orders to information that is actually relevant and useful, they will needlessly violate the privacy rights of millions of users.

    OpenAI is challenging this order. EFF urges the court to lift the order and correct its mistakes.  

    Tori Noble

    The NO FAKES Act Has Changed – and It’s So Much Worse

    1 week 3 days ago

    A bill purporting to target the issue of misinformation and defamation caused by generative AI has mutated into something that could change the internet forever, harming speech and innovation from here on out.

    The Nurture Originals, Foster Art and Keep Entertainment Safe (NO FAKES) Act aims to address understandable concerns about generative AI-created “replicas” by creating a broad new intellectual property right. That approach was the first mistake: rather than giving people targeted tools to protect against harmful misrepresentations—balanced against the need to protect legitimate speech such as parodies and satires—the original NO FAKES just federalized an image-licensing system.

    Take Action

    Tell Congress to Say No to NO FAKES

    The updated bill doubles down on that initial mistaken approach by mandating a whole new censorship infrastructure for that system, encompassing not just images but the products and services used to create them, with few safeguards against abuse.

    The new version of NO FAKES requires almost every internet gatekeeper to create a system that will a) take down speech upon receipt of a notice; b) keep down any recurring instance—meaning, adopt inevitably overbroad replica filters on top of the already deeply flawed copyright filters; c) take down and filter tools that might have been used to make the image; and d) unmask the user who uploaded the material based on nothing more than the say-so of the person who was allegedly “replicated.”

    This bill would be a disaster for internet speech and innovation.

    Targeting Tools

    The first version of NO FAKES focused on digital replicas. The new version goes further, targeting tools that can be used to produce images that aren’t authorized by the individual, by anyone who owns the rights in that individual’s image, or by law. Anyone who makes, markets, or hosts such tools is on the hook. There are some limits—the tools must be primarily designed for making unauthorized images, or have only limited commercial uses other than making them—but those limits will offer cold comfort to developers, given that they can be targeted based on nothing more than a bare allegation. These provisions effectively give rights-holders the veto power over innovation they’ve long sought in the copyright wars, based on the same tech panics. 

    Takedown Notices and Filter Mandate

    The first version of NO FAKES set up a notice and takedown system patterned on the DMCA, with even fewer safeguards. NO FAKES expands it to cover more service providers and require those providers to not only take down targeted materials (or tools) but keep them from being uploaded in the future.  In other words, adopt broad filters or lose the safe harbor.

    Filters are already a huge problem when it comes to copyright, and at least in that context all a filter should be doing is flagging an upload for human review when it appears to be a complete copy of a work. The reality is that these systems often flag things that are similar but not the same (like two different people playing the same piece of public domain music). They also flag things as infringing based on mere seconds of a match, and they frequently fail to take into account context that would make the use lawful.

    But copyright filters are not yet required by law. NO FAKES would create a legal mandate that will inevitably lead to hecklers’ vetoes and other forms of over-censorship.

    The bill does contain carve outs for parody, satire, and commentary, but those will also be cold comfort for those who cannot afford to litigate the question.

    Threats to Anonymous Speech

    As currently written, NO FAKES also allows anyone to get a subpoena from a court clerk—not a judge, and without any form of proof—forcing a service to hand over identifying information about a user.

    We've already seen abuse of a similar system in action. In copyright cases, those unhappy with criticism aimed at them get such subpoenas to silence critics. Often the criticism quotes the complainant's own words as evidence, an ur-example of fair use. But the subpoena is issued anyway and, unless the service is incredibly on the ball, the user can be unmasked.

    Not only does this chill further speech; the unmasking itself can harm users, whether reputationally or in their personal lives.

    Threats to Innovation

    Most of us are very unhappy with the state of Big Tech. It seems like not only are we increasingly forced to use the tech giants, but that the quality of their services is actively degrading. By increasing the sheer amount of infrastructure a new service would need to comply with the law, NO FAKES makes it harder for any new service to challenge Big Tech. It is probably not a coincidence that some of these very giants are okay with this new version of NO FAKES.

    Requiring removal of tools, apps, and services could likewise stymie innovation. For one, it would harm people using such services for otherwise lawful creativity.  For another, it would discourage innovators from developing new tools. Who wants to invest in a tool or service that can be forced offline by nothing more than an allegation?

    This bill is a solution in search of a problem. Just a few months ago, Congress passed Take It Down, which targeted images involving intimate or sexual content. That deeply flawed bill pressures platforms to actively monitor online speech, including speech that is presently encrypted. But if Congress is really worried about privacy harms, it should at least wait to see the effects of the last piece of internet regulation before going further into a new one. Its failure to do so makes clear that this is not about protecting victims of harmful digital replicas.

    NO FAKES is designed to consolidate control over the commercial exploitation of digital images, not prevent it. Along the way, it will cause collateral damage to all of us.

    Take Action

    Tell Congress to Say No to NO FAKES

    Katharine Trendacosta

    New Journalism Curriculum Module Teaches Digital Security for Border Journalists

    1 week 3 days ago
    Module Developed by EFF, Freedom of the Press Foundation, and University of Texas, El Paso Guides Students Through Threat Modeling and Preparation

    SAN FRANCISCO – A new college journalism curriculum module teaches students how to protect themselves and their digital devices when working near and across the U.S.-Mexico border. 

    “Digital Security 101: Crossing the US-Mexico Border” was developed by Electronic Frontier Foundation (EFF) Director of Investigations Dave Maass and Dr. Martin Shelton, deputy director of digital security at Freedom of the Press Foundation (FPF), in collaboration with the University of Texas at El Paso (UTEP) Multimedia Journalism Program and Borderzine.

    The module offers a step-by-step process for improving the digital security of journalists passing through U.S. Land Ports of Entry, focusing on threat modeling: thinking through what you want to protect, and what actions you can take to secure it. 

    This involves assessing risk according to the kind of work the journalist is doing, the journalist’s own immigration status, potential adversaries, and much more, as well as planning in advance for protecting oneself and one’s devices should the journalist face delay, detention, search, or device seizure. Such planning might include use of encrypted communications, disabling or enabling certain device settings, minimizing the data on devices, and mentally preparing oneself to interact with border authorities.  

    The module, in development since early 2023, is particularly timely given increasingly invasive questioning and searches at U.S. borders under the Trump Administration and the documented history of border authorities targeting journalists covering migrant caravans during the first Trump presidency. 

    "Today's journalism students are leaving school only to face complicated, new digital threats to press freedom that did not exist for previous generations. This is especially true for young reporters serving border communities," Shelton said. "Our curriculum is designed to equip emerging journalists with the skills to protect themselves and sources, while this new module is specifically tailored to empower students who must regularly traverse ports of entry at the U.S.-Mexico border while carrying their phones, laptops, and multimedia equipment." 

    The guidance was developed through field visits to six ports of entry across three border states, interviews with scores of journalists and students on both sides of the border, and a comprehensive review of CBP policies, while also drawing on EFF’s and FPF’s combined decades of experience researching constitutional rights and device security techniques.  

    “While this training should be helpful to investigative journalists from anywhere in the country who are visiting the borderlands, we put journalism students based in and serving border communities at the center of our work,” Maass said. “Whether you’re reviewing the food scene in San Diego and Tijuana, covering El Paso and Ciudad Juarez’s soccer teams, reporting on family separation in the Rio Grande Valley, or uncovering cross-border corruption, you will need the tools to protect your work and sources." 

    The module includes a comprehensive slide deck that journalism lecturers can use and remix for their classes, as well as an interactive worksheet. With undergraduate students in mind, the module includes activities such as roleplaying a primary inspection interview and analyzing pop singer Olivia Rodrigo’s harrowing experience of mistaken identity while reentering the country. The module has already been delivered successfully in trainings with journalism students at UTEP and San Diego State University. 

    “UTEP’s Multimedia Journalism program is well-situated to help develop this digital security training module,” said UTEP Communication Department Chair Dr. Richard Pineda. “Our proximity to the U.S.-Mexico border has influenced our teaching models, and our student population – often daily border crossers – give us a unique perspective from which to train journalists on issues related to reporting safely on both sides of the border.” 

    For the “Digital security 101: Crossing the US-Mexico border” module: https://freedom.press/digisec/blog/border-security-module/ 

    For more about the module: https://www.eff.org/deeplinks/2025/06/journalist-security-checklist-preparing-devices-travel-through-us-border

    For EFF’s guide to digital security at the U.S. border: https://www.eff.org/press/releases/digital-privacy-us-border-new-how-guide-eff 

    For EFF’s student journalist Surveillance Self Defense guide: https://ssd.eff.org/playlist/journalism-student 

    Contact: Dave Maass, Director of Investigations, dm@eff.org
    Josh Richman

    A Journalist Security Checklist: Preparing Devices for Travel Through a US Border

    1 week 3 days ago

    This post was originally published by the Freedom of the Press Foundation (FPF). This checklist complements the recent training module for journalism students in border communities that EFF and FPF developed in partnership with the University of Texas at El Paso Multimedia Journalism Program and Borderzine. We are cross-posting it under FPF's Creative Commons Attribution 4.0 International license. It has been slightly edited for style and consistency.

    Before diving in: This space is changing quickly! Check FPF's website for updates and contact them with questions or suggestions. This is a joint project of Freedom of the Press Foundation (FPF) and the Electronic Frontier Foundation.

    Those within the U.S. have Fourth Amendment protections against unreasonable searches and seizures — but there is an exception at the border. Customs and Border Protection (CBP) asserts broad authority to search travelers’ devices when crossing U.S. borders, whether traveling by land, sea, or air. And unfortunately, except for a dip at the start of the COVID-19 pandemic when international travel substantially decreased, CBP has generally searched more devices year over year since the George W. Bush administration. While the percentage of travelers affected by device searches remains small, in recent months we’ve heard growing concerns about apparent increased immigration scrutiny and enforcement at U.S. ports of entry, including seemingly unjustified device searches.

    Regardless, it’s hard to say with certainty the likelihood that you will experience a search of your items, including your digital devices. But there’s a lot you can do to lower your risk in case you are detained in transit, or if your devices are searched. We wrote this checklist to help journalists prepare for transit through a U.S. port of entry while preserving the confidentiality of your most sensitive information, such as unpublished reporting materials or source contact information. It’s important to think about your strategy in advance, and begin planning which options in this checklist make sense for you.

    First things first: What might CBP do?

    U.S. CBP’s policy is that they may conduct a “basic” search (manually looking through information on a device) for any reason or no reason at all. If they feel they have reasonable suspicion “of activity in violation of the laws enforced or administered by CBP” or if there is a “national security concern,” they may conduct what they call an “advanced” search, which may include connecting external equipment to your device, such as a forensic analysis tool designed to make a copy of your data.

    Your citizenship status matters as to whether you can refuse to comply with a request to unlock your device or provide the passcode. If you are a U.S. citizen entering the U.S., you have the most legal leverage to refuse to comply because U.S. citizens cannot be denied entry — they must be let back into the country. But note that if you are a U.S. citizen, you may be subject to escalated harassment and further delay at the port of entry, and your device may be seized for days, weeks, or months.

    If CBP officers seek to search your locked device using forensic tools, there is a chance that some (if not all of the) information on the device will be compromised. But this probability depends on what tools are available to government agents at the port of entry, if they are motivated to seize your device and send it elsewhere for analysis, and what type of device, operating system, and security features your device has. Thus, it is also possible that strong encryption may substantially slow down or even thwart a government device search.

    Lawful permanent residents (green-card holders) must generally also be let back into the country. However, the current administration seems more willing to question LPR status, so refusing to comply with a request to unlock a device or provide a passcode may be risky for LPRs. Finally, CBP has broad discretion to deny entry to foreign nationals arriving on a visa or via the visa waiver program.

    At present, traveling domestically within the United States, particularly if you are a U.S. citizen, is lower risk than travelling internationally. Our luggage and the physical aspects of digital devices may be searched — e.g., manual inspection or x-rays to ensure a device is not a bomb. CBP is often present at airports, but for domestic travel within the U.S. you should only be interacting with the Transportation Security Administration. TSA does not assert authority to search the data on your device — this is CBP’s role.

    At an international airport or other port of entry, you have to decide whether you will comply with a request to access your device, but this might not feel like much of a choice if you are a non-U.S. citizen entering the country! Plan accordingly.

    Your border digital security checklist

    Preparing for travel

    ☐ Make a backup of each of your devices before traveling.
    ☐ Use long, unpredictable, alphanumeric passcodes for your devices and commit those passcodes to memory.
    ☐ If bringing a laptop, ensure it is encrypted using BitLocker for Windows or FileVault for macOS (see the sketch after this list for one way to confirm encryption is on). Chromebooks are encrypted by default. A password-protected laptop screen lock alone is usually insufficient. When going through security, devices should be turned all the way off.
    ☐ Fully update your device and apps.
    ☐ Optional: Use a password manager to help create and store randomized passcodes. 1Password users can create temporary travel vaults.
    ☐ Bring as few sensitive devices as possible — only what you need.
    ☐ Regardless of which country you are visiting, think carefully about what you are willing to post publicly on social media about that country, to avoid scrutiny.
    ☐ For land ports of entry in the U.S., check CBP’s border wait times and plan accordingly.
    ☐ If possible, print out any travel documents in advance so you don’t have to unlock your phone during boarding, including boarding passes for your departure and return, rental car information, and any information about your itinerary that you would like to have on hand if questioned (e.g., hotel bookings, visa paperwork, employment information if applicable, conference information). Use a printer you trust at home or at the office, just in case.
    ☐ Avoid bringing sensitive physical documents you wouldn’t want searched. If you need them, consider digitizing them (e.g., by taking a photo) and storing them remotely on a cloud service or backup device.
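
    If you want a quick way to double-check the full-disk encryption item above before you leave, the short Python sketch below simply asks each operating system’s own status tool. It is a minimal sketch and not part of the official checklist: it assumes macOS’s fdesetup status and Windows’ manage-bde -status commands (the latter usually needs an administrator prompt), and it only reports status rather than turning encryption on for you.

    # Minimal sketch: report whether full-disk encryption appears to be enabled.
    # Assumed tools: `fdesetup` (macOS FileVault) and `manage-bde` (Windows BitLocker).
    import platform
    import subprocess

    def encryption_status() -> str:
        """Return the operating system's own report on full-disk encryption."""
        system = platform.system()
        if system == "Darwin":
            # FileVault: prints "FileVault is On." or "FileVault is Off."
            result = subprocess.run(["fdesetup", "status"],
                                    capture_output=True, text=True)
        elif system == "Windows":
            # BitLocker: summarizes protection status per volume (run as administrator).
            result = subprocess.run(["manage-bde", "-status"],
                                    capture_output=True, text=True)
        else:
            return "On Linux, check your distribution's tooling (e.g., LUKS/cryptsetup)."
        return result.stdout.strip() or result.stderr.strip()

    if __name__ == "__main__":
        print(encryption_status())

    If the output says FileVault or BitLocker is off, turn encryption on and then power the machine all the way down, as the checklist item above suggests.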

    Decide in advance whether you will unlock your device or provide the passcode for a search. Your overall likelihood of experiencing a device search is low (e.g., less than .01% of international travelers are selected), but depending on what information you carry, the impact of a search may be quite high. If you plan to unlock your device for a search or provide the passcode, ensure your devices are prepared:

    ☐ Upload any information you want to keep to a cloud provider in advance (e.g., using iCloud), so that it is stored remotely instead of locally on your device.
    ☐ Remove any apps, files, chat histories, browsing histories, and sensitive contacts you would not want exposed during a search.
    ☐ If you delete photos or files, delete them a second time in the “Recently Deleted” or “Trash” sections of your Files and Photos apps.
    ☐ Remove messages from the device that you believe would draw unwanted scrutiny. Remove yourself — even if temporarily — from chat groups on platforms like Signal.
    ☐ If you use Signal and plan to keep it on your device, use disappearing messages to minimize how much information you keep within the app.
    ☐ Optional: Bring a travel device instead of your usual device. Ensure it is populated with the apps you need while traveling, as well as login credentials (e.g., stored in a password manager), and necessary files. If you do this, ensure your trusted contacts know how to reach you on this device.
    ☐ Optional: Rather than manually removing all sensitive files from your computer, if you are primarily accessing web services during your travels, a Chromebook may be an affordable alternative to your regular computer.
    ☐ Optional: After backing up your everyday device, factory reset it and add back only the information you need onto the device.
    ☐ Optional: If you intend to work during your travel, plan in advance with a colleague who can remotely assist you in accessing and/or rotating necessary credentials.
    ☐ If you don’t plan to work, consider discussing with your IT department whether temporarily suspending your work accounts could mitigate risks at border crossings.

    On the day of travel

    ☐ Log out of accounts you do not want accessible to border officials. Note that border officers do not have authority to access live cloud content — they must put devices in airplane mode or otherwise disconnect them from the internet.
    ☐ Power down your phone and laptop entirely before going through security. This will enable disk encryption, and make it harder for someone to analyze your device.
    ☐ If you have a practicing attorney with expertise in immigration and border issues, particularly as they relate to members of the media, make sure you have their contact information written down before you travel.
    ☐ Immediately before travel, ensure that a friend, relative, or colleague is aware of your whereabouts when passing through a port of entry, and provide them with an update as soon as possible afterward.

    If you are pulled into secondary screening

    ☐ Be polite and try not to emotionally escalate the situation.
    ☐ Do not lie to border officials, but don’t offer any information they do not explicitly request.
    ☐ Politely request officers’ names and badge numbers.
    ☐ If you choose to unlock your device, rather than telling border officials your passcode, ask to type it in yourself.
    ☐ Ask to be present for a search of your device. But note officers are likely to take your device out of your line of sight.
    ☐ You may decline the request to search your device, but this may result in your device being seized and held for days, weeks, or months. If you are not a U.S. citizen, refusal to comply with a search request may lead to denial of entry, or scrutiny of lawful permanent resident status.
    ☐ If your device is seized, ask for a custody receipt (Form 6051D). This should also list the name and contact information for a supervising officer.
    ☐ If an officer has plugged your unlocked phone or computer into another electronic device, they may have obtained a forensic copy of your device. You will want to remember anything you can about this event if it happens.
    ☐ Immediately afterward, write down as many details as you can about the encounter: e.g., names, badge numbers, descriptions of equipment that may have been used to analyze the device, changes to the device or corrupted data, etc.

    Reporting is not a crime. Be confident knowing you haven’t done anything wrong.

    Guest Author

    EFF to European Commission: Don’t Resurrect Illegal Data Retention Mandates

    1 week 3 days ago

    The mandatory retention of metadata is an evergreen of European digital policy. Despite a number of rulings by Europe’s highest court, confirming again and again the incompatibility of general and indiscriminate data retention mandates with European fundamental rights, the European Commission is taking major steps towards the re-introduction of EU-wide data retention mandates. Recently, the Commission launched a Call for Evidence on data retention for criminal investigations—the first formal step towards a legislative proposal.

    The European Commission and EU Member States have been attempting to revive data retention for years. For this purpose, a secretive “High Level Group on Access to Data for Effective Law Enforcement” has been formed, usually referred to as the High Level Group (HLG) on “going dark.” “Going dark” refers to the false narrative that law enforcement authorities are left “in the dark” due to a lack of accessible data, despite the ever-increasing collection of and access to data by companies, data brokers, and governments. The phrase also aptly describes the opaque way the HLG works: behind closed doors and without input from civil society.

    The Group’s recommendations to the European Commission, published in 2024, read like a government surveillance wishlist. They include backdoors in various technologies (reframed as “lawful access by design”), obligations on service providers to collect and retain more user data than they need to provide their services, and requirements to intercept and hand decrypted data to law enforcement in real time, all while supposedly not compromising the security of their systems. And of course, the HLG calls for a harmonized data retention regime covering not only the retention of data but also access to it, and extending retention obligations to any service provider that could provide access to data.

    EFF joined other civil society organizations in addressing the HLG’s dangerous proposals, calling on the European Commission to safeguard fundamental rights and ensure the security and confidentiality of communications.

    In our response to the Commission's Call for Evidence, we reiterated the same principles. 

    • Any future legislative measures must prioritize the protection of fundamental rights and must be aligned with the extensive jurisprudence of the Court of Justice of the European Union. 
    • General and indiscriminate data retention mandates undermine anonymity and privacy, which are essential for democratic societies, and pose significant cybersecurity risks by creating centralized troves of sensitive metadata that are attractive targets for malicious actors. 
    • We highlight the lack of empirical evidence to justify blanket data retention and warn against extending retention duties to number-independent interpersonal communication services as it would violate EU Court of Justice doctrine, conflict with European data protection law, and compromise security.

    The European Commission must once and for all abandon the ghost of data retention that’s been haunting EU policy discussions for decades, and shift its focus to rights respecting alternatives.

    Read EFF’s full submission here.

    Svea Windwehr

    Protect Yourself From Meta’s Latest Attack on Privacy

    1 week 6 days ago

    Researchers recently caught Meta using an egregious new tracking technique to spy on you. Exploiting a technical loophole, the company was able to have their apps snoop on users’ web browsing. This tracking technique stands out for its flagrant disregard of core security protections built into phones and browsers. The episode is yet another reason to distrust Meta, block web tracking, and end surveillance advertising. 

    Fortunately, there are steps that you, your browser, and your government can take to fight online tracking. 

    What Makes Meta’s New Tracking Technique So Problematic?

    More than 10 years ago, Meta introduced a snippet of code called the “Meta pixel,” which has since been embedded on about 20% of the most trafficked websites. This pixel exists to spy on you, recording how visitors use a website and respond to ads, and siphoning potentially sensitive info like financial information from tax filing websites and medical information from hospital websites, all in service of the company’s creepy system of surveillance-based advertising. 

    While these pixels are well-known, and can be blocked by tools like EFF’s Privacy Badger, researchers discovered another way these pixels were being used to track you. 

    Meta’s tracking pixel was secretly communicating with Meta’s apps on Android devices. This violates a fundamental security feature (“sandboxing”) of mobile operating systems that prevents apps from communicating with each other. Meta got around this restriction by exploiting localhost, a feature meant for developer testing. This allowed Meta to create a hidden channel between mobile browser apps and its own apps. You can read more about the technical details here.
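
    To make the loophole concrete, here is a minimal, purely illustrative Python sketch of a localhost side channel. It is not Meta’s code and does not reproduce the pixel’s actual mechanics: it only shows that one program on a device can listen on a loopback port (as the researchers found Meta’s Android apps doing) while another local program, such as a browser running a tracking script, sends it an identifier, with no cookies or outside network traffic involved. The port number and the browser_id value are invented for the example.

    # Conceptual sketch only; the port and payload are made up for illustration.
    import threading
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.request import urlopen

    class CovertHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # The "native app" side: receive whatever identifier was smuggled over loopback.
            print(f"listener received: {self.path}")
            self.send_response(200)
            self.end_headers()

        def log_message(self, *args):
            pass  # silence default request logging

    if __name__ == "__main__":
        # Bind the listener first so it is ready before the "browser" side connects.
        server = HTTPServer(("127.0.0.1", 12387), CovertHandler)
        threading.Thread(target=server.serve_forever, daemon=True).start()

        # The "browser" side: a page script sending a cookie-like ID to the local listener,
        # sidestepping protections aimed at cookies, IP addresses, and incognito mode.
        urlopen("http://127.0.0.1:12387/?browser_id=example-value").read()
        server.shutdown()

    The browser fixes discussed later in this post work by closing off exactly this kind of page-to-localhost channel.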

    This workaround helped Meta bypass user privacy protections and attempts at anonymity. Typically, Meta tries to link data from “anonymous” website visitors to individual Meta accounts using signals like IP addresses and cookies. But Meta made re-identification trivial with this new tracking technique by sending information directly from its pixel to Meta's apps, where users are already logged in. Even users who blocked or cleared cookies, hid their IP address with a VPN, or browsed in incognito mode could be identified with this tracking technique.  

    Meta didn’t just hide this tracking technique from users. Developers who embedded Meta’s tracking pixels on their websites were also kept in the dark. Some developers noticed the pixel contacting localhost from their websites, but got no explanation when they raised concerns to Meta. Once publicly exposed, Meta immediately paused this tracking technique. They claimed they were in discussions with Google about “a potential miscommunication regarding the application of their policies.”

    While the researchers only observed the practice on Android devices, similar exploits may be possible on iPhones as well.

    This exploit underscores the unique privacy risks we face when Big Tech can leverage out of control online tracking to profit from our personal data.

    How Can You Protect Yourself?

    Meta seems to have stopped using this technique for now, but that doesn’t mean they’re done inventing new ways to track you. Here are a few steps you can take to protect yourself:

    Use a Privacy-Focused Browser

    Choose a browser with better default privacy protections than Chrome. For example, Brave and DuckDuckGo protected users from this tracking technique because they block Meta’s tracking pixel by default. Firefox only partially blocked the new tracking technique with its default settings, but fully blocked it for users with “Enhanced Tracking Protection” set to “Strict.” 

    It’s also a good idea to avoid using in-app browsers. When you open links inside the Facebook or Instagram apps, Meta can track you more easily than if you opened the same links in an external browser.

    Delete Unnecessary Apps

    Reduce the number of ways your information can leak by deleting apps you don’t trust or don’t regularly use. Try opting for websites over apps when possible. In this case, and many similar cases, using the Facebook and Instagram website instead of the apps would have limited data collection. Even though both can contain tracking code, apps can access information that websites generally can’t, like a persistent “advertising ID” that companies use to track you (follow EFF’s instructions to turn it off if you haven’t already). 

    Install Privacy Badger

    EFF’s free browser extension blocks trackers to stop companies from spying on you online. Although Privacy Badger would’ve stopped Meta’s latest tracking technique by blocking their pixel, Firefox for Android is the only mobile browser it currently supports. You can install Privacy Badger on Chrome, Firefox, and Edge on your desktop computer. 

    Limit Meta’s Use of Your Data

    Meta’s business model creates an incentive to collect as much information as possible about people to sell targeted ads. Short of deleting your accounts, you have a number of options to limit tracking and how the company uses your data.

    How Should Google Chrome Respond?

    After learning about Meta’s latest tracking technique, Chrome and Firefox released fixes for the technical loopholes that Meta exploited. That’s an important step, but Meta’s deliberate attempt to bypass browsers’ privacy protections shows why browsers should do more to protect users from online trackers. 

    Unfortunately, the most popular browser, Google Chrome, is also the worst for your privacy. Privacy Badger can help by blocking trackers on desktop Chrome, but Chrome for Android doesn’t support browser extensions. That seems to be Google’s choice, rather than a technical limitation. Given the lack of privacy protections they offer, Chrome should support extensions on Android to let users protect themselves. 

    Although Chrome addressed the latest Meta exploit after it was exposed, their refusal to block third-party cookies or known trackers leaves the door wide open for Meta’s other creepy tracking techniques. Even when browsers block third-party cookies, allowing trackers to load at all gives them other ways to harvest and de-anonymize users’ data. Chrome should protect its users by blocking known trackers (including Google’s). Tracker-blocking features in Safari and Firefox show that similar protections are possible and long overdue in Chrome. It has yet to be approved to ship in Chrome, but a Google proposal to block fingerprinting scripts in Incognito Mode is a promising start. 

    Yet Another Reason to Ban Online Behavioral Advertising

    Meta’s business model relies on collecting as much information as possible about people in order to sell highly targeted ads. Even though this particular method has been paused, Meta will keep finding ways to bypass your privacy protections as long as it has the incentive to do so. 


    The best way to stop this cycle of invasive tracking techniques and patchwork fixes is to ban online behavioral advertising. This would end the practice of targeting ads based on your online activity, removing the primary incentive for companies to track and share your personal data. We need strong federal privacy laws to ensure that you, not Meta, control what information you share online.

    Lena Cohen