Podcast Episode: Smashing the Tech Oligarchy

3 weeks 6 days ago

Many of the internet’s thorniest problems can be attributed to the concentration of power in a few corporate hands: the surveillance capitalism that makes it profitable to invade our privacy, the lack of algorithmic transparency that turns artificial intelligence and other tech into impenetrable black boxes, the rent-seeking behavior that seeks to monopolize and mega-monetize an existing market instead of creating new products or markets, and much more.

[Audio player embed] Privacy info. This embed will serve content from simplecast.com.

(You can also find this episode on the Internet Archive and on YouTube.) 

Kara Swisher has been documenting the internet’s titans for almost 30 years through a variety of media outlets and podcasts. She believes that with adequate regulation we can keep people safe online without stifling innovation, and we can have an internet that’s transparent and beneficial for all, not just a collection of fiefdoms run by a handful of homogenous oligarchs. 

In this episode you’ll learn about:

  • Why it’s so important that tech workers speak out about issues they want to improve and work to create companies that elevate best practices
  • Why completely unconstrained capitalism turns technology into weapons instead of tools
  • How antitrust legislation and enforcement can create a healthier online ecosystem
  • Why AI could either bring abundance for many or make the very rich even richer
  • The small online media outlets still doing groundbreaking independent reporting that challenges the tech oligarchy 

Kara Swisher is one of the world's foremost tech journalists and critics, and currently hosts two podcasts: On with Kara Swisher and Pivot, the latter co-hosted by New York University Professor Scott Galloway. She's been covering the tech industry since the 1990s for outlets including the Washington Post, the Wall Street Journal, and the New York Times; she is a New York Magazine editor-at-large, a CNN contributor, and cofounder of the tech news sites Recode and All Things Digital. She has also authored several books, including “Burn Book” (Simon & Schuster, 2024), in which she documents the history of Silicon Valley and the tech billionaires who run it. 

Resources:

What do you think of “How to Fix the Internet?” Share your feedback here.

Transcript

KARA SWISHER: It's a tech that's not controlled by a small group of homogeneous people. I think that's pretty much it. I mean, and there's adequate regulation to allow for people to be safe and at the same time, not too much in order to be innovative and do things – you don't want the government deciding everything.
It's a place where the internet, which was started by US taxpayers, which was paid for, is beneficial for people, and that there's transparency in it, and that we can see what's happening and what's doing. And again, the concentration of power in the hands of a few people really is at the center of the problem.

CINDY COHN: That's Kara Swisher, describing the balance she'd like to see in a better digital future. I'm Cindy Cohn, the executive director of the Electronic Frontier Foundation.

JASON KELLEY: And I'm Jason Kelley -- EFF's Activism Director. You're listening to How to Fix the Internet.

CINDY COHN: This show is about envisioning a better digital future that we can all work towards.

JASON KELLEY: And we are excited to have a guest who has been outspoken in talking about how we get there, pointing out the good, the bad and the ugly sides of the tech world.

CINDY COHN: Kara Swisher is one of the world's foremost tech journalists and critics. She's been covering the industry since the 1990s, and she currently hosts two podcasts: On with Kara Swisher and Pivot, and she's written several books, including last year's Burn Book where she documents the history of Silicon Valley and the tech billionaires who run it.
We are delighted that she's here. Welcome, Kara.

KARA SWISHER: Thank you.

CINDY COHN: We've had a couple of tech critics on the podcast recently, and one of the kind of themes that's come up for us is you kind of have to love the internet before you can hate on it. And I've heard you describe your journey that way as well. And I'd love for you to talk a little bit about it, because you didn't start off, really, looking for all the ways that things have gone wrong.

KARA SWISHER: I don't hate it. I don't. It's just, you know, I have eyes and I can see, you know, I mean, uh, one of the expressions I always use is you should, um, believe what you see, not see what you believe. And so I always just, that's what's happening. You can see it happening. You can see the coarsening of our dialogue now offline being affected by online. You could just see what's happened.
But I still love the possibilities of technology and the promise of it. And I think that's what attracted me to it in the first place, and it's a question of how you use it as a tool or a weapon. And so I always look at it as a tool, and some people have taken a lot of these technologies and used them as a weapon.

CINDY COHN: So what was that moment? Did you, do you have a moment when you decided you were really interested in tech and that you really found it to be important and worth devoting your time to?

KARA SWISHER: I was always interested in it because I had studied propaganda and the uses of TV and radio and stuff. So I was always interested in media, and this was the media on steroids. And so I recall downloading an entire book onto my computer and I thought, oh, look at this. Everything is digital. And so the premise that I came to at the time, or the idea I came to was that everything that can be digitized would be digitized, and that was a huge idea because that means entire industries would change.

CINDY COHN: Yeah.

JASON KELLEY: Kara, you started by talking about this concentration of power, which is obvious to anyone who's been paying attention, and at the same time, you know, we did use to have tech leaders who, I think, they had less power. It was less concentrated, but also people were more focused, I think, on solving real problems.
You know, you talk a lot about Steve Jobs. There was a goal of improving people's lives with technology, that didn't necessarily... it helped the bottom line, but the focus wasn't just on quarterly profits. And I wonder if you can talk a little bit about what you think it would look like if we returned to that in some way. Is that gone?

KARA SWISHER: I don't think we were there. I think they were always focused on quarterly profits. I think that was a canard. I wrote about it, that they would pretend that they were here to help. You know, it's sort of like the Twilight Zone episode To Serve Man. It's a cookbook. I always thought it was a cookbook for these people.
And they were always formulated in terms of making money and maximizing value for their shareholders, which was usually themselves. I wasn't stupid. I understood what they were doing, especially when these stocks went to the moon, especially the early internet days and their first boom. And they became instant, instant-airs, I think they were called that, which was instant millionaires and, and then now beyond that.
And so I was always aware of the money, even if they pretended they weren't; they were absolutely aware. And so I don't have a romantic version of this at the beginning, um, except among a small group of people, you know, who were seeing it, like the Whole Earth Catalog and things like that, which were looking at it as a way to bring everybody together or to spread knowledge throughout the world, which I also believed in too.

JASON KELLEY: Do you think any of those people are still around?

KARA SWISHER: No, they’re dead.

JASON KELLEY: I mean, literally, you know, they're literally dead, but are there any heirs of theirs?

KARA SWISHER: No, I mean, I don't think they had any power. I don't, I think that some of the theoretical stuff was about that, but no, they didn't have any power. The people that had power were the, the Mark Zuckerbergs, the Googles, and even, you know, the Microsofts, I mean, Bill Gates is kind of the exemplification of all that. As he, he took other people's ideas and he made it into an incredibly powerful company and everybody else sort of followed suit.

JASON KELLEY: And so mostly for you, the concentration of power is the biggest shift that's happened and you see regulation or, you know, anti-competitive moves as ways to get us back.

KARA SWISHER: We don't have any, like, if we had any laws, that would be great, but we don't have any that, that constrain them. And now under President Trump, there's not gonna be any rules around AI, probably. There aren't gonna be any significant rules, at least, around any of it.
So they, the first period, which was the growth of where we are now, was not constrained in any way, and now it's not just not constrained, but it's helping whether it's cryptocurrency or things like that. And so I don't feel like there's any restrictions, like at this point, in fact, there's encouragement by government to do whatever you want.

CINDY COHN: I think that's a really big worry. And you know, I think you're aware, as are we, that just because somebody comes in and says they're gonna do something about a problem with legislation doesn't mean that they're actually addressing it. And I think sometimes we feel like we sit in this space where we're like, we agree with you on the harm, but this thing you wanna do is a terrible idea. Trying to get the means and the ends connected is kind of a lot of where we live sometimes, and I think you've seen that as well, that once you've articulated the harm, that's kind of the start of the journey about whether the thing that you're talking about doing will actually meet that moment.

KARA SWISHER: Absolutely. The harms, they don't care about, that's the issue. And I think I was always cognizant of the harms, and that can make you seem like, you know, a killjoy of some sort. But it's not, it's just saying, wow, if you're gonna do this social media, you better pay attention to this or that.
They acted like the regular problems that people had didn't exist in the world, like racism, you know, sexism. They said, oh, that can be fixed, and they never offered any solutions, and then they created tools that made it worse.

CINDY COHN: I feel like the people who thought that we could really use technology to build a better world, I, I don't think they were wrong or naive. I just think they got stomped on by the money. Um, and, you know, uh.

KARA SWISHER: Which inevitably happens.

CINDY COHN: It does. And the question is, how do you squeeze out something, you know, given that this is the dynamic of capitalism, how do you squeeze out space for protecting people?
And we've had times in our society when we've done that better, and we've done that worse. And I feel like there are ways in which this is as bad as it has gotten in my lifetime, you know, with the government actually coming in really strongly on the side of empowering the powerful and disempowering the disempowered.
I see competition as a way to do this. EFF was, you know, it was primarily an organization focused on free speech and privacy, but we kind of backed into talking about competition 'cause we felt like we couldn't get at any of those problems unless we talked about the elephant in the room.
And I think you think about it, really on the individual, you know, you know all these guys, and on that very individual level of what, what kinds of things will, um, impact them.
And I'm wondering if you have some thoughts about the kinds of rules or regulations that might actually, you know, have an impact and not, not turn into, you know, yet another cudgel that they get to wield.

KARA SWISHER: Well any, any would be good. Like I don't, I don't, there isn't any, there isn't any you could speak of that's really problematic for them, except for the courts which are suing over antitrust issues or some regulatory agencies. But in general, what they've done is created an easy glide path for themselves.
I mean, we don't have a national privacy regulation. We don't have algorithmic transparency bills. We don't have data protection, really, to speak of for people. We don't have, you know, transparency into the data they collect. You know, we have more rules and laws on airplanes and cigarettes and everybody else, but we don't have any here. So you know, antitrust is a whole nother area of changing our antitrust rules. So these are all areas that have to be looked at. But they haven't passed a thing. I mean, lots of legislators have tried, but, um, it hasn't worked really.

CINDY COHN: You know, a lot of our supporters are people who work in tech but aren't necessarily the, you know, the tech giants. They're not the tops of these companies, but they work in the companies.
And one of the things that I, you know, I don't know if you have any insights if you've thought about this, but we speak with them a lot and they're dismayed at what's going on, but they kind of feel powerless. And I'm wondering if you have thoughts like, you know, speaking to the people who aren't, who aren't the Elons and the, the guys at the top, but who are there, and who I think are critical to keeping these companies going. Are there ways that they can make their voices heard that you've thought of that would, that might work? I guess I, I'm, I'm pulling on your insight because you know the actual people.

KARA SWISHER: Yeah, you know, speak out. Just speak out. You know, everybody gets a voice these days and there's all kinds of voices that never would've gotten heard and to, you know, talk to legislators, involve customers, um, create businesses where you do those good practices. Like that's the best way to do it is create wealth and capitalism and then use best practices there. That to me is the best way to do that.

CINDY COHN: Are there any companies that you look at from where you sit that you think are doing a pretty good job or at least trying? I don't know if you wanna call anybody out, but, um, you know, we see a few, um, and I kind of feel like all the air gets sucked out of the room.

KARA SWISHER: In bits and pieces. In bits and pieces, you know, Apple's good on the privacy thing, but then it's bad on a bunch of other things. Like you could, like, you, you, the problem is, you know, these are shareholder driven companies and so they're gonna do what's best for them and they could, uh, you know, wave over to privacy or wave over to, you know, more diversity, but they really are interested in making money.
And so I think the difficulty is figuring out, you know, do they have duties as citizens or do they just have duties as corporate citizens? And so that's always been a difficult thing in our society and will continue to be.

CINDY COHN: Yeah.

JASON KELLEY: We've always at EFF really stood up for the user in, in this way where sometimes we're praising a company that normally people are upset with because they did a good thing, right? Apple is good on privacy. When they do good privacy things we say, that's great. You know, and if Apple makes mistakes, we say that too.
And it feels like, um, you know, we're in the middle of, I guess, a “tech lash.” I don't know when it started. I don't know if it'll ever end. I don't know if there's, if that's even a real term in terms of like, you know, tech journalism. But do you find that it's difficult to get people to accept sort of like any positive praise for companies that are often just at this point, completely easy to ridicule for all the mistakes they've made?

KARA SWISHER: I think the tech journalism has gotten really strong. It's gotten, I mean, just look at the DOGE coverage. I think it really, I'll point to WIRED as a good example, as they've done astonishing stuff. I think a lot of people have done a lot on, on, uh, you know, the abuses of social media. I think they've covered a lot of issues from the overuse of technology to, you know, all the crypto stuff. It doesn't mean people follow along, but they've certainly been there and revealed a lot of the flaws there. Um, while also covering it as like, this is what's happening with AI. Like, this is what's happening, here's where it's going. And so you have to cover it as a thing. Like, this is what's being developed. But then there's, uh, others, you know, who have to look into the real problems.

JASON KELLEY: I get a lot of news from 404 Media, right?

KARA SWISHER: Yeah, they’re great.

JASON KELLEY: That sort of model is relatively new and it sort of sits against some of these legacy models. Do you see, like, a growing role for things like that in the future?

KARA SWISHER: There's lots of different things. I mean, I came from like, as you mean, part of the time, although I got away from it pretty quickly, but some of 'em are doing great. It just depends on the story, right? Some of the stories are great, like, uh, you know, uh, there's a ton of people at the Times who have done great stuff on, on, on lots of things around kids and abuses and social media.
At the same time, there's all these really exciting young, not necessarily young, actually, um, independent media companies, whether it's Casey Newton at Platformer, or Eric Newcomer covering VCs, or 404. There's all this really interesting new stuff that's doing really well. WIRED is another one that's really seen a lot of bounce back under its current editor, who just came on relatively recently.
So it just depends. It depends on where it is, but The Verge does a great job. But I think it's individually the stories; there's no, like, big name in this area. There's just a lot of people, and then there's all these really interesting experts or people who work in tech who've written a lot. That is always very interesting too, to me. It's interesting to hear from insiders what they think is happening.

CINDY COHN: Well, I'm happy to hear this, this optimism. 'Cause I worry a lot about, you know, the way that the business model for media has really been hollowed out. And then seeing things like, you know, uh, some of the big broadcast news people folding,

KARA SWISHER: Yeah, but broadcast never did journalism for tech, come on. Like, some did, I mean, one or two, but it wasn't them who was doing it. It was usually, you know, either the New York Times or these smaller institutions have been doing a great job. There's just been tons and tons of different things, completely different things.

JASON KELLEY: What do you think about the fear, maybe I'm, I'm misplacing it, maybe it's not as real as I imagine it is. Um, that results from something like a Gawker situation, right. You know, you have wealthy people.

KARA SWISHER: That was a long time ago.

JASON KELLEY: It was, but it, you know, a precedent was sort of set, right? I mean, do you think people in working in tech journalism can take aim at, you know, individual people that have a lot of power and wealth in, in the same way that they could before?

KARA SWISHER: Yeah. I think they can, if they're accurate. Yeah, absolutely.

CINDY COHN: Yeah, I think you're a good exhibit A for that, you pull no punches and things are okay. I mean, we get asked sometimes, um, you know, are, are you ever under attack because of your, your sharp advocacy? And I kind of think your sharp advocacy protects you as long as you're right. And I think of you as somebody who's also in, in a bit of that position.

KARA SWISHER: Mmhm.

CINDY COHN: You may say this is inevitable, but I I wanted to ask you, you know, I feel like when I talk with young technical people, um, they've kind of been poisoned by this idea that the only way you can be successful is, is if you're an asshole.
That there's no, there's no model, um, that just goes to the deal. So if they want to be successful, they have to be just an awful person. And so even if they might have thought differently beforehand, that's what they think they have to do. And I'm wondering if you run into this as well, and I sometimes find myself trying to think about, you know, alternate role models for technical people and if you have any that you think of.

KARA SWISHER: Alternate role models? It's mostly men. But there are, there's all kinds of, like, I just did an interview with Lisa Su, who's head of AMD, one of the few women CEOs. And in AI, there's a number of women, uh, you know, you don't necessarily have to have diversity to make it better, but it sure helps, right? Because people have a different, not just diversity of gender or diversity of race, but diversity of backgrounds, politics. You know, the more diverse you are, the better products you make, essentially. That's always been my feeling.
Look, most of these companies are the same as it ever was, and in fact, there's fewer different people running them, essentially. Um, but you know, that's always been the nature of, of tech essentially, that it was sort of a, a man's world.

CINDY COHN: Yeah, I see that as well. I just worry that young people or junior people coming up think that the only way that you can be successful is a, if you look like the guys who are already successful, but also, you know, if you're just kind of not, you know, if you're weird and not nice.

KARA SWISHER: It just depends on the person. It's just that when you get that wealthy, you have a lot of people licking you up and down all day, and so you end up in the crazy zone like Elon Musk, or the arrogant zone like Mark Zuckerberg or whatever. It's just they don't get a lot of pushback and when you don't get a lot of friction, you tend to think everything you do is correct.

JASON KELLEY: Let's take a quick moment to thank our sponsor. How to Fix The Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology, enriching people's lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.
We also wanna thank EFF members and donors. You're the reason we exist. EFF has been fighting for digital rights for 35 years, and that fight is bigger than ever. So please, if you like what we do, go to eff.org/pod to donate. Also, we'd love for you to join us at this year's EFF awards, where we celebrate the people working towards the better digital future that we all care so much about.
Those are coming up on September 10th in San Francisco. You can find more information about that at eff.org/awards.
We also wanted to share that our friend Cory Doctorow has a new podcast. Have a listen to this: [WHO BROKE THE INTERNET TRAILER]
And now back to our conversation with Kara Swisher.

CINDY COHN: I mean, you watched all these tech giants kind of move over to the Trump side and then, you know, stand there on the inauguration. It sounds like you thought that might've been inevitable.

KARA SWISHER: I said it was inevitable, they were all surprised. They're always surprised when I'm like, Elon's gonna crack up with the president. Oh look, they cracked up. It's not hard to follow these people. In his case, he's, he's personally, there's something wrong with his head, obviously. He always cracks up with people. So that's what happened here.
In that case, they just wanted things. They want things. You think they liked Donald Trump? You're wrong there, I'll tell you. They don't like him. They need him. They wanna use him, and they were irritated by Biden 'cause he presumed to push back on them, and he didn't do a very good job of it, honestly. But they definitely want things.

CINDY COHN: I think the tech industry came up at a time when deregulation was all the rage, right? So in some ways they were kind of born into a world where regulation was an anathema and they took full advantage of the situation.
As did lots of other areas that got deregulated or were not regulated in the first place. But I think tech, because of timing in some ways, tech was really born into this zone. And there were some good things for it too. I mean, you know, EFF was successful in the nineties at making sure that the internet got First Amendment protection, that we didn't go to the other side with things like the Communications Decency Act and squelch any adult material from being put online. But getting that right and kind of walking through the middle ground where you have regulation that supports people but doesn't squelch them is just an ongoing struggle,

KARA SWISHER: Mm-hmm. Absolutely.

JASON KELLEY: I have this optimistic hope that these companies and their owners sort of crumble as they continue to, as Cory Doctorow says, enshittify, right? The only reason they don't crumble is that they have this lock-in with users. They have this monopoly power, but you see a, you know, a TikTok pops up and suddenly Instagram has a real competitor, not because rules have been put in place to change Instagram, but because a different, new, maybe better platform comes along.

KARA SWISHER: There’s nothing like competition, making things better. Right? Competition always helps.

JASON KELLEY: Yeah, when I think of competition law, I think of crushing companies, I think of breaking them up. But what do you think we can do to make this sort of world better and more fertile for new companies? You know, you talked earlier about tech workers.

KARA SWISHER: Well, you have to pass those things where they don't get to. Antitrust is the best way to do that. Right? And, but those things move really slowly, unfortunately. And, you know, good antitrust legislation and antitrust enforcement, that's happening right now. But it opens up, I mean, the reason Google exists is 'cause of the antitrust actions around Microsoft.
And so we have to like continue to press on things like that and continue to have regulators that are allowed to pursue cases like that. And then at the same time have a real focus on creating wealth. We wanna create wealth, we wanna create, we wanna give people breaks.
We wanna have the government involved in funding some of these things, making it so that small companies don't get run over by larger companies.
Not letting power concentrate into a small group of people. When that happens, that's what happens. You end up with less companies. They kill them in the crib, these companies. And so not letting things get bought, have a scrutiny over things, stuff like that.

CINDY COHN: Yeah, I think a lot more merger review makes a lot of sense. I think a lot of thinking about, how are companies crushing each other and what are the things that we can do to try to stop that? Obviously we care a lot about interoperability, making sure that technologies that, that have you as a customer don't get to lock you in, and make it so that you're just stuck with their broken business model and can do other things.
There's a lot of space for that kind of thing. I mean, you know, I always tell the story, I'm sure you know this, that, you know, if it weren't for the FCC telling AT&T that they had to let people plug something other than phones into the wall, we wouldn't have had the internet, you know, the home internet revolution anyway.

KARA SWISHER: Right. Absolutely. 100%.

CINDY COHN: Yeah, so I think we are in agreement with you that, you know, competition is really central, but it's, you know, it's kind of an all of the above, and certainly around privacy issues, we can do a lot around this business model, which I think is driving so many of the other bad things that we are seeing, with some comprehensive privacy law.
But boy, it sure feels like right now, you know, we've got two branches of government that are not on board with that, and the third one kind of doing okay, but not, you know, the courts are doing okay, but slowly and inconsistently. Um, where do you see hope? Where are you, where are you looking for the, for...

KARA SWISHER: I mean, some of this stuff around AI could be really great for humanity, or it could be great for a small amount of people. That's really, you know, which one do we want? Do we want this technology to be a tool or a weapon against us? Do we want it to be in the hands of bigger companies or in the hands of all of us and we make decisions around it?
Will it help us be safer? Will it help us cure cancer, or is it gonna just make a rich person a billion dollars richer? I mean, it's the age-old story, isn't it? This is not a new theme in America, where the rich get richer and the poor get less. And so these, these technologies could, as you know, there's recently a book out all about abundance.
It could create lots of abundance. It could create lots of interesting new jobs, or it could just put people outta work and let the, let the people who are richer get richer. And I don't think that's a society we wanna have. And years ago I was talking about income inequality with a really wealthy person and I said, you either have to do something about, you know, the fact that people, that we didn't have a $25 minimum wage, which I think would help a lot, lots of innovation would come from that. If people made more money, they'd have a little more choices. And it's worth the investment in people to do that.
And I said, we have to either deal with income inequality or armor-plate your Tesla. And I think he wanted to armor-plate his Tesla. That's when ire, and then of course, the Cybertruck comes out. So there you have it. But, um, I think they don't care about that kind of stuff. You know, they're happy to create their little, wee, those little worlds where they're highly protected, but it's not a world I wanna live in.

CINDY COHN: Kara, thank you so much. We really appreciate you coming in. I think you sit in such a different place in the world than where we sit, and it's always great to get your perspective.

KARA SWISHER: Absolutely. Anytime. You guys do amazing work and you know you're doing amazing work, and you should always keep a watch on these people. It's not, you shouldn't be against everything, 'cause some people are right. But you certainly should keep a watch on people.

CINDY COHN: Well, great. We, we sure will.

JASON KELLEY: up. Yeah, we'll keep doing it. Thank you,

CINDY COHN: Thank you.

KARA SWISHER: All right. Thank you so much.

CINDY COHN: Well, I always appreciate how Kara gets right to the point about how the concentration of power among a few tech moguls has led to so many of the problems we face online, and how competition, along with things we so often hear about like real laws requiring transparency, privacy protections, and data protections, can help shift the tide.

JASON KELLEY: Yeah, you know, some of these fixes are things that people have been talking about for a long time and I think we're at a point where everyone agrees on a big chunk of them. You know, especially the ones that we promote like competition and transparency oftentimes, and privacy. So it's great to hear that Kara, who's someone that, you know, has worked on this issue and in tech for a long time and thought about it and loves it, as she said, you know, agrees with us on some of the, some of the most important solutions.

CINDY COHN: Sometimes these criticisms of the tech moguls can feel like something everybody does, but I think it's important to remember that Kara was really one of the first ones to start pointing this out. And I also agree with you, you know, she's a person who comes from the position of really loving tech. And Kara's even a very strong capitalist. She really loves making money as well. You know, her criticism comes from a place of betrayal, that, again, like Molly White, earlier this season, kind of comes from a position of, you know, seeing the possibilities and loving the possibilities, and then seeing how horribly things are really going in the wrong direction.

JASON KELLEY: Yeah, she has this framing of, is it a tool or a weapon? And it feels like a lot of the tools that she loved became weapons, which I think is how a lot of us feel. You know, it's not always clear how to draw that line. But it's obviously a good question that people, you know, working in the tech field, and I think people even using technology should ask themselves, when you're really enmeshed with it, is the thing you're using or building or promoting, is it working for everyone?
You know, what are the chances, how could it become a weapon? You know, this beautiful tool that you're loving and you have all these good ideas and, you know, ideas that, that it'll change the world and improve it. There's always a way that it can become a weapon. So I think it's an important question to ask and, and an important question that people, you know, working in the field need to ask.

CINDY COHN: Yeah. And I think that, you know, that's the gem of her advice to tech workers. You know, find a way to make your voice heard if you see this happening. And there's a power in that. I do think that one thing that's still true in Silicon Valley is they compete for top talent.
And, you know, top talent indicating that they're gonna make choices based on some values is one of the levers of power. Now I don't think anybody thinks that's the only one. This isn't an individual responsibility question. We need laws, we need structures. You know, we need some structural changes in antitrust law and elsewhere in order to make that happen. It's not all on the shoulders of the tech workers, but I appreciate that she really did say, you know, there's a role to be played here. You're not just pawns in this game.

JASON KELLEY: And that's our episode for today. Thanks so much for joining us. If you have feedback or suggestions, we'd love to hear from you. Visit eff.org/podcast and click on listen or feedback. And while you're there, you can become a member and donate, maybe even pick up some of the merch and just see what's happening in digital rights this week and every week.
Our theme music is by Nat Keefe of Beat Mower with Reed Mathis, and How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in Public Understanding of Science and Technology. We'll see you next time. I'm Jason Kelley.

CINDY COHN: And I'm Cindy Cohn.

MUSIC CREDITS: This podcast is licensed Creative Commons Attribution 4.0 international, and includes the following music licensed Creative Commons Attribution 3.0 unported by its creators: Drops of H2O, The Filtered Water Treatment by Jay Lang. Additional music, theme remixes and sound design by Gaetan Harris.

Josh Richman

Ryanair’s CFAA Claim Against Booking.com Has Nothing To Do with Actual Hacking

4 weeks ago

The Computer Fraud and Abuse Act (CFAA) is supposed to be about attacks on computer systems. It is not, as a federal district court suggested in Ryanair v. Booking.com, applicable when someone uses valid login credentials to access information to which those credentials provide access. Now that the case is on appeal, EFF has filed an amicus brief asking the Third Circuit to clarify that this case is about violations of policy, not hacking, and does not qualify as access “without authorization” under the CFAA.

The case concerns transparency in airfare pricing. Ryanair complained that Booking republished Ryanair’s prices, some of which were only visible when a user logged in. Ryanair sent a cease and desist to Booking, but didn't deactivate the usernames and passwords associated with the uses they disliked. When the users allegedly connected to Booking kept using those credentials to gather pricing data, Ryanair claimed it was a CFAA violation. If this doesn’t sound like “computer hacking” to you, you’re right.

The CFAA has proven bad for research, security, competition, and innovation. For years we’ve worked to limit its scope to Congress’s original intention: actual hacking that bypasses computer security. It should have nothing to do with Ryanair’s claims here, which amount to a terms of use violation because the information that was accessed is available to anyone with login credentials. This is the course charted by Van Buren v. United States, where the Supreme Court explained that “authorization” refers to technical concepts of computer authentication. As we stated in our brief:

The CFAA does not apply to every person who merely violates terms of service by sharing account credentials with a family member or by withholding sensitive information like one’s real name and birthdate when making an account.

Building on the good decisions in Van Buren and the Ninth Circuit’s ruling in hiQ Labs v. LinkedIn, we weighed in at the Third Circuit urging the court to hold clearly that triggering a CFAA violation requires bypassing a technology that restricts access. In this case, the login credentials that were created provided legitimate access. But the rule adopted by the lower court would criminalize many everyday behaviors, like logging into a streaming service account with a partner’s login, or logging into a spouse’s bank account to pay a bill at their behest. This is not hacking or a violation of the CFAA; it’s just violating a company’s wish list in its Terms of Service.

This rule would be especially dangerous for journalists and academic researchers. Researchers often create a variety of testing accounts. For example, if they’re researching how a service displays housing offers, they may make different accounts associated with different race, gender, or language settings. These sorts of techniques may be adversarial to the company, but they shouldn’t be illegal. But according to the court’s opinion, if a company disagrees with this sort of research, the company could not only ban the researchers from using the site, it could render that research criminal simply by sending a letter notifying the researchers that they’re not authorized to use the service in this way.

Many other examples and common research techniques used by journalists, academic researchers, and security researchers would be at risk under this rule, but the end result would be the same no matter what: it would chill valuable research that keeps us all safer online.

A broad reading of the CFAA in this case would also undermine competition by providing a way for companies to limit data scraping, effectively cutting off one of the ways websites offer tools to compare prices and features.

Courts must follow Van Buren’s lead and interpret the CFAA as narrowly as it was designed. Logging into a public website with valid credentials, even if you scrape the data once you’re logged in, is not hacking. A broad reading leads to unintended consequences, and website owners do not need new shields against independent accountability.

You can read our amicus brief here.

Thorin Klosowski

You Went to a Drag Show—Now the State of Florida Wants Your Name

4 weeks 1 day ago

If you thought going to a Pride event or drag show was just another night out, think again. If you were in Florida, it might land your name in a government database.

That’s what’s happening in Vero Beach, FL, where the Florida Attorney General’s office has subpoenaed a local restaurant, The Kilted Mermaid, demanding surveillance video, guest lists, reservation logs, and contracts of performers and other staff—all because the venue hosted an LGBTQ+ Pride event.

To be clear: no one has been charged with a crime, and the law Florida is likely leaning on here—the so-called “Protection of Children Act” (which was designed to be a drag show ban)—has already been blocked by federal courts as likely unconstitutional. But that didn’t stop Attorney General James Uthmeier from pushing forward anyway. Without naming a specific law that was violated, the AG’s press release used pointed and accusatory language, stating that "In Florida, we don't sacrifice the innocence of children for the perversions of some demented adults.” His office is now fishing for personal data about everyone who attended or performed at the event. This should set off every civil liberties alarm bell we have.

Just like the Kids Online Safety Act (KOSA) and other bills with misleading names, this isn’t about protecting children. It’s about using the power of the state to intimidate people government officials disagree with, and to censor speech that is both lawful and fundamental to American democracy.

Drag shows—many of which are family-friendly and feature no sexual content—have become a political scapegoat. And while that rhetoric might resonate in some media environments, the real-world consequences are much darker: state surveillance of private citizens doing nothing but attending a fun community celebration. By demanding video surveillance, guest lists, and reservation logs, the state isn’t investigating a crime, it is trying to scare individuals from attending a legal gathering. These are people who showed up at a public venue for a legal event, while a law restricting it was not even in effect. 

The Supreme Court has ruled multiple times that subpoenas forcing disclosure of members of peaceful organizations have a chilling effect on free expression. Whether it’s a civil rights protest, a church service, or, yes, a drag show: the First Amendment protects the confidentiality of lists of attendees.

Even if the courts strike down this subpoena—and they should—the damage will already be done. A restaurant owner (who also happens to be the town’s vice mayor) is being dragged into a state investigation. Performers’ identities are potentially being exposed—whether to state surveillance, inclusion in law enforcement databases, or future targeting by anti-LGBTQ+ groups. Guests who thought they were attending a fun community event are now caught up in a legal probe. These are the kinds of chilling, damaging consequences that will discourage Floridians from hosting or attending drag shows, and could stamp out the art form entirely. 

EFF has long warned about this kind of mission creep: where a law or policy supposedly aimed at public safety is turned into a tool for political retaliation or mass surveillance. Going to a drag show should not mean you forfeit your anonymity. It should not open you up to surveillance. And it absolutely should not land your name in a government database.

Rindala Alajaji

Just Banning Minors From Social Media Is Not Protecting Them

4 weeks 1 day ago

By publishing its guidelines under Article 28 of the Digital Services Act, the European Commission has taken a major step towards social media bans that will undermine privacy, expression, and participation rights for young people that are already enshrined in international human rights law. 

EFF recently submitted feedback to the Commission’s consultation on the guidelines, emphasizing a critical point: Online safety for young people must include privacy and security for them and must not come at the expense of freedom of expression and equitable access to digital spaces.

Article 28 requires online platforms to take appropriate and proportionate measures to ensure a high level of safety, privacy and security of minors on their services. But the article also prohibits targeting minors with personalized ads, a measure that would seem to require that platforms know that a user is a minor. The DSA acknowledges that there is an inherent tension between ensuring a minor’s privacy and requiring platforms to know the age of every user. The DSA does not resolve this tension. Rather, it states that service providers should not be incentivized to collect the age of their users, and Article 28(3) makes a point of not requiring service providers to collect and process additional data to assess whether a user is underage. 

Thus, the question of age checks is key to understanding the obligations of online platforms to safeguard minors online. Our submission explained the serious concerns that age checks pose to the rights and security of minors. All methods for conducting age checks come with serious drawbacks. Approaches to verify a user’s age generally involve some form of government-issued ID document, which millions of people in Europe—including migrants, members of marginalized groups and unhoused people, exchange students, refugees and tourists—may not have access to.

Other age assurance methods, like biometric age estimation or age estimation based on email addresses or user activity, involve the processing of vast amounts of personal, sensitive data – usually in the hands of third parties. Beyond being potentially exposed to discrimination and erroneous estimations, users are asked to trust platforms’ opaque supply chains and hope for the best. Age assurance methods always impact the rights of children and teenagers: their rights to privacy and data protection, free expression, information and participation.

The Commission's guidelines contain a wealth of measures elucidating the Commission's understanding of "age appropriate design" of online services. We have argued that some of them, including default settings to protect users’ privacy, effective content moderation and ensuring that recommender systems don’t rely on the collection of behavioral data, are practices that would benefit all users.

But while the initial Commission draft document considered age checks only as a tool to determine users’ ages so that platforms could tailor their online experiences accordingly, the final guidelines go far beyond that. Crucially, the European Commission now seems to consider “measures restricting access based on age to be an effective means to ensure a high level of privacy, safety and security for minors on online platforms” (page 14). 

This is a surprising turn, as many in Brussels have considered social media bans like the one Australia passed (and still doesn’t know how to implement) disproportionate. Responding to mounting pressure from Member States like France, Denmark, and Greece to ban young people under a certain age from social media platforms, the guidelines contain an opening clause for national rules on age limits for certain services. According to the guidelines, the Commission considers such access restrictions  appropriate and proportionate where “union or national law, (...) prescribes a minimum age to access certain products or services (...), including specifically defined categories of online social media services”. This opens the door for different national laws introducing different age limits for services like social media platforms. 

It’s concerning that the Commission generally considers the use of age verification proportionate in any situation where a provider of an online platform identifies risks to minors’ privacy, safety, or security and those risks “cannot be mitigated by other less intrusive measures as effectively as by access restrictions supported by age verification” (page 17). This view risks establishing a broad legal mandate for age verification measures.

It is clear that such bans will do little in the way of making the internet a safer space for young people. Banning a particularly vulnerable group of users from accessing platforms lets the providers themselves off the hook: If it is enough for platforms like Instagram and TikTok to implement (comparatively cheap) age restriction tools, there are no incentives anymore to actually make their products and features safer for young people. Banning a certain user group changes nothing about problematic privacy practices, insufficient content moderation or business models based on the exploitation of people’s attention and data. And assuming that teenagers will always find ways to circumvent age restrictions, the ones that do will be left without any protections or age-appropriate experiences.

Svea Windwehr

Zero Knowledge Proofs Alone Are Not a Digital ID Solution to Protecting User Privacy

1 month ago

In the past few years, governments across the world have rolled out digital identification options, and now there are efforts encouraging online companies to implement identity and age verification requirements with digital ID in mind. This post is the first in a short series that will explain digital ID and the pending use case of age verification. The following posts will evaluate what real protections we can implement with current digital ID frameworks and discuss how better privacy and controls can keep people safer online.

Age verification measures are having a moment, with policymakers in the U.S. and around the world passing legislation mandating that online services and companies introduce technologies requiring people to verify their identities to access content deemed appropriate for their age. But for most people, physical government documentation like a driver's license, passport, or other ID is not a simple binary of having it or not. Physical ID systems involve hundreds of factors that impact their accuracy and validity, and everyday situations occur where identification attributes can change, or an ID becomes invalid or inaccurate or needs to be reissued: addresses change, driver’s licenses expire or have suspensions lifted, or temporary IDs are issued in lieu of obtaining permanent identification.

The digital ID systems currently being introduced potentially solve some problems like identity fraud for business and government services, but leave the holder of the digital ID vulnerable to the needs of the companies collecting such information. State and federal embrace of digital ID is based on claims of faster access, fraud prevention, and convenience. But with digital ID being proposed as a means of online verification, it is just as likely to block claims of public assistance and other services as to facilitate them. That’s why legal protections are as important as the digital IDs themselves. To add to this, in places that lack comprehensive data privacy legislation, verifiers are not heavily restricted in what they can and can’t ask the holder. In response, some privacy mechanisms have been suggested, but few have been made mandatory, such as the promise that a feature called Zero Knowledge Proofs (ZKPs) will easily solve the privacy aspects of sharing ID attributes.

Zero Knowledge Proofs: The Good News

The biggest selling point of modern digital ID offerings, especially to those seeking to solve mass age verification, is being able to incorporate and share something called a Zero Knowledge Proof (ZKP) so that a website or mobile application can verify ID information without the holder having to share the ID itself or the information explicitly on it. ZKPs provide a cryptographic way to not give something away, like your exact date of birth and age from your ID, instead offering a “yes-or-no” claim (like above or below 18) to a verifier requiring a legal age threshold. More specifically, two properties of ZKPs are “soundness” and “zero knowledge.” Soundness is appealing to verifiers and governments because it makes it hard for an ID holder to present forged information (the holder won’t know the “secret”). Zero knowledge can be beneficial to the holder, because they don’t have to share explicit information like a birth date, just cryptographic proof that said information exists and is valid. There have been recent announcements from major tech companies like Google, which plans to integrate ZKPs for age verification and “where appropriate in other Google products.”
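
To make that "yes-or-no" property concrete, here is a minimal, non-cryptographic sketch in Python of the data flow in such a check. Everything in it is illustrative: the AgeProof structure, the function names, and the placeholder proof bytes are assumptions for this example, not any real wallet or verifier API, and a deployed system would generate and verify an actual zero-knowledge proof tied to a credential signed by the ID issuer. What the sketch shows is which party sees what: the birth date stays on the holder's device, while the verifier receives only the claim and opaque proof material.

```python
# Minimal, non-cryptographic sketch of a ZKP-style age check data flow.
# The "proof" is a stand-in; a real system would use an actual zero-knowledge
# proof derived from an issuer-signed credential.

from dataclasses import dataclass
from datetime import date

@dataclass
class AgeProof:
    claim: str          # the only statement being proven, e.g. "age >= 18"
    proof_bytes: bytes  # opaque cryptographic material in a real system

def holder_generate_proof(birth_date: date, threshold: int, today: date) -> AgeProof:
    """Runs on the holder's device. The birth date never leaves this function."""
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )
    if age < threshold:
        raise ValueError("cannot honestly prove this claim")
    # Placeholder: a real implementation would compute a zero-knowledge proof
    # over a credential signed by the ID issuer, not return dummy bytes.
    return AgeProof(claim=f"age >= {threshold}", proof_bytes=b"<zkp placeholder>")

def verifier_check(proof: AgeProof, threshold: int) -> bool:
    """Runs on the verifier's side. Note the inputs: no birth date, no ID number."""
    if proof.claim != f"age >= {threshold}":
        return False
    # Placeholder: a real verifier would cryptographically check proof_bytes
    # against the issuer's public key; "soundness" means forged proofs fail here.
    return len(proof.proof_bytes) > 0

# The holder proves "over 18" without revealing their birth date.
proof = holder_generate_proof(date(2000, 5, 17), threshold=18, today=date.today())
print(verifier_check(proof, threshold=18))  # True, and that is all the verifier learns
```

Even in this idealized flow, the verifier still learns that a request came from your device at a particular moment, which is part of why ZKPs alone don't settle the privacy questions discussed below.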

Zero Knowledge Proofs: The Bad News

What ZKPs don’t do is mitigate verifier abuse or limit verifiers’ requests, such as over-asking for information they don’t need or asking you to prove your age over and over again. They don’t prevent websites or applications from collecting other kinds of observable personally identifiable information, like your IP address or other device information, while you interact with them.

ZKPs are a great tool for sharing less data about ourselves over time or in a one-time transaction. But this doesn’t do a lot about the data broker industry that already has massive, existing profiles of data on people. We understand that this was not what ZKPs for age verification were presented to solve. But it is still imperative to point out that utilizing this technology to share even more about ourselves online through mandatory age verification establishes a wider scope for sharing in an already saturated ecosystem of easily linked, existing personal information online. Going from presenting your physical ID maybe 2-3 times a week to potentially proving your age to multiple websites and apps every day online will make going online itself a burden at minimum and, for those who can’t obtain an ID, a barrier entirely.

Protecting The Way Forward

Mandatory age verification takes the potential privacy benefits of mobile ID and proposed ZKP solutions, then warps them into speech-chilling mechanisms.

Until the hard questions of power imbalances for potentially abusive verifiers and prevention of phoning home to ID issuers are addressed, these systems should not be pushed forward without proper protections in place. A more private, holder-centric ID requires more than just ZKPs as a catch-all for privacy concerns. The case of safety online is not solved through technology alone, and involves multiple, ongoing conversations. Yes, that sounds harder to do than age checks online for everyone. Maybe that’s why this is so tempting to implement. However, we encourage policymakers and lawmakers to look into what is best, and not what is easy.

Alexis Hancock

Canada’s Bill C-2 Opens the Floodgates to U.S. Surveillance

1 month ago

The Canadian government is preparing to give away Canadians’ digital lives—to U.S. police, to the Donald Trump administration, and possibly to foreign spy agencies.

Bill C-2, the so-called Strong Borders Act, is a sprawling surveillance bill with multiple privacy-invasive provisions. But the thrust is clear: it’s a roadmap to aligning Canadian surveillance with U.S. demands. 

It’s also a giveaway of Canadian constitutional rights in the name of “border security.” If passed, it will shatter privacy protections that Canadians have spent decades building. This will affect anyone using Canadian internet services, including email, cloud storage, VPNs, and messaging apps. 

A joint letter, signed by dozens of Canadian civil liberties groups and more than a hundred Canadian legal experts and academics, puts it clearly: Bill C-2 is “a multi-pronged assault on the basic human rights and freedoms Canada holds dear,” and “an enormous and unjustified expansion of power for police and CSIS to access the data, mail, and communication patterns of people across Canada.”

Setting The Stage For Cross-Border Surveillance 

Bill C-2 isn’t just a domestic surveillance bill. It’s a Trojan horse for U.S. law enforcement—quietly building the pipes to ship Canadians’ private data straight to Washington.

If Bill C-2 passes, Canadian police and spy agencies will be able to demand information about peoples’ online activities based on the low threshold of “reasonable suspicion.” Companies holding such information would have only five days to challenge an order, and blanket immunity from lawsuits if they hand over data. 

Police and CSIS, the Canadian intelligence service, will be able to find out whether you have an online account with any organization or service in Canada. They can demand to know how long you’ve had it, where you’ve logged in from, and which other services you’ve interacted with, with no warrant required.

The bill will also allow for the introduction of encryption backdoors. Forcing companies to surveil their customers is allowed under the law (see part 15), as long as these mandates don’t introduce a “systemic vulnerability”—a term the bill doesn’t even bother to define. 

The information gathered under these new powers is likely to be shared with the United States. Canada and the U.S. are currently negotiating a misguided agreement to share law enforcement information under the US CLOUD Act. 

The U.S. and U.K. put a CLOUD Act deal in place in 2020, and it hasn’t been good for users. Earlier this year, the U.K. Home Office ordered Apple to let it spy on users’ encrypted accounts. That security risk caused Apple to stop offering U.K. users certain advanced encryption features, and lawmakers and officials in the United States have raised concerns that the U.K.’s demands might have been designed to leverage its expanded CLOUD Act powers.

If Canada moves forward with Bill C-2 and a CLOUD Act deal, American law enforcement could demand data from Canadian tech companies in secrecy—no notice to users would be required. Companies could also expect gag orders preventing them from even mentioning they have been forced to share information with US agencies.

This isn’t speculation. Earlier this month, a Canadian government official told Politico that this surveillance regime would give Canadian police “the same kind of toolkit” that their U.S. counterparts have under the PATRIOT Act and FISA. The bill allows for “technical capability orders.” Those orders mean the government can force Canadian tech companies, VPNs, cloud providers, and app developers—regardless of where in the world they are based—to build surveillance tools into their products.

Under U.S. law, non-U.S. persons have little protection from foreign surveillance. If U.S. cops want information on abortion access, gender-affirming care, or political protests happening in Canada—they’re going to get it. The data-sharing won’t necessarily be limited to the U.S., either. There’s nothing to stop authoritarian states from demanding this new trove of Canadians’ private data, which Canadian law enforcement agencies will be secretly doling out. 

EFF joins the Canadian Civil Liberties Association, OpenMedia, researchers at Citizen Lab, and dozens of other Canadian organizations and experts in asking the Canadian federal government to withdraw Bill C-2. 

Further reading:

  • Joint letter opposing Bill C-2, signed by the Canadian Civil Liberties Association, OpenMedia, and dozens of other Canadian groups 
  • CCLA blog calling for withdrawal of Bill C-2
  • The Citizen Lab (University of Toronto) report on Canadian CLOUD Act deal
  • The Citizen Lab report on Bill C-2
  • EFF one-pager and blog on problems with the CLOUD Act, published before the bill was made law in 2018
Joe Mullin

You Shouldn’t Have to Make Your Social Media Public to Get a Visa

1 month ago

The Trump administration is continuing its dangerous push to surveil and suppress foreign students’ social media activity. The State Department recently announced an unprecedented new requirement that applicants for student and exchange visas must set all social media accounts to “public” for government review. The State Department also indicated that if applicants refuse to unlock their accounts or otherwise don’t maintain a social media presence, the government may interpret it as an attempt to evade the requirement or deliberately hide online activity.

The administration is penalizing prospective students and visitors for shielding their social media accounts from the general public or for choosing to not be active on social media. This is an outrageous violation of privacy, one that completely disregards the legitimate and often critical reasons why millions of people choose to lock down their social media profiles, share only limited information about themselves online, or not engage in social media at all. By making students abandon basic privacy hygiene as the price of admission to American universities, the administration is forcing applicants to expose a wealth of personal information to not only the U.S. government, but to anyone with an internet connection.

Why Social Media Privacy Matters

The administration’s new policy is a dangerous expansion of existing social media collection efforts. While the State Department has required since 2019 that visa applicants disclose their social media handles—a policy EFF has consistently opposed—forcing applicants to make their accounts public crosses a new line.

Individuals have significant privacy interests in their social media accounts. Social media profiles contain some of the most intimate details of our lives, such as our political views, religious beliefs, health information, likes and dislikes, and the people with whom we associate. Such personal details can be gleaned from vast volumes of data given the unlimited storage capacity of cloud-based social media platforms. As the Supreme Court has recognized, “[t]he sum of an individual’s private life can be reconstructed through a thousand photographs labeled with dates, locations, and descriptions”—all of which and more are available on social media platforms.

By requiring visa applicants to share these details, the government can obtain information that would otherwise be inaccessible or difficult to piece together across disparate locations. For example, while visa applicants are not required to disclose their political views in their applications, applicants might choose to post their beliefs on their social media profiles.

This information, once disclosed, doesn’t just disappear. Existing policy allows the government to continue surveilling applicants’ social media profiles even once the application process is over. And personal information obtained from applicants’ profiles can be collected and stored in government databases for decades.

What’s more, by requiring visa applicants to make their private social media accounts public, the administration is forcing them to expose troves of personal, sensitive information to the entire internet, not just the U.S. government. This could include various bad actors like identity thieves and fraudsters, foreign governments, current and prospective employers, and other third parties.

Those in applicants’ social media networks—including U.S. citizen family or friends—can also become surveillance targets by association. Visa applicants’ online activity is likely to reveal information about the users with whom they’re connected. For example, a visa applicant could tag another user in a political rant or post photos of themselves and the other user at a political rally. Anyone who sees those posts might reasonably infer that the other user shares the applicant’s political beliefs. The administration’s new requirement will therefore publicly expose the personal information of millions of additional people, beyond just visa applicants.

There are Very Good Reasons to Keep Social Media Accounts Private

An overwhelming number of social media users maintain private accounts for the same reason we put curtains on our windows: a desire for basic privacy. There are numerous legitimate reasons people choose to share their social media only with trusted family and friends, whether that’s ensuring personal safety, maintaining professional boundaries, or simply not wanting to share personal profiles with the entire world.

Safety from Online Harassment and Physical Violence

Many people keep their accounts private to protect themselves from stalkers, harassers, and those who wish them harm. Domestic violence survivors, for example, use privacy settings to hide from their abusers, and organizations supporting survivors often encourage them to maintain a limited online presence.

Women also face a variety of gender-based online harms made worse by public profiles, including stalking, sexual harassment, and violent threats. A 2021 study reported that at least 38% of women globally had personally experienced online abuse, and at least 85% of women had witnessed it. Women are, in turn, more likely to activate privacy settings than men.

LGBTQ+ individuals similarly have good reasons to lock down their accounts. Individuals from countries where their identity puts them in danger rely on privacy protections to stay safe from state action. People may also reasonably choose to lock their accounts to avoid the barrage of anti-LGBTQ+ hate and harassment that is common on social media platforms, which can lead to real-world violence. Others, including LGBTQ+ youth, may simply not be ready to share their identity outside of their chosen personal network.

Political Dissidents, Activists, and Journalists

Activists working on sensitive human rights issues, political dissidents, and journalists use privacy settings to protect themselves from doxxing, harassment, and potential political persecution by their governments.

Rather than protecting these vulnerable groups, the administration’s policy instead explicitly targets political speech. The State Department has given embassies and consulates a vague directive to vet applicants’ social media for “hostile attitudes towards our citizens, culture, government, institutions, or founding principles,” according to an internal State Department cable obtained by multiple news outlets. This includes looking for “applicants who demonstrate a history of political activism.” The cable did not specify what, exactly, constitutes “hostile attitudes.”

Professional and Personal Boundaries

People use privacy settings to maintain boundaries between their personal and professional lives. They share family photos, sensitive updates, and personal moments with close friends—not with their employers, teachers, professional connections, or the general public.

The Growing Menace of Social Media Surveillance

This new policy is an escalation of the Trump administration’s ongoing immigration-related social media surveillance. EFF has written about the administration’s new “Catch and Revoke” effort, which deploys artificial intelligence and other data analytic tools to review the public social media accounts of student visa holders in an effort to revoke their visas. And EFF recently submitted comments opposing a USCIS proposal to collect social media identifiers from visa and green card holders already living in the U.S., including when they submit applications for permanent residency and naturalization.

The administration has also started screening many non-citizens’ social media accounts for ambiguously defined “antisemitic activity,” and previously announced expanded social media vetting for any visa applicant seeking to travel specifically to Harvard University for any purpose.

The administration claims this mass surveillance will make America safer, but there’s little evidence to support this. By the government’s own previous assessments, social media surveillance has not proven effective at identifying security threats.

At the same time, these policies gravely undermine freedom of speech, as we recently argued in our USCIS comments. The government is using social media monitoring to directly target and punish foreign students and others for their digital speech, through visa denials or revocations. And the social media surveillance itself broadly chills free expression online—for citizens and non-citizens alike.

In defending the new requirement, the State Department argued that a U.S. visa is a “privilege, not a right.” But privacy and free expression should not be privileges. These are fundamental human rights, and they are rights we abandon at our peril.

Lisa Femia

We're Envisioning A Better Future

1 month ago

Whether you've been following EFF for years or just discovered us (hello!), you've probably noticed that our team is kind of obsessed with the ✨future✨.

From people soaring through the sky, to space cats, geometric unicorns, and (so many) mechas—we're always imagining what the future could look like when we get things right.

That same spirit inspired EFF's 35th anniversary celebration. And this year, members can get our new EFF 35 Cityscape t-shirt plus a limited-edition challenge coin with a monthly or annual Sustaining Donation!

Join EFF!

Start a convenient recurring donation today!

The EFF 35 Cityscape proposes a future where users are empowered to

  • Repair and tinker with their devices
  • Move freely without being tracked
  • Innovate with bold new ideas

And this future isn't far off—we're building it now.

EFF is pushing for right-to-repair laws across the country, exposing shady data brokers, and ensuring new technologies—like AI—have your rights in mind. EFF is determined, and with your help, we're not backing down.

We're making real progress—but we need your help. As a member-supported nonprofit, you are what powers this work.

Start a Sustaining Donation of $5/month or $65/year by August 11, and we'll thank you with a limited-edition EFF35 Challenge Coin as well as this year's Cityscape t-shirt!

Christian Romero

EFF to Court: Protect Our Health Data from DHS

1 month ago

The federal government is trying to use Medicaid data to identify and deport immigrants. So EFF and our friends at EPIC and the Protect Democracy Project have filed an amicus brief asking a judge to block this dangerous violation of federal data privacy laws.

Last month, the AP reported that the U.S. Department of Health and Human Services (HHS) had disclosed to the U.S. Department of Homeland Security (DHS) a vast trove of sensitive data obtained from states about people who obtain government-assisted health care. Medicaid is a federal program that funds health insurance for low-income people; it is partially funded and primarily managed by states. Some states, using their own funds, allow enrollment by non-citizens. HHS reportedly disclosed to DHS the Medicaid enrollee data from several of these states, including enrollee names, addresses, immigration status, and claims for health coverage.

In response, California and 19 other states sued HHS and DHS. The states allege, among other things, that these federal agencies violated (1) the data disclosure limits in the Social Security Act, the Privacy Act, and HIPAA, and (2) the notice-and-comment requirements for rulemaking under the Administrative Procedure Act (APA).

Our amicus brief argues that (1) disclosure of sensitive Medicaid data causes a severe privacy harm to the enrolled individuals, (2) the APA empowers federal courts to block unlawful disclosure of personal data between federal agencies, and (3) the broader public is harmed by these agencies’ lack of transparency about these radical changes in data governance.

A new agency agreement, recently reported by the AP, allows Immigration and Customs Enforcement (ICE) to access the personal data of Medicaid enrollees held by HHS’ Centers for Medicare and Medicaid Services (CMS). The agreement states: “ICE will use the CMS data to allow ICE to receive identity and location information on aliens identified by ICE.”

In the 1970s, in the wake of the Watergate and COINTELPRO scandals, Congress wisely enacted numerous laws to protect our data privacy from government misuse. This includes strict legal limits on disclosure of personal data within an agency, or from one agency to another. EFF sued over DOGE agents grabbing personal data from the U.S. Office of Personnel Management, and filed an amicus brief in a suit challenging ICE grabbing taxpayer data. We’ve also reported on the U.S. Department of Agriculture’s grab of food stamp data and DHS’s potential grab of postal data. And we’ve written about the dangers of consolidating all government information.

We have data protection rules for good reason, and these latest data grabs are exactly why.

You can read our new amicus brief here.

Adam Schwartz

Dating Apps Need to Learn How Consent Works

1 month ago

Staying safe whilst dating online should not be the responsibility of users—dating apps should be prioritizing our privacy by default, and laws should require companies to prioritize user privacy over their profit. But dating apps are taking shortcuts in safeguarding the privacy and security of users in favour of developing and deploying AI tools on their platforms, sometimes by using your most personal information to train their AI tools. 

Grindr has big plans for its gay wingman bot, Bumble launched AI Icebreakers, Tinder introduced AI tools to choose profile pictures for users, OKCupid teamed up with AI photo editing platform Photoroom to erase your ex from profile photos, and Hinge recently launched an AI tool to help users write prompts.

The list goes on, and the privacy harms are significant. Dating apps have built platforms that encourage people to be exceptionally open with sensitive and potentially dangerous personal information. At the same time, the companies behind those platforms collect vast amounts of intimate details about their customers, who are often just searching for compatibility and connection: everything from sexual preferences to precise location. This data falling into the wrong hands can—and has—come with unacceptable consequences, especially for members of the LGBTQ+ community. 

This is why corporations should provide opt-in consent for AI training data obtained through channels like private messages, and employ minimization practices for all other data. Dating app users deserve the right to privacy, and should have a reasonable expectation that the contents of conversations—from text messages to private pictures—will not be shared or used for any purpose to which opt-in consent has not been given. This includes the use of personal data for building AI tools, such as chatbots and picture selection tools. 
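
To make that principle concrete, here is a minimal, purely hypothetical sketch of what opt-in by default could look like in a training pipeline. The field and function names are invented for illustration and are not drawn from any real dating app’s code.

```python
# Hypothetical sketch: include a message in an AI training set only when its
# author has explicitly opted in. Field and function names are invented for
# illustration; this is not any real dating app's code.

from dataclasses import dataclass

@dataclass
class Message:
    user_id: str
    text: str
    consented_to_ai_training: bool = False   # opt-in must default to off

def training_corpus(messages: list[Message]) -> list[str]:
    """Return only the messages whose authors explicitly opted in."""
    return [m.text for m in messages if m.consented_to_ai_training]

msgs = [
    Message("a", "hey, how was your weekend?"),                 # excluded
    Message("b", "I opted in to help improve the app", True),   # included
]
print(training_corpus(msgs))   # ['I opted in to help improve the app']
```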

AI Icebreakers

Back in December 2023, Bumble introduced AI Icebreakers to the ‘Bumble for Friends’ section of the app to help users start conversations by providing them with AI-generated messages. Powered by OpenAI’s ChatGPT, the feature was deployed in the app without ever asking for users’ consent. Instead, the company presented users with a pop-up upon entering the app that nudged them to click ‘Okay,’ and that reappeared every time the app was reopened until they finally relented and tapped ‘Okay.’

Obtaining user data without explicit opt-in consent is bad enough. But Bumble has taken this even further by sharing personal user data from its platform with OpenAI to feed into the company’s AI systems. By doing this, Bumble has forced its AI feature on millions of users in Europe—without their consent but with their personal data.

In response, European nonprofit noyb recently filed a complaint with the Austrian data protection authority over Bumble’s violation of its transparency obligations under Article 5(1)(a) GDPR. In its filing, noyb flagged concerns about Bumble’s data sharing with OpenAI, which allowed the company to generate an opening message based on information users shared on the app. 

In its complaint, noyb specifically alleges that Bumble: 

  • Failed to provide information about the processing of personal data for its AI Icebreaker feature 
  • Confused users with a “fake” consent banner
  • Lacks a legal basis under Article 6(1) GDPR as it never sought user consent and cannot legally claim to base its processing on legitimate interest 
  • Can only process sensitive data—such as data involving sexual orientation—with explicit consent per Article 9 GDPR
  • Failed to adequately respond to the complainant’s access request, regulated through Article 15 GDPR.
AI Chatbots for Dating

Grindr recently launched its AI wingman. The feature operates like a chatbot and currently keeps track of favorite matches and suggests date locations. In the coming years, Grindr plans for the chatbot to send messages to other AI agents on behalf of users, and make restaurant reservations—all without human intervention. This might sound great: online dating without the time investment? A win for some! But privacy concerns remain. 

The chatbot is being built in collaboration with a third-party company called Ex-Human, which raises concerns about data sharing. Grindr has said that its users’ personal data will remain on its own infrastructure, which Ex-Human does not have access to, and that users will be “notified” when AI tools become available on the app. The company also said that it will ask users for permission to use their chat history for AI training. But AI data poses privacy risks that do not seem fully accounted for, particularly in places where it’s not safe to be outwardly gay. 

In building this ‘gay chatbot,’ Grindr’s CEO said one of its biggest limitations was preserving user privacy. It’s good that the company is cognizant of these harms, particularly because it has a terrible track record of protecting user privacy and was recently sued for allegedly revealing the HIV status of users. Further, direct messages on Grindr are stored on the company’s servers, where you have to trust they will be secured, respected, and not used to train AI models without your consent. Given Grindr’s poor record of respecting user consent and autonomy on the platform, users need stronger protections and guardrails for their personal data and privacy than are currently provided—especially for AI tools being built by third parties. 

AI Picture Selection  

In the past year, Tinder and Bumble have both introduced AI tools to help users choose better pictures for their profiles. Tinder’s AI-powered feature, Photo Selector, requires users to upload a selfie, after which its facial recognition technology identifies the person in their camera roll images. The Photo Selector then chooses a “curated selection of photos” directly from users’ devices based on Tinder’s “learnings” about good profile images. Users are not told what parameters drive those choices, nor is there a separate privacy policy addressing the potential collection of biometric data or the collection, storage, and sale of camera roll images. 

The Way Forward: Opt-In Consent for AI Tools and Consumer Privacy Legislation 

Putting users in control of their own data is fundamental to protecting individual and collective privacy. We all deserve the right to control how our data is used and by whom. And when it comes to data like profile photos and private messages, all companies should require opt-in consent before processing that data for AI. Finding love should not involve such a privacy-impinging tradeoff.

At EFF, we’ve also long advocated for the introduction of comprehensive consumer privacy legislation to limit the collection of our personal data at its source and prevent retained data from being sold or given away, breached by hackers, disclosed to law enforcement, or used to manipulate a user’s choices through online behavioral advertising. This would help protect users on dating apps, because reducing the amount of data collected also prevents its later use for things like building AI tools and training AI models. 

The privacy options at our disposal may seem inadequate to meet the difficult moments ahead of us, especially for vulnerable communities, but these steps are essential to protecting users on dating apps. We urge companies to put people over profit and protect privacy on their platforms.

Paige Collings

When Your Power Meter Becomes a Tool of Mass Surveillance

1 month ago

Simply using extra electricity to power some Christmas lights or a big fish tank shouldn’t bring the police to your door. In fact, in California, the law explicitly protects the privacy of power customers, prohibiting public utilities from disclosing precise “smart” meter data in most cases. 

Despite this, Sacramento’s power company and law enforcement agencies have been running an illegal mass surveillance scheme for years, using our power meters as home-mounted spies. The Electronic Frontier Foundation (EFF) is seeking to end Sacramento’s dragnet surveillance of energy customers and has asked for a court order to stop this practice for good.

For a decade, the Sacramento Municipal Utility District (SMUD) has been searching through all of its customers’ energy data and has passed on more than 33,000 tips about supposedly “high” usage households to police. Ostensibly looking for homes growing illegal amounts of cannabis, SMUD analysts have admitted that such “high” power usage could come from houses using air conditioning or heat pumps, or simply from being large. And the threshold of so-called “suspicion” has steadily dropped, from 7,000 kWh per month in 2014 to just 2,800 kWh a month in 2023. One SMUD analyst admitted that they themselves “used 3500 [kWh] last month.”

This scheme has targeted Asian customers. SMUD analysts deemed one home suspicious because it was “4k [kWh], Asian,” and another suspicious because “multiple Asians have reported there.” Sacramento police sent accusatory letters in English and Chinese, but no other language, to residents who used above-average amounts of electricity.

In 2022, EFF and the law firm Vallejo, Antolin, Agarwal, Kanter LLP sued SMUD and the City of Sacramento, representing the Asian American Liberation Network and two Sacramento County residents. One is an immigrant from Vietnam. Sheriff’s deputies showed up unannounced at his home, falsely accused him of growing cannabis based on an erroneous SMUD tip, demanded entry for a search, and threatened him with arrest when he refused. He has never grown cannabis; rather, he uses more electricity than average because of a spinal injury.

Last week, we filed our main brief explaining how this surveillance program violates the law and why it must be stopped. California’s state constitution bars unreasonable searches. This type of dragnet surveillance — suspicionless searches of entire zip codes’ worth of customer energy data — is inherently unreasonable. Additionally, a state statute generally prohibits public utilities from sharing such data. As we write in our brief, Sacramento’s mass surveillance scheme does not qualify for any of the narrow exceptions to this rule. 

Mass surveillance violates the privacy of many individuals, as police without individualized suspicion seek (possibly non-existent) evidence of some kind of offense by some unknown person. As we’ve seen time and time again, innocent people inevitably get caught in the dragnet. For decades, EFF has been exposing and fighting these kinds of dangerous schemes. We remain committed to protecting digital privacy, whether it’s being threatened by national governments – or your local power company.

Related Cases: Asian American Liberation Network v. SMUD, et al.
Hudson Hongo

EFF to Court: The DMCA Didn't Create a New Right of Attribution, You Shouldn't Either

1 month 1 week ago

Amid a wave of lawsuits targeting how AI companies use copyrighted works to train large language models that generate new works, a peculiar provision of copyright law is suddenly in the spotlight: Section 1202 of the Digital Millennium Copyright Act (DMCA). Section 1202 restricts intentionally removing or changing copyright management information (CMI), such as a signature on a painting or attached to a photograph. Passed in 1998, the rule was supposed to help rightsholders identify potentially infringing uses of their works and encourage licensing.

OpenAI and Microsoft used code from GitHub as part of the training data for their LLMs, along with billions of other works. A group of anonymous GitHub contributors sued, arguing that those LLMs generated new snippets of code that were substantially similar to theirs—but with the CMI stripped. Notably, they did not claim that the new code was copyright infringement—they are relying solely on Section 1202 of the DMCA. Their problem? The generated code is different from their original work, and courts across the U.S. have adopted an “identicality rule,” on the theory that Section 1202 is supposed to apply only when CMI is removed from existing works, not when it’s simply missing from a new one.

It may sound like an obscure legal question, but the outcome of this battle—currently before the Ninth Circuit Court of Appeals—could have far-reaching implications beyond generative AI technologies. If the rightsholders were correct, Section 1202 would effectively create a freestanding right of attribution, imposing potential liability even for non-infringing uses, such as fair use, if those new uses simply omit the CMI. While many fair users might ultimately escape liability under other limitations built into Section 1202, the looming threat of litigation, backed by the risk of high and unpredictable statutory penalties, would be enough to pressure many defendants to settle. Indeed, an entire legal industry of “copyright trolls” has emerged to exploit this dynamic, with no corresponding benefit to creativity or innovation.

Fortunately, as we explain in a brief filed today, the text of Section 1202 doesn’t support such an expansive interpretation. The provision repeatedly refers to “works” and “copies of works”—not “substantially similar” excerpts or new adaptations—and its focus on “removal or alteration” clearly contemplates actions taken with respect to existing works, not new ones. Congress could have chosen otherwise and written the law differently. Wisely it did not, thereby ensuring that rightsholders couldn’t leverage the omission of CMI to punish or unfairly threaten otherwise lawful re-uses of a work.

Given the proliferation of copyrighted works in virtually every facet of daily life, the last thing any court should do is give rightsholders a new, freestanding weapon against fair uses. As the Supreme Court once observed, copyright is a “tax on readers for the purpose of giving a bounty to writers.” That tax—including the expense of litigation—can be an important way to encourage new creativity, but it should not be levied unless the Copyright Act clearly requires it.

Corynne McSherry

California A.B. 412 Stalls Out—A Win for Innovation and Fair Use

1 month 1 week ago

A.B. 412, the flawed California bill that threatened small developers in the name of AI “transparency,” has been delayed and turned into a two-year bill. That means it won’t move forward in 2025—a significant victory for innovation, freedom to code, and the open web.

EFF opposed this bill from the start. A.B. 412 tried to regulate generative AI, not by looking at the public interest, but by mandating training data “reading lists” designed to pave the way for new copyright lawsuits, many of which are filed by large content companies. 

Transparency in AI development is a laudable goal. But A.B. 412 failed to offer a fair or effective path to get there. Instead, it gave companies large and small the impossible task of differentiating between what content was copyrighted and what wasn’t—with severe penalties for anyone who fell short. Only the largest AI companies could have shouldered that burden; smaller and non-commercial developers who might want to tweak or fine-tune AI systems for the public good would have been frozen out. 

The most interesting work in AI won’t necessarily come from the biggest companies. It will come from small teams fine-tuning models for accessibility and privacy, and building tools that identify AI harms. And some of the most valuable work will be done using source code under permissive licenses. 

A.B. 412 ignored those facts, and would have punished some of the most worthwhile projects. 

The Bill Blew Off Fair Use Rights

The question of whether—and how much—AI training qualifies as fair use is being actively litigated right now in federal courts. And so far, courts have found much of this work to be fair use. In a recent landmark AI case, Bartz v. Anthropic, for example, a federal judge found that AI training work is “transformative—spectacularly so.” He compared it to how search engines copy images and text in order to provide useful search results to users.

Copyright is federally governed. When states try to rewrite the rules, they create confusion—and more litigation that doesn’t help anyone.

If lawmakers want to revisit AI transparency, they need to do so without giving rights-holders a tool to weaponize copyright claims. That means rejecting A.B. 412’s approach—and crafting laws that protect speech, competition, and the public’s interest in a robust, open, and fair AI ecosystem. 

Joe Mullin

Amazon Ring Cashes in on Techno-Authoritarianism and Mass Surveillance

1 month 1 week ago

Ring founder Jamie Siminoff is back at the helm of the surveillance doorbell company, and with him is the surveillance-first-privacy-last approach that made Ring one of the most maligned tech devices. Not only is the company reintroducing new versions of old features which would allow police to request footage directly from Ring users, it is also introducing a new feature that would allow police to request live-stream access to people’s home security devices. 

This is a bad, bad step for Ring and the broader public. 

Ring is rolling back many of the reforms it’s made in the last few years by easing police access to footage from millions of homes in the United States. This is a grave threat to civil liberties. After all, police have used Ring footage to spy on protestors and have obtained footage without a warrant or the consent of the user. It is easy to imagine that law enforcement officials will use their renewed access to Ring information to find people who have had abortions or to track down people for immigration enforcement.

Siminoff has announced in a memo seen by Business Insider that the company will now be reimagined from the ground up to be “AI first”—whatever that means for a home security camera that lets you see who is ringing your doorbell. We fear that this may signal the introduction of video analytics or face recognition to an already problematic surveillance device. 

It was also reported that employees at Ring will have to show proof that they use AI in order to get promoted. 

Not content with introducing new bad features, Ring is also rolling back some of the necessary reforms it has made: it is partnering with Axon to build a new tool that would allow police to request Ring footage directly from users, and would also allow users to consent to letting police livestream directly from their devices. 

After years of serving as the eyes and ears of police, the company was compelled by public pressure to make a number of necessary changes. They introduced end-to-end encryption, they ended their formal partnerships with police which were an ethical minefield, and they ended their tool that facilitated police requests for footage directly to customers. Now they are pivoting back to being a tool of mass surveillance. 

Why now? It is hard to believe the company is betraying the trust of its millions of customers in the name of “safety” when violent crime in the United States is reaching near-historically low levels. It’s probably not about their customers—the FTC had to compel Ring to take its users’ privacy seriously. 

No, this is most likely about Ring cashing in on the rising tide of techno-authoritarianism, that is, authoritarianism aided by surveillance tech. Too many tech companies want to profit from our shrinking liberties. Google likewise recently ended an old ethical commitment that prohibited it from profiting off of surveillance and warfare. Companies are locking down billion-dollar contracts by selling their products to the defense sector or police.

Shame on Ring.

Matthew Guariglia

We Support Wikimedia Foundation’s Challenge to UK’s Online Safety Act

1 month 1 week ago

The Electronic Frontier Foundation and ARTICLE 19 strongly support the Wikimedia Foundation’s legal challenge to the categorization regulations of the United Kingdom’s Online Safety Act.

The Foundation – the non-profit that operates Wikipedia and other Wikimedia projects – announced its legal challenge earlier this year, arguing that the regulations endanger Wikipedia and the global community of volunteer contributors who create the information on the site. The High Court of Justice in London will hear the challenge on July 22 and 23.

EFF and ARTICLE 19 agree with the Foundation’s argument that, if enforced, the Category 1 duties (the OSA’s most stringent obligations) would undermine the privacy and safety of Wikipedia’s volunteer contributors, expose the site to manipulation, and divert essential resources from protecting people and improving the site. For example, because the law requires Category 1 services to allow users to block all unverified users from editing any content they post, it effectively requires the Foundation to verify the identity of many Wikipedia contributors. That compelled verification undermines the privacy that keeps the site’s volunteers safe.

Wikipedia is the world’s most trusted and widely used encyclopedia, with users across the world accessing its wealth of information and participating in free information exchange through the site. The OSA must not be allowed to diminish it or jeopardize the volunteers on whom it depends.

Beyond the issues raised in Wikimedia’s lawsuit, EFF and ARTICLE 19 emphasize that the Online Safety Act poses a serious threat to freedom of expression and privacy online, both in the U.K. and globally. Several key provisions of the law become operational July 25, and some companies already are rolling out age-verification mechanisms which undermine free expression and privacy rights of both adults and minors.

David Greene

Radio Hobbyists, Rejoice! Good News for LoRa & Mesh

1 month 1 week ago

A set of radio devices and technologies is opening the doorway to new and revolutionary forms of communication. They have the potential to break down our over-reliance on traditional network hierarchies, presenting collaborative alternatives where resistance to censorship, control, and surveillance is baked into the network topology itself. Here, we look at a few of these technologies and what they might mean for the future of networked communications.

The idea of what is broadly referred to as mesh networking isn’t new: the resilience and scalability of mesh technology has seen it adopted in router and IoT protocols for decades. What’s new is the availability of cheap devices that can be used without a radio license to communicate over (relatively) long range, or LOng RAnge, hence the moniker LoRa.

Although using different operating frequencies in different countries, LoRa works in essentially the same way everywhere. It uses Chirp Spread Spectrum to broadcast digital communications across a physical landscape, with a range of several kilometers in the right environmental conditions. When other capable devices pick up a signal, they can then pass it along to other nodes until the message reaches its destination—all without relying on a single centralized host. 

These communications have a very low bit-rate—often just a few kbps (kilobits per second) at long range—and use very little power. You won’t be browsing the web or streaming video over LoRa, but it is useful for sending messages in a wide range of situations where traditional infrastructure is lacking or intermittent, and where communication with others over dispersed or changing physical terrain is essential. For instance, a growing body of research is showing how Search and Rescue (SAR) teams can greatly benefit from the use of LoRa, specifically when coupled with GPS sensors, and especially when complemented by line-of-sight LoRa repeaters.
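
To put rough numbers on that low bit-rate claim, the raw LoRa data rate can be estimated from the spreading factor (SF), bandwidth (BW), and coding rate (CR) using the standard approximation Rb = SF x (BW / 2^SF) x CR. The sketch below uses illustrative settings rather than any particular radio’s configuration, and it ignores preamble, header, and duty-cycle overhead.

```python
# Rough LoRa raw bit-rate estimates for common settings.
# Rb = SF * (BW / 2**SF) * CR, where CR = 4 / (4 + cr_extra).
# Illustrative only: real throughput is lower once preambles, headers,
# and regulatory duty-cycle limits are taken into account.

def lora_bitrate(sf: int, bw_hz: int, cr_extra: int = 1) -> float:
    """Approximate raw bit rate in bits per second."""
    coding_rate = 4 / (4 + cr_extra)       # cr_extra=1 gives the common 4/5
    return sf * (bw_hz / (2 ** sf)) * coding_rate

for sf in (7, 9, 12):
    print(f"SF{sf}, 125 kHz, CR 4/5: ~{lora_bitrate(sf, 125_000):,.0f} bit/s")

# SF7  -> ~5,469 bit/s  (short range, relatively fast)
# SF9  -> ~1,758 bit/s
# SF12 ->   ~293 bit/s  (long range, very slow)
```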

Meshtastic

By far the most popular of these indie LoRa communication systems is Meshtastic. For hobbyists just getting started in the world of LoRa mesh communications, it is the easiest way to get up, running, and texting with others in your area who also happen to have a Meshtastic-enabled device. It also facilitates direct communication with other nodes using end-to-end encryption. And by default, a Meshtastic device will repeat messages for others if they originated 3 or fewer nodes (or “hops”) away. This means messages tend to propagate farther, with the power of the mesh collaborating to make delivery possible. As a single-application use of LoRa, it is an exciting experiment to take part in.
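
To illustrate the hop-limit idea, here is a toy model in Python. It is not Meshtastic’s actual code, and the five-node mesh is invented; it simply floods a message outward and stops rebroadcasting once the hop budget is spent.

```python
# Toy model of hop-limited flooding, loosely inspired by how Meshtastic
# rebroadcasts packets with a limited hop budget. Purely illustrative:
# real firmware also handles deduplication, timing, and radio scheduling.

from collections import deque

# Hypothetical mesh: node -> neighbors within radio range
MESH = {
    "alice": ["bob"],
    "bob": ["alice", "carol"],
    "carol": ["bob", "dan"],
    "dan": ["carol", "erin"],
    "erin": ["dan"],
}

def flood(origin: str, hop_limit: int = 3) -> set[str]:
    """Return the set of nodes a message reaches from `origin`."""
    reached = {origin}
    queue = deque([(origin, hop_limit)])
    while queue:
        node, hops_left = queue.popleft()
        if hops_left == 0:
            continue                     # hop budget spent: stop relaying
        for neighbor in MESH[node]:
            if neighbor not in reached:  # each node rebroadcasts only once
                reached.add(neighbor)
                queue.append((neighbor, hops_left - 1))
    return reached

print(flood("alice"))               # 3 hops reach alice, bob, carol, dan
print(flood("alice", hop_limit=4))  # one more hop also reaches erin
```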

Reticulum

While Reticulum is often put into the same category as Meshtastic, and although both enable communication over LoRa, the comparison breaks down quickly after that. Reticulum is not a single application, but an entire network stack that can be arbitrarily configured to connect through existing TCP/IP, the anonymizing I2P network, directly through a local WiFi connection, or through LoRa radios. The Reticulum network’s LXMF transfer protocol allows arbitrary applications to be built on top of it, such as messaging, voice calls, file transfer, and light-weight, text-only browsing. And that’s only to name a few applications which have already been developed—the possibilities are endless.

Although there are a number of community hubs run by Reticulum enthusiasts that you can join, you don’t have to join any of them: you can build your own Reticulum network with the devices and transports of you and your friends, locally over LoRa or remotely over traditional infrastructure, and bridge them as you please. Nodes are universally addressed and sovereign, meaning they are free to connect anywhere without losing the universally unique address that defines them. All communications between nodes are encrypted end-to-end, using a strong choice of cryptographic primitives. And although it’s been actively developed for over a decade, Reticulum recently reached the noteworthy milestone of a 1.0 release. It’s a very exciting ecosystem to be a part of, and we can’t wait to see the community develop it even further. A number of clients are available to start exploring.
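
For a feel of what that looks like in practice, below is a minimal sketch based on the Reticulum Python library’s introductory example for announcing a destination. The application name and aspect are placeholders, and which transport carries the traffic (LoRa radio, TCP, I2P, local WiFi) is chosen in the node’s Reticulum configuration rather than in code, so treat the project’s official manual as the authoritative reference.

```python
# Minimal sketch of bringing up a node and announcing a destination with the
# Reticulum Python library ("rns"). The app name and aspect are placeholders.
# Which transport carries traffic (LoRa, TCP, I2P, local WiFi) is set in the
# node's Reticulum configuration, not in this code.

import RNS

reticulum = RNS.Reticulum()      # start or attach to the local RNS instance
identity = RNS.Identity()        # generate a fresh cryptographic identity

destination = RNS.Destination(
    identity,
    RNS.Destination.IN,          # this destination receives traffic
    RNS.Destination.SINGLE,      # addressed to a single identity
    "example_app",               # placeholder application name
    "demo",                      # placeholder aspect
)

destination.announce()           # let the mesh learn a path to this node
```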

Resilient Infrastructure

On a more somber note, let’s face it: we live in an uncertain world. With the frequency of environmental disasters, political polarization, and infrastructure attacks increasing, the stability of networks we have traditionally relied upon is far from assured.

Yet even with the world as it is, developers are creating new communications networks that have the potential to help in the unexpected situations we might find ourselves in. Not only are these technologies built to be useful and resilient, they also empower individuals by circumventing censorship and platform control—giving people a way to empower each other through sharing resources.

In that way, this movement can be seen as a technological inheritor of the hopefulness and experimentation—and yes, fun!—that was so present in the early internet. These technologies offer a promising path forward for building our way out of tech dystopia.

Bill Budington

EFF and 80 Organizations Call on EU Policymakers to Preserve Net Neutrality in the Digital Networks Act

1 month 1 week ago

As the European Commission prepares an upcoming proposal for a Digital Networks Act (DNA), a growing network of groups is raising serious concerns about the resurgence of “fair share” proposals from major telecom operators. The original idea was to force certain companies to pay network usage fees to ISPs. We have said it before and we’ll say it again: there is nothing fair about this “fair share” proposal, which could undermine net neutrality and hurt consumers by changing how content is delivered online. Now the EU Commission is toying with an alternative idea: the introduction of a dispute resolution mechanism to foster commercial agreements between tech firms and telecom operators.

EFF recently joined a broad group of more than 80 signatories, from civil society organizations to audio-visual companies, in a joint statement aimed at preserving net neutrality in the DNA.

In the letter, we argue that the push to introduce a mandatory dispute resolution mechanism into EU law would pave the way for content and application providers (CAPs) to pay network fees for delivering traffic. These ideas, recycled from 2022, are being marketed as necessary for funding infrastructure, but the real cost would fall on the open internet, competition, and users themselves.

This isn't just about arcane telecom policy—it’s a battle over the future of the internet in Europe. If the DNA includes mechanisms that force payments from CAPs, we risk higher subscription costs, fewer services, and less innovation, particularly for European startups, creatives, and SMEs. Worse still, there’s no evidence of market failure to justify such regulatory intervention. Regulators like BEREC have consistently found that the interconnection market is functioning smoothly. What’s being proposed is nothing short of a power grab by legacy telecom operators looking to resurrect outdated, monopolistic business models. Europe has long championed an open, accessible internet—now’s the time to defend it.

Jillian C. York

🤕 A Surveillance Startup in Damage Control | EFFector 37.8

1 month 1 week ago

We're a little over halfway through the year! Which... could be good or bad depending on your outlook... but nevermind that—EFF is here to keep you updated on the latest digital rights news, and we've got you covered with an all-new EFFector!

With issue 37.8, we're covering a recent EFF investigation into AI-generated police reports, a secret deal to sell flight passenger data to the feds (thanks data brokers), and why mass surveillance cannot be fixed with a software patch. 

Don't forget to check out our audio companion to EFFector as well! We're interviewing staff about some of the important work that they're doing. This time, EFF's Associate Director of Activism Sarah Hamid explains the harms caused by ALPRs and what you can do to fight back. Listen now on YouTube or the Internet Archive.

Listen to EFFector

EFFECTOR 37.8 - A Surveillance Startup In Damage Control

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero

Podcast Episode: Finding the Joy in Digital Security

1 month 1 week ago

Many people approach digital security training with furrowed brows, as an obstacle to overcome. But what if learning to keep your tech safe and secure was consistently playful and fun? People react better to learning, and retain more knowledge, when they're having a good time. It doesn’t mean the topic isn’t serious – it’s just about intentionally approaching a serious topic with joy.

Privacy info. This embed will serve content from simplecast.com

(You can also find this episode on the Internet Archive and on YouTube.) 

That’s how Helen Andromedon approaches her work as a digital security trainer in East Africa. She teaches human rights defenders how to protect themselves online, creating open and welcoming spaces for activists, journalists, and others at risk to ask hard questions and learn how to protect themselves against online threats. She joins EFF’s Cindy Cohn and Jason Kelley to discuss making digital security less complicated, more relevant, and more joyful to real users, and encouraging all women and girls to take online safety into their own hands so that they can feel fully present and invested in the digital world. 

In this episode you’ll learn about:

  • How the Trump Administration’s shuttering of the United States Agency for International Development (USAID) has led to funding cuts for digital security programs in Africa and around the world, and why she’s still optimistic about the work
  • The importance of helping women feel safe and confident about using online platforms to create positive change in their communities and countries
  • Cultivating a mentorship model in digital security training and other training environments
  • Why diverse input creates training models that are accessible to a wider audience
  • How one size never fits all in digital security solutions, and how Dungeons & Dragons offers lessons to help people retain what they learn 

Helen Andromedon – a moniker she uses to protect her own security – is a digital security trainer in East Africa who helps human rights defenders learn how to protect themselves and their data online and on their devices. She played a key role in developing the Safe Sisters project, which is a digital security training program for women. She’s also a UX researcher and educator who has worked as a consultant for many organizations across Africa, including the Association for Progressive Communications and the African Women’s Development Fund

Resources: 

What do you think of “How to Fix the Internet?” Share your feedback here.

Transcript

HELEN ANDROMEDON: I'll say it bluntly. Learning should be fun. Even if I'm learning about your tool, maybe you design a tutorial that is fun for me to read through, to look at. It seems like that helps with knowledge retention.
I've seen people responding to activities and trainings that are playful. And yet we are working on a serious issue. You know, we are developing an advocacy campaign, it's a serious issue, but we are also having fun.

CINDY COHN: That's Helen Andromedon talking about the importance of joy and play in all things, but especially when it comes to digital security training. I'm Cindy Cohn, the executive director of the Electronic Frontier Foundation.

JASON KELLEY: And I'm Jason Kelley, EFF's activism director. This is our podcast, How to Fix the Internet.

CINDY COHN: This show is all about envisioning a better digital world for everyone. Here at EFF, we often specialize in thinking about worst case scenarios and of course, jumping in to help when bad things happen. But the conversations we have here are an opportunity to envision the better world we can build if we start to get things right online.

JASON KELLEY: Our guest today is someone who takes a very active role in helping people take control of their digital lives and experiences.

CINDY COHN: Helen Andromedon - that's a pseudonym by the way, and a great one at that – is a digital security trainer in East Africa. She trains human rights defenders in how to protect themselves digitally. She's also a UX researcher and educator, and she's worked as a consultant for many organizations across Africa, including the Association for Progressive Communications and the African Women's Development Fund.
She also played a key role in developing the Safe Sisters project, which is a digital security training, especially designed for women. Welcome Helen. Thank you so much for joining us.

HELEN ANDROMEDON: Thanks for having me. I've been a huge fan of the tools that came out of EFF and working with Ford Foundation. So yeah, it's such a blast to be here.

CINDY COHN: Wonderful. So we're in a time when a lot of people around the world are thinking more seriously than ever about how to protect their privacy and security. and that's, you know, from companies, but increasingly from governments and many, many other potential bad actors.
You know, there's no one size fits all training, as we know. And the process of determining what you need to protect and from whom you need to protect it is different for everybody. But we're particularly excited to talk to you, Helen, because you know that's what you've been doing for a very long time. And we want to hear how you think about, you know, how to make the resources available to people and make sure that the trainings really fit them. So can you start by explaining what the Safe Sisters project is?

HELEN ANDROMEDON: It's a program that came out of a collaboration amongst friends, but friends who were also working in different organizations and also were doing trainings. In the past, what would have it would be, we would send out an application, Hey, there's a training going on. But there was a different number of women that would actually apply to this fellowship.
It would always be very unequal. So what we decided to do is really kind of like experimenting is say, what if we do a training but only invite, women and people who are activists, people who are journalists, people who are really high risk, and give them a space to ask those hard questions because there are so many different things that come out of suffering online harassment and going through that in your life, you, when you need to share it, sometimes you do need a space where you don't feel judged, where you can kind of feel free to engage in really, really traumatic topics. So this fellowship was created, it had this unique percentage of people that would apply and we started in East Africa.
I think now because of what has happened in the last I, I guess three months, it has halted our ability to run the program in as many. Regions that need it. Um, but Safe Sister, I think what I see, it is a tech community of people who are able to train others or help others solve a problem.
So what problems do, I mean, so for example, I, I think I left my, my phone in the taxi. So what do I do? Um, how do I find my phone? What happens to all my data? Or maybe it could be a case of online harassment where there's some sort of revenge from the other side, from the perpetrator, trying to make the life of the victim really, really difficult at the moment.
So we needed people to be able to have solutions available to talk about and not just say, okay, you are a victim of harassment. What should I do? There's nothing to do, just go offline. No, we need to respond, but many of us don't have the background in ICT, uh, for example, in my region. I think that it is possible now to get a, a good background in IT or ICT related courses, um, up to, um, you know, up to PhD level even.
But sometimes I've, in working with Safe Sister, I've noticed that even such people might not be aware of the dangers that they are facing. Even when they know OPSEC and they're very good at it. They might not necessarily understand the risks. So we decided to keep working on the content each year, every time we can run the program, work on the content: what are the issues, currently, that people are facing? How can we address them through an educational fellowship, which is very, very heavy on mentorship. So mentorship is also a thing that we put a lot of stress on because again, we know that people don't necessarily have the time to take a course or maybe learn about encryption, but they are interested in it. So we want to be able to serve all the different communities and the different threat models that we are seeing.

CINDY COHN: I think that's really great and I, I wanna, um, drill in a couple of things. So first thing you, uh, ICT, internet Communications Technologies. Um, but what I, uh, what I think is really interesting about your approach is the way the fellowship works. You know, you're kind of each one teach one, right?
You're bringing in different people from communities. And if you know, most of us, I think as a, as a model, you know, finding a trusted person who can give you good information is a lot easier than going online and finding information all by yourself. So by kind of seeding these different communities with people who've had your advanced training, you're really kind of able to grow who gets the information. Is that part of the strategy to try to have that?

HELEN ANDROMEDON: It's kind of like two ways. So there is the way where we, we want people to have the information, but also we want people to have the correct information.
Because there is so much available, you can just type in, you know, into your URL and say, is this VPN trusted? And maybe you'll, you'll find a result that isn't necessarily the best one.
We want people to be able to find the resources that are guaranteed by, you know, EFF or by an organization that really cares about digital rights.

CINDY COHN: I mean, that is one of the problems of the current internet. When I started out in the nineties, there just wasn't information. And now really the role of organizations like yours is sifting through the misinformation, the disinformation, just the bad information to really lift up, things that are more trustworthy. It sounds like that's a lot of what you're doing.

HELEN ANDROMEDON: Yeah, absolutely. How I think it's going, I think you, I mean, you mentioned that it's kind of this cascading wave of, you know, knowledge, you know, trickling down into the communities. I do hope that's where it's heading.
I do see people reaching out to me who have been at Safe Sisters, um, asking me, yo Helen, which training should I do? You know, I need content for this. And you can see that they're actively engaging still, even though they went through the fellowship like say four years ago. So that I think is like evidence that maybe it's kind of sustainable, yeah.

CINDY COHN: Yeah. I think so. I wanted to drill down on one other thing you said, which is, of course, what I think of as the funding cuts – the Trump administration cutting off money for a lot of the programs like Safe Sisters around the world. And I know there are other countries in Europe that are also cutting support for these kinds of programs.
Is that what you mean in terms of what's happened in the last few months?

HELEN ANDROMEDON: Yeah. Um, it's really turned around our expectations for the next couple of years, it really has. But there's also an opportunity for growth, to rethink what kind of proposals to develop. Sometimes these things are just a way to change.

CINDY COHN: I wanna ask one more question – I really will let Jason ask some at some point – but, um, what does the world look like if we get it right? If your work is successful, and more broadly the internet is really supporting these kinds of communities, what does it look like for the kind of women and human rights activists who you work with?

HELEN ANDROMEDON: I think that most of them would feel more confident to use those platforms for their work. So that gives it an extra boost, because then they can be creative about their actions. Maybe they are demonstrating against an illegal and inhumane act that has passed through parliament.
So, online platforms: if it could be our right, and if we could feel online the way we feel in the real world – there's a virtual world and a real world, where you're walking on the road and you know you can touch things.
If we felt ownership of our online spaces, then you feel confident to create something that maybe can change things. So in that ideal world, it would be that women can use online spaces to really, really boost change in their communities, and have others do so as well, because you can teach others and you inspire others to do so. So it pops up everywhere and really makes things go and change.
I think also, for my context, because I've worked with people in very repressive regimes, the internet can be taken away from you. So it's things like the shutdowns – it's just ripped away from you. You can no longer search: oh, I have this funny thing on my dog, what should I do, can I search for the information? Oh, you don't have the internet. What? It's taken away from you. So if we could have a way where the infrastructure of the internet was no longer in the hands of just a few people – and there is a way to do that, which I've recently learned from speaking to people who work on these things. Maybe there's a way of connecting to the internet that goes on the main highway, which doesn't go through the government's roadblocks, and maybe there's a kind of technology we could use that could make that possible. So there is a way, and in that ideal world it would be that, so that you can always find out what that color is and find out very important things for your life. Because the internet is for that – it's for information.
Online harassment – that one, I really would love to see the end of that. Also acknowledging that it's something that has shown us, as human beings, something that we do, which is not being very kind to others. So it's a difficult thing. What I would like to see is that in this future, we have researched it, we have very good data, we know how to avoid it completely. And then we also draw the parameters, so that when something happens to you that doesn't make you feel good – which is somebody harassing you – you are also heard. Because in some contexts, even when you go to report to the police and you say, look, this happened to me, sometimes they don't take it seriously. But because of what happens to you after, and the trauma – yes, it is important, and we need to recognize that. So it would be a world where you can see it, you can stop it.

CINDY COHN: I hear you, and what I hear is that the internet should be a place that's, you know, always available, and not subject to the whims of the government or the companies. There are technologies that can help do that, but we need to make them better and more widely available. That speaking out online is something you can do, and organizing online is something you can do. But also that you have real accountability for harassment that might come as a response. And that could be, you know, technically protecting people, but also, I think, that sounds more like a policy and legal thing, where you actually have resources to fight back if somebody misuses technology to try to harass you.

HELEN ANDROMEDON: Yeah, absolutely. Because right now the cases get to a point where it seems to depend on the whim of the person in charge – maybe if they go to report it, the case can just be dropped, or it's not taken seriously. And then people do harm to themselves also, which is on the extreme end, and which is something that really should not happen.

CINDY COHN: It shouldn't happen, and I think it is something that disproportionately affects women who are online, or marginalized people. Your vision of an internet where people can freely gather together and organize and speak is actually available to a lot of people around the world, but some people really don't experience that without tremendous blowback.
And that's, um, you know, that's some of the space that we really need to clear out so that it's a safe space to organize and make your voice heard for everybody, not just, you know, a few people who are already in power or have the, you know, the technical ability to protect themselves.

JASON KELLEY: We really want to, I think, help talk to the people who listen to this podcast and really understand and are building a better future and a better internet. You know, what kinds of things have you seen when you train people? What are you thinking about when you're building these resources and these curriculums? What things come up over and over that might surprise people who aren't as familiar with the problems you've seen or the issues you've experienced?

HELEN ANDROMEDON: Yeah, I mean, hmm, there could be a couple of reasons, I think. In my view, the thing that comes up in trainings is, of course, hesitation: there's this new thing and I'm supposed to download it. What is it going to do to my laptop?
My God, I share this laptop. What is it going to do? Now they tell me, do this, do this in 30 minutes, and then we have to break for lunch. So that's not enough time to actually learn, because then you have to practice – you could throw in a practice session, but then you leave this person, and that person, as is normal, forgets. Very normal. It happens.
So the issue sometimes is that kind of hesitation to play with the tech toys. And I think that it's good, because we are cautious and we want to protect this device that was really expensive to get. Maybe it's borrowed, maybe it's secondhand.
I won't get into, you know, so many things that come up in our day to day because of the cost of things.

JASON KELLEY: You mentioned like what do you do when you leave your phone in a taxi? And I'll say that, you know, a few days ago I couldn't find my phone after I went somewhere and I completely freaked out. I know what I'm doing usually, but I was like, okay, how do I turn this thing off?
And I'm wondering, that taxi scenario – is that a common one? Are there others that people experience? I know you mentioned internet shutoffs, which happen far too frequently, but a lot of people probably aren't familiar with them. Is that a common scenario you have to figure out what to do about? What are the things that pop up occasionally that people listening to this might not be as aware of?

HELEN ANDROMEDON: So losing a device, or a device malfunctioning, is like the top one, and internet shutdowns are down here, because they're periodic. Usually it's when there's an election cycle, that's when it happens. After that, you sometimes have almost a hundred percent of access back. So I would put losing a device, destroying a device, at the top.
Okay, now what do I do for the case of the phone in the taxi? First of all, the taxi is probably crowded, so you think that phone will most likely not be returned.
And maybe there are intimate photos – there's a lot, there's a lot that, you know, can be on there. So then, if this person doesn't have a great password – which is usually the case, because there is not so much emphasis when you buy a device, there isn't so much emphasis on, hey, take time to make a strong password. Now it's better: there are better products available that teach you about device security as you are setting up the phone. But usually you buy it, you switch it on, so you don't really have the knowledge that this is a better password than that, or maybe, don't forget to put a password, for example.
So that person responding to that case would now be asking if they had maybe the Find My Device app, if we could use that, if that could work – as you were saying, there's a possibility that it might ping in another place and be noticed, and for sure taken away. So it has to be kind of a backwards learning journey, to say, let's start from ground zero.

JASON KELLEY: Let's take a quick moment to say thank you to our sponsor. How to Fix the Internet is supported by the Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology, enriching people's lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.
We also wanna thank EFF members and donors. You are the reason we exist.
You can become a member for just $25 and for a little more, you can get some great, very stylish gear. The more members we have, the more power we have in state houses, courthouses and on the streets.
EFF has been fighting for digital rights for decades, and that fight is bigger than ever. So please, if you like what we do, go to eff.org/pod to donate.
We also wanted to share that our friend Cory Doctorow has a new podcast. Listen to this.  [Who Broke the Internet trailer]
And now back to our conversation with Helen Andromedon.

CINDY COHN: So how do you find the people who come and do the trainings? How do you identify people who would be good fellows or who need to come in to do the training? Because I think that's its own problem, especially since, you know, Safe Sisters is very spread out among multiple countries.

HELEN ANDROMEDON: Right now it has been a combination of partners saying, Hey, we have an idea, and then seeing where the issues are.
As you know, a fellowship needs resources. So if there is interest, because of the methodology – let's say it's a partner in Madagascar who is working on digital rights. They would like to make sure that their community – maybe staff, and maybe people they've given sub-grants to – that entire community, they want to make sure it is safe, that they can communicate safely, nothing is leaked out, and they can work well. And they're looking for: how do we do this? We need trainers, we need content, we need somebody who understands learning, separate from the resources. So I think the Safe Sisters Fellowship is also something that you can pick up and design for whatever context you have.
I think that has made it stronger. You take it, you make it your own. So it has happened like that: a partner has an interest, we have the methodology, we have the trainers, and we have the tools as well. And that's how it happens.

CINDY COHN: What I'm hearing here is that there's already a pretty strong network of partners across Africa and the communities you serve. There are groups – and we know this from EFF, because we hear from them as well – there's actually a pretty well-developed set of groups doing digital activism, and human rights defenders using technology, already across Africa and the rest of the communities. And you have this network, and you are the go-to people when people in the network realize they need a higher level of security thinking and training than they had. Does that sound right?

HELEN ANDROMEDON: Yeah, that sounds right. A higher level of being aware. And usually it comes down to: how do we keep this information safe? Because we are having incidents. Yeah.

CINDY COHN: Do you have an incident that you could explain?

HELEN ANDROMEDON: Oh, um, in queer communities, say, an incident of an executive director being kidnapped. And we think it probably has to do with how influential they were and what kind of message they were sending. So it's apparent. And then, shortly after that incident, there's a break-in into the office space. Now that one is actually quite common, especially in the civic space. And if they were storing maybe case files, everything in hard copy – all the information was there, receipts, checks, payment details – that is very, very tragic in that case.
So in that case, because this incident had happened in multiple places, we decided to run a program for all the staff involved in their day to day. We could do it like that, and make sure that as a response to what happened, everybody gets some education. We have some quizzes, we have some tests, we have some community, we keep engaged, and maybe that would help. And yeah, they'll be more prepared in case it happens again.

CINDY COHN: Oh yeah. And this is such an old, old issue. You know, when we were doing the encryption fight in the nineties, we had stories of people in El Salvador and Guatemala where the office gets raided and the information gets in the hands of the government, whoever the opposition is, and then other people start disappearing and getting targeted too, because their identities are revealed in the information that gets seized. And that sounds like the very same pattern that you're still seeing.

HELEN ANDROMEDON: Yeah, there's a lot to consider for that case. Uh, cloud saving – we have to see if there's somebody who can host their server. It's, yeah, it's interesting for that case.

CINDY COHN: Yeah. I think it's an ongoing issue, and there are better tools than we had in the nineties, but people need to know about them, and actually using them is not easy. You have to actually think about it.

HELEN ANDROMEDON: Yeah, I don't know – I've seen a model that works. So if it's a tool, it's great, it's working well. I've seen it with, I think, the Tor Project, because the Tor Project has user communities. What it appears to be doing is engaging people with training – doing safety trainings – and then they get value from using your tool, because they get to have all this information, not only about your tool, but about safety. So that's a good model: build user communities and then get your tool used. I think this is also a problem.

CINDY COHN: Yeah. I mean, this is another traditional problem: the trainers will come in and they'll do a training, but then nobody really is trained well enough to continue to use the tool.
And I see you, you know, building networks and building community and also having, you know, enough time for people to get familiar with and use these tools so that they won't just drop it after the training's over. It sounds like you're really thinking hard about that.

HELEN ANDROMEDON: Yeah. Um, I think that we have many opportunities, but because the learning is so difficult to cultivate and we don't have the resources to make it long term, yes, you do risk having all the information forgotten. Yes.

JASON KELLEY: I wanna just quickly emphasize some of the scenarios, Cindy, that you've talked about, and Helen, that you just mentioned: potential break-ins, harassment, kidnapping. It's really awful, but I think this is one of the things that makes this kind of training so necessary. I know that this seems obvious to many people listening, and to the folks here, but it just needs emphasizing that these are serious issues. And that's why you can't make a one-size-fits-all training, because these are real problems that someone might not have to deal with in one country and might have a regular problem with in another. Is there a kind of difference that you can clarify about how you would train, for example, groups of women who are experiencing one thing when they need digital security advice or help, versus, let's say, human rights defenders? Is the training completely different, or is it really emphasizing the same things, like protecting your privacy, protecting your data, using certain tools, things like that?

HELEN ANDROMEDON: Yeah. Jason, let me first respond to your first comment, about the tools. One size fits all is obviously wrong. Maybe get more people of diverse backgrounds working on that tool, and they'll give you their opinion, because development is a process. You don't just develop a tool – you have time to change, modify, test. Do I use that? If you had somebody like that in the room, they would tell you; if you had two, that would be great, because now you have two different points of evidence. And keep mixing. And I know it's expensive – you have to do it one way, then get feedback, then do it another way. But I think, just do more of that. Um, yeah. Now, how do I train? The training isn't that different. There are some core concepts that we keep. So if I had, say, five days, I would do one or two days on the more technical concepts of digital safety, which everybody has to do: look, this is my device, this is how it works, this is how I keep it safe. This is my account, this is how it works, this is how I keep it safe.
And then, when you have more time, you can dive into the personas. Let's say it's a journalist – is there a resource for them? This is how you pull a resource and show it: is there a resource that identifies specific tools developed for journalists? Maybe there is – maybe there's something like a panic button that they need. So you start to put all these things together, and in the remaining time you can hone in on those differences.
Now for women, it would be… So if it's HRDs and it's mixed, I still would cover cyber harassment, because it affects everyone. For women it would be slightly different, because maybe we could go into self-defense, we could really hone in on the finer points of responding to online harassment, because for their case – because you did a threat model – it's more likely, because of their gender and because of the work that they do. So I think that would be how I would approach the two.

JASON KELLEY: And one quick thing that I want to mention that you brought up earlier is shared devices. There's a lot of solutionism in government, especially right now, with this sort of assumption that everyone has one device – if you just say everyone has their phone, everyone has their computer, you can, let's say, age-verify people. You can say, well, kids who use this phone can't go to this website, and adults who use this other phone can go to this website. And this is a regular issue we've seen, where there's not an awareness that people are buying secondhand devices a lot, people are sharing devices a lot.

HELEN ANDROMEDON: Yeah, absolutely. Shared devices are always the assumption, and then we do get a few people who have their own devices. And Jason, I just wanted to add one more factor that could be bad for the shared devices: because of the context and the regions that I'm in, you also have the additional cultural and religious norms, which sometimes mean you don't have liberty over your devices. So anybody at any time – if they're your spouse or your parent – they can just take it from you and demand that you let them in. So it's not necessarily that you could all have your own device; the access to that device can be shared.

CINDY COHN: So as you look at the world of, kind of, tools that are available, where are the gaps? Where would you like to see better tools or different tools or tools at all, um, to help protect and empower the communities you work with?

HELEN ANDROMEDON: We need a solution for the internet shutdowns, because sometimes it could have health repercussions – you could have a serious need and you don't have access to the internet. So I don't know, we need to figure that one out. The technology is there, as you mentioned before, but it needs to be more developed and tested. It would also be nice to have technology that responds to victims or gives victim advice. Now, I've seen interventions, case by case – many people are doing them now. They verify, then they help you with whatever, but that's a slow process.
You're processing the information, it's very traumatic, so you need good advice. You need to stay calm, think through your options, then make a plan, and then do the plan. So that's the kind of advice. Now, I think there are apps, but maybe I'm not using them, or maybe that means they're not well known as of now.
Yeah, but that's technology I would like to see. Then also, everything that is available – the good stuff – it's really good, it's really well written, and it's getting better: more visuals, more videos, more human-like interaction, not just text. And mind you, I'm a huge fan of text, like the GitHub text.
That's awesome. But sometimes, just for getting into the topic, you need a different kind of ticket. So I don't know if we can invest in that, but the content is really good.
Practice would be nice. So we need practice. How do we get practice? That's a question I would leave to you: how do you practice a tool on your own? It's good for you, but how do you practice it on your own? So it's things like that – helping the person onboard, building resources to help that transition. You want people to use it at scale.

JASON KELLEY: I wonder if you can talk a bit about that moment when you're training someone and you realize that they really get it. Maybe it's because it's fun, or maybe it's because they just sort of finally understand like, oh, that's how this works. Is that something, you know, I assume it's something you see a lot because you're clearly, you know, an experienced and successful teacher, but it's, it's just such a lovely moment when you're trying to teach someone

HELEN ANDROMEDON: …when trying to teach someone something. Yeah, I mean, I can't speak for everybody, but I'll speak for myself. There are some things that surprise me sitting in a class, in a workshop room, or reading a tutorial – watching how the internet works and reading about the cables, but also reading about electromagnetism. All those things were so different from what we were talking about, which is the internet and civil society, all that stuff. But that thing – the science of it, the way it is – for me, I think that it's enough, because it's really great.
But then, um. So say we are, we are doing a session on how the internet works in relation to internet shutdowns. Is it enough to just talk about it? Are we jumping from problem to solution, or can we give some time? So that the person doesn't forget, can we give some time to explain the concept? Almost like moving their face away from the issue for a little bit and like, it's like a deception.
So let's talk about electromagnetism that you won't forget. Maybe you put two and two together about the cyber optic cables. Maybe you answer the correction, the, the right, uh, answer to a question in, at a talk. So it's, it's trying to make connections because we don't have that background. We don't have a tech background.
I just discovered Dungeons and Dragons at my age. So we don't have that tech liking tech, playing with it. We don't really have that, at least in my context. So get us there. Be sneaky, but get us there.

JASON KELLEY: You have to be a really good dungeon master. That's what I'm hearing. That's very good.

HELEN ANDROMEDON: Yes.

CINDY COHN: I think that's wonderful and, and I agree with you about, like, bringing the joy, making it fun, and making it interesting on multiple levels, right?
You know, learning about the science as well as, you know, just how to do things that just can add a layer of connection for people that helps keep them engaged and keeps them in it. And also when stuff goes wrong, if you actually understand how it works under the hood, I think you're in a better position to decide what to do next too.
So you've gotta, you know, it not only makes it fun and interesting, it actually gives people a deeper level of understanding that can help 'em down the road.

HELEN ANDROMEDON: Yeah, I agree. Absolutely.

JASON KELLEY: Yeah, Helen, thanks so much for joining us – this has been really helpful and really fun.
Well, that was really fun and really useful, I think, for people who are thinking about digital security, and for people who don't spend much time thinking about digital security but maybe should start. Something that she mentioned, the Train the Trainer model, reminded me that we should mention our Surveillance Self-Defense guides, which are available at ssd.eff.org.
We talked about them a little bit. They're a great resource, as is the Security Education Companion website, which is at securityeducationcompanion.org.
Both of these are great things that came up and that people might want to check out.

CINDY COHN: Yeah, it's wonderful to hear someone like Helen, who's really out there in the field working with people, say that these guides help her. We try to be kind of the brain trust for people all over the world who are doing these trainings, but also make it easy: if you're someone who's interested in learning how to do trainings, we have materials that'll help you get started. And as we all know, we're in a time when more people are coming to us and other organizations seeking security help than ever before.

JASON KELLEY: Yeah, and unfortunately there are fewer resources now in terms of funding. So it's important that people have access to these kinds of guides. And that was something we talked about that kind of surprised me: Helen was really, I think, optimistic about the funding cuts – not about the cuts themselves, obviously, but about what the opportunities for growth could be because of them.

CINDY COHN: Yeah, I think this really is what resilience sounds like, right? You get handed a situation in which you lose a lot of the funding support for the work you're gonna do, and she's used to pivoting, and she pivots toward: okay, these are the opportunities for us to grow, to build new baselines for the work that we do. And I really believe she's gonna do that. The attitude just shines through in the way that she approaches adversity.

JASON KELLEY: Yeah. And while we're thinking about the parts that we're gonna take away from this, I really loved the way she brought up the need for people to feel ownership of the online world. Now, she was talking about infrastructure specifically in that moment, but this is something that's come up quite a bit in our conversations with people.

CINDY COHN: Yeah, her framing of how important the internet is to people all around the world, you know, the work that our friends at Access Now and others do with the Keep It On coalition to try to make sure that the internet doesn't go down – she really gave a feeling for just how vital and important the internet is for people all over the world.

JASON KELLEY: Yeah. And even though, you know, some of these conversations were a little bleak in the sense of, you know, protecting yourself from potentially bad things, I was really struck by how she sort of makes it fun in the training and sort of thinking about, you know, how to get people to memorize things. She mentioned magnetism and fiber optics, and just like the science behind it. And it really made me, uh, think more carefully about how I'm gonna talk about certain aspects of security and, and privacy, because she really gets, I think, after years of training what sticks in people's mind.

CINDY COHN: I think that's just so important. I think that people like Helen are this really important kind of connective tissue between the people who are deep in the technology and the people who need it. And you know that this is its own skill and she just, she embodies it. And of course, the joy she brings really makes it alive.

JASON KELLEY: And that's our episode for today. Thanks so much for joining us. If you have feedback or suggestions, we'd love to hear from you. Visit eff.org/podcast and click on listen or feedback. And while you're there, you can become a member and donate, maybe even pick up some of the merch, and just see what's happening in digital rights this week and every week.
Our theme music is by Nat Keefe of Beat Mower with Reed Mathis, and How to Fix the Internet is supported by the Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology. We'll see you next time. I'm Jason Kelley.

CINDY COHN: And I'm Cindy Cohn.

MUSIC CREDITS: This podcast is licensed Creative Commons Attribution 4.0 International, and includes the following music licensed Creative Commons Attribution 3.0 Unported by its creators: Drops of H2O (The Filtered Water Treatment) by J.Lang. Sound design, additional music, and theme remixes by Gaetan Harris.

 

Josh Richman

Despite Supreme Court Setback, EFF Fights On Against Online Age Mandates

1 month 1 week ago

The Supreme Court’s recent decision in Free Speech Coalition v. Paxton did not end the legal debate over age-verification mandates for websites. Instead, it’s a limited decision: the court’s legal reasoning only applies to age restrictions on sexual materials that minors do not have a legal right to access. Although the ruling reverses decades of First Amendment protections for adults to access lawful speech online, the decision does not allow states or the federal government to impose broader age-verification mandates on social media, general audience websites, or app stores.

At EFF, we continue to fight age-verification mandates in the many other contexts in which we see them throughout the country and the world. These “age gates” remain a threat to the free speech and privacy rights of both adults and minors.

Importantly, the Supreme Court’s decision does not approve of age gates when they are imposed on speech that is legal for minors and adults.

The court’s legal reasoning in Free Speech Coalition v. Paxton depends in all relevant parts on the Texas law only blocking minors’ access to speech they had no First Amendment right to access in the first place—what has been known since 1968 as “harmful to minors” sexual material. Although laws that limit access to certain subject matters are typically required to survive “strict scrutiny,” the Texas law was subject instead to the less demanding “intermediate scrutiny” only because the law was denying minors access to this speech that was unprotected for them. The Court acknowledged that having to prove age would create an obstacle for adults to access speech that is protected for them. But this obstacle was merely “incidental” to the lawful restriction on minors’ access. And “incidental” restrictions on protected speech need only survive intermediate scrutiny.

To be clear, we do not agree with this result, and we vigorously fought against it. The Court wrongly downplayed the very real and significant burdens that age verification places on adults, and we disagree with numerous other doctrinal aspects of the Court’s decision. The court had previously recognized that age-verification schemes significantly burden adults’ First Amendment rights, and it had protected adults’ constitutional rights accordingly. So Paxton is a significant loss of internet users’ free speech rights and a marked retreat from the court’s protections for online speech.

The decision does not allow states or the federal government to impose broader age-verification mandates

But the decision is limited to the specific context in which the law seeks to restrict access to sexual materials. The Texas law avoided strict scrutiny only because it directly targeted speech that is unprotected as to minors. You can see this throughout the opinion:

  • The foundation of the Court’s decision was the history, tradition, and precedent that allows states to “prevent children from accessing speech that is obscene to children,” rather than a more generalized concern for child welfare.
  • The Court’s entire ruling rested on its finding that “no person – adult or child – has a First Amendment right to access speech that is obscene to minors without first submitting proof of age.”
  • The Court explained that “because the First Amendment permits States to prohibit minors from accessing speech that is obscene to them, it likewise permits States to employ the ordinary and appropriate means of enforcing such a prohibition.” The permissibility of the age verification requirement was thus dependent on the unprotected nature of the speech.
  • The only reason the law could be justified without reference to protected speech, a requirement for a content-neutral law subject to only intermediate scrutiny, is that it did not “regulate the content of protected speech” either “on its face” or in its justification. As the Court explained, “where the speech in question is unprotected, States may impose ‘restrictions’ based on ‘content’ without triggering strict scrutiny.”
  • Intermediate scrutiny was applied only because “[a]ny burden experienced by adults is therefore only incidental to the statute's regulation of activity that is not protected by the First Amendment.”
  • But strict scrutiny remains “the standard for reviewing the direct targeting of fully protected speech.”

Only one sentence in Free Speech Coalition v. Paxton addressing the restriction of First Amendment rights is not cabined by the language of unprotected harmful to minors speech. The Court wrote: “And, the statute does not ban adults from accessing this material; it simply requires them to verify their age before accessing it on a covered website.” But that sentence was entirely surrounded by and necessarily referred to the limited situation of a law burdening only access to harmful to minors sexual speech.

We and the others fighting online age restrictions still have our work cut out for us. The momentum to widely adopt and normalize online age restrictions is strong. But Free Speech Coalition v. Paxton did not approve of age gates when they are imposed on speech that adults and minors have a legal right to access. And EFF will continue to fight for all internet users’ rights to speak and receive information online.

David Greene