EFF to Court: The Supreme Court Must Rein in Expansive Secondary Copyright Liability


If the Supreme Court doesn’t reverse a lower court’s ruling, internet service providers (ISPs) could be forced to terminate people’s internet access based on nothing more than accusations of copyright infringement. This would threaten innocent users who rely on broadband for essential aspects of daily life. EFF—along with the American Library Association, the Association of Research Libraries, and Re:Create—filed an amicus brief urging the Court to reverse the decision.

The Stakes: Turning ISPs into Copyright Police

Among other things, if the Supreme Court approves the appeals court’s findings, it will radically change the amount of risk your ISP takes on when a customer infringes copyright, forcing the ISP to terminate internet access for users merely accused of infringement—and for everyone else who shares that connection.

This issue turns on what courts call “secondary liability,” which is the legal idea that someone can be held responsible not for what they did directly, but for what someone else did using their product or service.

The case began when music companies sued Cox Communications, arguing that the ISP should be held liable for copyright infringement committed by some of its subscribers. The Court of Appeals for the Fourth Circuit agreed, adopting a “material contribution” standard for contributory copyright liability (a rule for when service providers can be held liable for the actions of users). Under that standard, the court held, providing a service that can be used for infringement is enough to create liability when a customer infringes.

In the Patent Act, where Congress has explicitly defined secondary liability, there’s a different test: contributory infringement exists only where a product is incapable of substantial non-infringing use. Internet access, of course, is overwhelmingly used for lawful purposes, making it the very definition of a “staple article of commerce” that can’t give rise to liability under the patent framework. Yet under the Fourth Circuit’s rule, ISPs could face billion-dollar damages if they fail to terminate users on the basis of even flimsy or automated infringement claims.

Our Argument: Apply Clear Rules from the Patent Act, Not Confusing Judge-Made Tests

Our brief urges the Court to do what it has done in the past: look to patent law to define the limits of secondary liability in copyright. That means contributory infringement must require more than a “material contribution” by the service provider—it should apply only when a product or service is especially designed for infringement and lacks substantial non-infringing uses.

The Human Cost: Losing Internet Access Hurts Everyone

The Fourth Circuit’s rule threatens devastating consequences for the public. Terminating an ISP account doesn’t just affect a person accused of unauthorized file sharing—it cuts off entire households, schools, libraries, or businesses that share an internet connection.

  • Public libraries, which provide internet access to millions of Americans who lack it at home, could lose essential service.
  • Universities, hospitals, and local governments could see internet access for whole communities disrupted.
  • Households—especially in low-income communities and communities of color, where people disproportionately share broadband connections—would face collective punishment for the alleged actions of a single user.

With more than a third of Americans having only one or no broadband provider, many users would have no way to reconnect once cut off. And given how essential internet access is for education, employment, healthcare, and civic participation, the consequences of termination are severe and disproportionate.

What’s Next

The Supreme Court has an opportunity to correct course. We’re asking the Court to reject the Fourth Circuit’s unfounded “material contribution” test, reaffirm that patent law provides the right framework for secondary liability, and make clear that the Constitution requires copyright to serve the public good. The Court should ensure that copyright enforcement doesn’t jeopardize the internet access on which participation in modern life depends.

We’ll be watching closely as the Court considers this case. In the meantime, you can read our amicus brief here.

Betty Gedlu

San Francisco Gets An Invasive Billionaire-Bought Surveillance HQ


San Francisco billionaire Chris Larsen once again has wielded his wallet to keep city residents under the eye of all-seeing police surveillance. 

The San Francisco Police Commission, the Board of Supervisors, and Mayor Daniel Lurie have signed off on Larsen’s $9.4 million gift of a new Real-Time Investigations Center. The plan involves moving the city’s existing police tech hub from the public Hall of Justice not to the city’s brand-new police headquarters but instead to a sublet in the Financial District building of Ripple Labs, Larsen’s crypto-transfer company. Although the city won’t be paying for the space, the lease reportedly cost Ripple $2.3 million and will run until December 2026. 

The deal will also include a $7.25 million gift from the San Francisco Police Community Foundation that Larsen created. Police foundations are semi-public fundraising arms of police departments that allow departments to buy technology and gear the city won’t fund.  

In Los Angeles, the city’s police foundation got $178,000 from the company Target to pay for services from the data analytics company Palantir for use in predictive policing. In Atlanta, the city’s police foundation funds a massive surveillance apparatus as well as the much-maligned Cop City training complex. (Despite police foundations’ insistence that they are not public entities and therefore do not need to be transparent or answer public records requests, a judge recently ordered the Atlanta Police Foundation to release documentation related to Cop City.) 

A police foundation in San Francisco raises the same concerns: an unaccountable, untransparent fundraising arm schmoozing with corporations and billionaires could fund unpopular surveillance measures without having to reveal much to the public.  

Larsen was one of the deep pockets behind last year’s Proposition E, a ballot measure to supercharge surveillance in the city. The measure gutted the city’s 2019 surveillance transparency and accountability ordinance, which had required the SFPD to get the elected Board of Supervisors’ approval before buying and using new surveillance technology. This common-sense democratic hurdle was, apparently, a bridge too far for the SFPD and for Larsen.  

To start with, we’re no fans of real-time crime centers (RTCCs), as they’re often called elsewhere. They’re basically control rooms that pull together feeds from a vast warrantless digital dragnet, often including automated license plate readers, fixed cameras, officers’ body-worn cameras, drones, and other sources. They are a means of consolidating constant surveillance of the entire population, tracking everyone wherever they go and whatever they do – worrisome at any time, but especially in a time of rising authoritarianism.  

Think of what this data could do if it got into federal hands; imagine how vulnerable residents could be subjected to harassment if every move they made was centralized and recorded downtown. But you don’t have to imagine, because SFPD already has been caught sharing automated license plate reader data with out-of-state law enforcement agencies assisting in federal immigration investigations.

We’re especially opposed to RTCCs using live feeds from non-city surveillance cameras to push that panopticon’s boundaries even wider, as San Francisco’s does. Those semi-private networks of some 15,000 cameras, already abused by SFPD to surveil lawful protests against police violence, were funded in part by – you guessed it – Chris Larsen.

These technologies could endanger San Franciscans by directing armed police at them on the basis of a faulty algorithm, and they put already-marginalized communities at further risk of overpolicing and surveillance. Worse, studies find that these technologies just don’t work. If the goal is to stop crime before it happens, to spare someone the hardship and the trauma of getting robbed or hurt, cameras clearly do not accomplish this. There’s plenty of footage of crime occurring that belies the idea that surveillance is an effective deterrent, and although police often look to technology as a silver bullet to fight crime, evidence suggests that it does little to alter the historic ebbs and flows of criminal activity. 

Yet now this unelected billionaire – who already helped gut police accountability and transparency rules and helped fund sketchy surveillance of people exercising their First Amendment rights – wants to bankroll, expand, and host the police’s tech nerve center. 

Policing must be a public function so that residents can control - and demand accountability and transparency from - those who serve and protect but also surveil and track us all. Being financially beholden to private interests erodes the community’s trust and control and can leave the public high and dry if a billionaire’s whims change or conflict with the will of the people. Chris Larsen could have tried to address the root causes of crime that affect our community; instead, he exercises his bank account's muscle to decide that surveillance is best for San Franciscans with less in their wallets. 

Elected officials should have said “thanks but no thanks” to Larsen and ensured that the San Francisco Police Department remained under the complete control and financial auspices of nobody except the people of San Francisco. Rich people should not be allowed to fund the further degradation of our privacy as we go about our lives in our city’s public places. Residents should carefully watch what comes next to decide for themselves whether a false sense of security is worth living under constant, all-seeing, billionaire-bankrolled surveillance. 

Josh Richman

Rayhunter: What We Have Found So Far


A little over a year ago we released Rayhunter, our open source tool designed to detect cell-site simulators. We’ve been blown away by the level of community engagement on this project. It has been installed on thousands of devices (or so we estimate; we don’t actually know, since Rayhunter doesn’t have any telemetry!). We have received dozens of packet captures from our open source community, along with hundreds of improvements both minor and major, documentation fixes, and bug reports. This project is a testament to the power and impact of open source, community-driven counter-surveillance.  

If this is your first time hearing about Rayhunter, you can read our announcement blog post here. Or if you prefer, you can watch our DEF CON talk. In short, Rayhunter is an open source Linux program that runs on a variety of mobile hotspots (dedicated devices that use a cellular connection to give you Wi-Fi). Rayhunter’s job is to look for cell-site simulators (CSS), also known as IMSI catchers or Stingrays: tools police use to locate or identify people’s cell phones. Rayhunter analyzes the “handshakes” between your Rayhunter device and the cell towers it connects to for behaviors consistent with those of a CSS. When it finds potential evidence of a CSS, it alerts the user with an indicator on the screen and, potentially, a push notification to their phone.  
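
For readers who want a feel for what that analysis involves, here is a minimal sketch, in Python for brevity, of a heuristic check over parsed handshake messages. This is not Rayhunter’s actual code (the real project is open source and written in Rust); the NasMessage type, its fields, and the single rule shown are simplified assumptions for illustration.

    # Hypothetical sketch of one detection heuristic. A legitimate network
    # rarely demands a phone's permanent identity (IMSI) before
    # authenticating, while a cell-site simulator often does, so a
    # pre-authentication IMSI request is worth flagging.
    from dataclasses import dataclass

    @dataclass
    class NasMessage:
        """One parsed control-plane message from the modem's diagnostic feed."""
        msg_type: str                    # e.g. "identity_request"
        requested_id: str | None = None  # e.g. "imsi", "imei"
        authenticated: bool = False      # did authentication happen first?

    def check_messages(messages: list[NasMessage]) -> list[str]:
        """Return human-readable warnings for CSS-consistent behavior."""
        warnings = []
        for msg in messages:
            if (msg.msg_type == "identity_request"
                    and msg.requested_id == "imsi"
                    and not msg.authenticated):
                warnings.append("IMSI requested before authentication")
        return warnings

    # Example: a capture in which the "tower" asks for the IMSI right away.
    capture = [NasMessage("identity_request", requested_id="imsi")]
    for warning in check_messages(capture):
        print("ALERT:", warning)  # Rayhunter would update the screen instead

The shape is the important part: detection is a set of rules run over a stream of control-plane messages, which is why packet captures contributed by the community translate directly into new or refined signatures.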

Understanding if CSS are being used to spy on protests is one of the main goals of the Rayhunter project. Thanks to members of our community bringing Rayhunter to dozens of protests, we are starting to get a picture of how CSS are currently being used in the US. So far Rayhunter has not turned up any evidence of cell-site simulators being used to spy on protests in the US — though we have found them in use elsewhere.  


There are a couple of caveats here. First, it’s often impossible to prove a negative. Maybe Rayhunter just hasn’t been at protests where CSS have been present. Maybe our detection signatures aren’t picking up the techniques used by US law enforcement. But we’ve received reports from a lot of protests, including pro-Palestine protests, protests in Washington DC and Los Angeles, as well as the ‘No Kings’ and ‘50501’ protests all over the country. So far, we haven’t seen evidence of CSS use at any of them.  

A big part of the reason for the lack of CSS at protests could be that some courts have required a warrant for their use, and even law enforcement agencies not bound by those rulings often have policies requiring police to get a warrant. CSS are also costly to buy and use, requiring trained personnel and nearly a million dollars’ worth of equipment.  

The fact is, police also have potentially easier-to-use tools available. If the goal of using a CSS at a protest is to find out who was there, police could use tools such as:  

  • License plate readers to track the vehicles arriving at and leaving the protest. 
  • Location data brokers, such as Locate X and Fog Data Science, to track the phones of protestors by their mobile advertising IDs (MAIDs).
  • Cellebrite and other forensic extraction tools to download all the data from the phones of arrested protestors, if police are able to unlock those phones.  
  • Geofence warrants, which require internet companies like Google to disclose the identifiers of devices within a given location at a given time.
  • Facial recognition tools such as Clearview AI to identify everyone present via public or private databases of people’s faces.
  • Tower dumps from phone companies, which, similar to geofence warrants, require phone companies to turn over a list of all the phones connected to a certain tower at a certain time.  

We think that, given the lack of evidence of CSS being used, protestors can worry less about CSS and more about these other techniques. Luckily, the actions one should take to protect themselves are largely the same. 

We feel pretty good about Rayhunter’s detection engine, though there could still be things we are missing. Some of our confidence in Rayhunter’s detection engine comes from the research we have done into how CSS work. But the majority of our confidence comes from testing Rayhunter against a commercial cell-site simulator thanks to our friends at Cape. Rayhunter detected every attack run by the commercial CSS.  

Where Rayhunter Has Detected Likely Surveillance

Rayhunter users have found potential evidence of CSS being used in the wild, though not at protests. One of the most interesting examples, which triggered multiple detections and even inspired us to write some new detection rules, was at a cruise port in the Turks and Caicos Islands. The person who captured this data put the packet captures online for other researchers to review.

Rayhunter users have detected likely CSS use in the US as well. We have received reports from Chicago and New York where our “IMSI Sent without authentication” signature was triggered multiple times over the course of a couple of hours and then stopped. Neither report was in the vicinity of a protest. We feel fairly confident that these reports are indicative of a CSS being present, though we don’t have any secondary evidence to back them up. 
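
As an illustration of why that repeated-trigger pattern matters, here is a hedged sketch of grouping timestamped signature hits into bursts: sustained activity over a window is a stronger indicator than a single, possibly spurious trigger. The thresholds and the function itself are our own illustrative assumptions, not Rayhunter’s actual logic.

    # Hypothetical sketch: collapse timestamped signature hits into "bursts".
    # Several hits over a couple of hours, then silence (as in the reports
    # above), reads as stronger evidence than one lone trigger. The 30-minute
    # gap and 3-hit minimum are illustrative assumptions.
    from datetime import datetime, timedelta

    def find_bursts(hit_times: list[datetime],
                    gap: timedelta = timedelta(minutes=30),
                    min_hits: int = 3) -> list[tuple[datetime, datetime]]:
        """Group hits separated by less than `gap`; keep groups of >= `min_hits`."""
        bursts = []
        start = prev = None
        count = 0
        for t in sorted(hit_times):
            if start is not None and t - prev > gap:
                if count >= min_hits:
                    bursts.append((start, prev))
                start, count = t, 0
            elif start is None:
                start = t
            prev = t
            count += 1
        if start is not None and count >= min_hits:
            bursts.append((start, prev))
        return bursts

    hits = [datetime(2025, 6, 1, 14, 0), datetime(2025, 6, 1, 14, 20),
            datetime(2025, 6, 1, 14, 40)]
    print(find_bursts(hits))  # one 40-minute burst of three hits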

We have received other reports that have triggered our CSS detection signatures, but the above examples are the ones we feel most confident about.  

We encourage people to keep using Rayhunter and continue bringing it to protests. Law enforcement trends can change over time, and it is possible that some cities are using CSS more often than others (for example, Fontana, California reportedly used its CSS over 300 times in two years). We also know that ICE still uses CSS and has recently renewed its contracts. Interestingly, in January, the FBI requested a warrant from the Foreign Intelligence Surveillance Court to use what was likely a CSS and was rejected. This was the first time the FBI had sought a warrant to use a CSS under the Foreign Intelligence Surveillance Act since 2015, when the Justice Department began requiring a warrant for their use. If police start using CSS to spy on protests, we want to know.

There is still a lot we want to accomplish with Rayhunter, and we have plans for the project that we are very excited to share with you in the near future, but the biggest thing we need right now is more testing outside of the United States.  

Taking Rayhunter International  

We are interested in getting Rayhunter data from every country to help us understand the global use of CSS and to refine our signatures. Just because CSS don't appear to be used to spy on protests in the US right now doesn't mean that is true everywhere. We have also seen that some signatures that work in the US are prone to false positives elsewhere (such as our 2G signature in countries that still have active 2G networks), as the sketch below illustrates. The first device supported by Rayhunter, the Orbic hotspot, was US-only, so we have very little international data. But we now have support for multiple devices! If you are interested in Rayhunter but can’t find a device that works in your country, let us know. We recommend you consult with an attorney in your country to determine whether running Rayhunter is likely to be legally risky or outlawed in your jurisdiction.
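
Here is the kind of region gate the 2G example suggests, as a hypothetical sketch. A downgrade-to-2G heuristic only makes sense where carriers have retired 2G, so the rule needs a per-country switch; the mobile country codes below are illustrative examples, not Rayhunter’s real configuration.

    # Hypothetical sketch of gating a signature by region. A forced downgrade
    # to 2G is suspicious where carriers have retired 2G (as in the US), but
    # expected where 2G networks are still live. These mobile country codes
    # (MCCs) are illustrative examples, not Rayhunter's actual configuration.
    ACTIVE_2G_MCCS = {
        "262",  # Germany, where GSM service remains active (illustrative)
        "404",  # India, where some carriers still run 2G (illustrative)
    }

    def should_run_2g_signature(serving_mcc: str) -> bool:
        """Run the 2G-downgrade rule only where 2G is actually retired."""
        return serving_mcc not in ACTIVE_2G_MCCS

    assert should_run_2g_signature("310")      # 310 = US: rule applies
    assert not should_run_2g_signature("262")  # active-2G market: skip rule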

Related Cases: Carpenter v. United States
Cooper Quintin

Podcast Episode: Building and Preserving the Library of Everything


All this season, “How to Fix the Internet” has been focusing on the tools and technology of freedom – and one of the most important tools of freedom is a library. Access to knowledge not only creates an informed populace that democracy requires, but also gives people the tools they need to thrive. And the internet has radically expanded access to knowledge in ways that earlier generations could only have dreamed of – so long as that knowledge is allowed to flow freely.


(You can also find this episode on the Internet Archive and on YouTube.) 

A passionate advocate for public internet access and a successful entrepreneur, Brewster Kahle has spent his life intent on a singular focus: providing universal access to all knowledge. The Internet Archive, which he founded in 1996, now preserves 99+ petabytes of data – the books, web pages, music, television, government information, and software of our cultural heritage – and works with more than 400 library and university partners to create a digital library that’s accessible to all. The Archive is known for the Wayback Machine, which lets users search the history of almost one trillion web pages, but it also archives images, software, video and audio recordings, and documents, and it hosts dozens of resources and projects that fill a variety of gaps in cultural, political, and historical knowledge. Kahle joins EFF’s Cindy Cohn and Jason Kelley to discuss how the free flow of knowledge makes all of us more free. 

In this episode you’ll learn about:

  • The role AI plays in digitizing, preserving, and easing access to all kinds of information
  • How EFF helped the Internet Archive fight off the government’s demand for information about library patrons
  • The importance of building a decentralized, distributed web for finding and preserving information for all
  • Why building revolutionary, world-class libraries like the Internet Archive requires not only money and technology, but also people willing to dedicate their lives to the work
  • How nonprofits are crucial to filling societal gaps left by businesses, governments, and academia 

Brewster Kahle is the founder and digital librarian of the Internet Archive, which is among the world’s largest libraries and serves millions of people each day. After studying AI at the Massachusetts Institute of Technology, graduating in 1982, Kahle helped launch Thinking Machines, a parallel supercomputer maker. In 1989, he helped create the internet’s first publishing system, Wide Area Information Server (WAIS); WAIS Inc. was later sold to AOL. In 1996, Kahle co-founded Alexa Internet, which helps catalog the web, selling it to Amazon.com in 1999. He is a former member of EFF’s Board of Directors. 

Resources:

What do you think of “How to Fix the Internet?” Share your feedback here.

Transcript

BREWSTER KAHLE: I think we should start making some better decisions, a little bit more informed, a little better communication with not only people that are around the world and finding the right people we should be talking to, but also, well, standing on the shoulders of giants. I mean, we can then go and learn from all the things that people have learned in the past. It's pretty straightforward what we're trying to do here. It's just build a library.

CINDY COHN: That's Internet Archive founder Brewster Kahle on what life could look like if we all got to experience his dream of universal access to all human knowledge.
I'm Cindy Cohn, the executive director of the Electronic Frontier Foundation.

JASON KELLEY: And I'm Jason Kelley - EFF's activism director. And this is our podcast How to Fix the Internet.

CINDY COHN: This show is about what the world could look like if we get things right online - we hear from activists, computer engineers, thinkers, artists and today, a librarian, about their visions for a better digital future that we can all work towards.

JASON KELLEY: And our guest today is someone who has been actively making the internet a better place for several decades now.

CINDY COHN: Brewster Kahle is an early internet pioneer, and a longtime advocate for digitization. He’s a computer engineer but also a digital librarian, and he is of course best known as the founder of the Internet Archive and the Wayback Machine. EFF and the Archive are close allies and friends, and Brewster himself was a member of EFF’s Board of Directors for many years. I’m proud to say that the Archive is also a client of EFF, including most recently when we served as part of the legal team trying to protect true library lending of digital materials like ebooks and audiobooks.

JASON KELLEY: All season we’ve been focusing on the tools and technologies of freedom – and one of the most important tools of freedom is a library.
We started off our conversation by getting his take on the role that AI should play in his vision of a universally accessible library.

BREWSTER KAHLE: AI is absolutely critical and actually has been used for, well, a long period of time. You just think of, how does the magic of Google search happen, where you can just type a few words and get 10 links and several of them are actually really quite relevant. How do you do that? Those of us old enough to remember just keyword searching, that didn't work very well.
So it's going and using all this other information, metadata from other websites, but also learning from people, and machine learning at scale, that we've been able to make such progress.
Now there's the large language models, the generative AI, which is also absolutely fantastic. So we are digitizing obscure newsletters from theological missions in distant parts of the world. We are digitizing agricultural records from decades of the 20th century.
And these materials are absolutely relevant now with climate change in our new environments because, well, things are moving. So the pests that used to be only in Mexico are now in Louisiana and Texas. It's completely relevant to go and learn from these, but it's not gonna be based on people going and doing keyword search and finding that newsletter and, and learning from it. It's gonna be based on these augmentations, but take all of these materials and try to make it useful and accessible to a generation that's used to talking to machines.

CINDY COHN: Yeah, I think that that's a really important thing. One of my favorite insights about AI is that it's a very different user interface. It's a way to have a conversational access to information. And I think AI represents one of those other shifts about how people think about accessing information. There's a lot of side effects of AI and we definitely have to be serious about those. But this shift can really help people learn better and find what they're looking for, but also find things that maybe they didn't think they were looking for.

BREWSTER KAHLE: If we do it well, if we do it with public AI that is respectful, the opportunity for engaging people and in a more deep way to be able to have them get to literature that has been packed away, and we've spent billions of dollars in the library system over centuries going and building these collections that are now going to be accessible, not just to the reference librarian, not just to researchers, but to kind of anybody.

JASON KELLEY: Can I dig into this backstory of yours a little bit? Because you know, a lot of people may know how you ended up building the Internet Archive, but I don't think they know enough. I'd like to get more people to sort of have a model in tech for what they can do if they're successful. And you were, if I understand it right, you were one of the early successful internet stories.
You sold a company or two in the nineties and you could have probably quit then and instead you ended up building the Internet Archive. Did you have this moment of deciding to do this and how did you end up in library school in the first place?

BREWSTER KAHLE: So I'm a little unusual in that I, I've only had one idea in my life, and so back in college in 1980 a friend posed, okay, you're an idealist. Yes. And a technologist. Yes. Paint a portrait that's better with your technology. It turned out that was an extremely difficult question to answer.
We were very good about complaining about things. You know, that was Cold War times and Nicaragua and El Salvador, and there's lots of things to complain about, but it was like, what would be better? So I only came up with two ideas. One was protect people's privacy, even though they were going to throw it away if they were given the chance.
And the other was build the library of everything. The digital library of Alexandria seemed too obvious. So I tried to work on the privacy one, but I couldn't make chips to encrypt voice conversations cheap enough to help the people I wanted to, but I learned how to make chips.
But then that got me engaged with the artificial intelligence lab at MIT and Danny Hillis and Marvin Minsky, they had this idea of building a thinking machine and to go and build a computer that was large enough to go and search everything. And that seemed absolutely critical.
So I helped work on that. Founded a company, Thinking Machines. That worked pretty well. So we got the massively parallel computers. We got the first search engine on the internet, then spun off a company to go and try to get publishers online called WAIS Incorporated. It came before the web, it was the first publishing system.
And so these were all steps in the path of trying to get to the library. So once we had publishers online, we also needed open source software. The free and open source software movement is absolutely critical to the whole story of how this whole thing came about, and open protocols, which was not the way people thought of things. They would go and make them proprietary and sue people and license things, but the internet world had this concept of how to share that ran very, very well. I wasn't central in the ARPANET-to-internet conversation. But I did have quite a bit to do with some of the free and open source software, the protocol development, the origins of the web.
And once we had publishers, then, onboard, then I could turn my attention to building the library in 1996, so that's 28 years ago, something like that. And so we then said, okay, now we can build the library. What does that make up of? And we said, well, let's start with the web. Right? The most fragile of media.
I mean, Tim's system, Tim Berners-Lee's system, was very easy to implement, which was kind of great and one of the keys for his success, but it had some really, basically broken parts of it. You think of publishers and they would go and make copies and sell them to individuals or libraries, and they would stay alive much longer than the publishers.
But the web, there's only one copy and it's only on one machine. And so if they change that, then it's gone. So you're asking publishers to be librarians, which is a really bad idea. And so we thought, okay, why don't we go and make a copy of everything that was on the web. Every page from every website every two months.
And turns out you could do that. That was my Altavista moment when I actually went to see Altavista. It was the big search engine before Google and it was the size of two Coke machines, and it was kind of wild to go and look - that's the whole web! So the idea that you could go and gather it all back up again, uh, was demonstrated by Altavista and the Internet Archive continued on with other media type after media type, after media type.

JASON KELLEY: I heard you talk about the importance of privacy to you, and I know Cindy's gonna wanna dig into that a little bit with some of the work that EFF and the Archive have done together.

CINDY COHN: Yeah, for sure. One of the things I think, you know, your commitment to privacy is something that I think is very, very important to you and often kind of gets hidden because the, you know, the archive is really important. But, you know, we were able to stand up together against national security letters, you know, long before some of the bigger cases that came later and I wanted to, you know, when you reached out to us and said, look, we've gotten this national security letter, we wanna fight back. Like, it was obvious to you that we needed to push back. And I wanna hear you talk about that a little bit.

BREWSTER KAHLE: Oh, this is a hero day. This is a hero moment for EFF and its own, you know, I, okay.

CINDY COHN: Well, and the Archive, we did it together.

BREWSTER KAHLE: Well, no, we just got the damn letter. You saved our butts. Okay. So how this thing worked was in 2001, they passed this terrible law, the Patriot Act, and they basically made any government official almost be able to ask any organization and be able to get anything they wanted and they had a gag order. So not only could they just get any information, say on patrons’ reading habits in a library, they could make it so that you can't tell anybody about it.
So I got sat down one day and Kurt Opsahl from EFF said, this isn't your best day. You just got a letter demanding information about a patron of the Internet Archive. I said, they can't do that. He said, yeah, they can. And I said, okay, well this doesn't make any sense. I mean, the librarians have a long history of dealing with people being surveilled on what it is they read and then rounded up and bad things happen to them, right? This is, this is something we know how that movie plays out.
So I said, Kurt, what, what can we do? And he said, you have to supply the data. I said, what if we don't? And he said, jail. That wasn't my favorite sentence. So is there anything else we can do? And he said, well, you can sue the United States government. (laughter)
OH! Well I didn't even know whether I could bring this up with my board. I mean, remember there's a gag order. So there was just a need to know to be able to find out from the engineers what it is we had, what we didn't have. And fortunately we never had very much information. 'cause we don't keep it, we don't keep IP addresses if we can possibly avoid it. We didn't have that much, but we wanted to push back. And then how do you do that? And if it weren't for the EFF, and then EFF got the ACLU involved on a pro bono basis, I would never have been able to pull it off! I would have to have answered questions to the finance division of how, why are we spending all this money on lawyers?
The gag order made it so absolutely critical for EFF to exist, and to be ready and willing and funded enough to take on a court case against the United States government without, you know, having to go into a fundraising round.
But because of you, all of you listeners out there donating to EFF, having that piggy bank made it so that they could spring to the defense of the Internet Archive. The great thing about this was that after this lawsuit was launched, the government wanted out of this lawsuit as fast as possible.
They didn't want to go and have a library going and getting a court case to take their little precious toy of this Patriot Act, National Security letters away from them. So they wanted out, but we wouldn't let them. We wanted to be able to talk about it. They had to go and release the gag order. And I think we're only one or two or three organizations that have ever talked publicly about the hundreds of thousands, if not millions, of national security letters because we had EFF support.

CINDY COHN: Oh, thank you Brewster. That's very sweet. But it was a great honor to get to do this. And in hearing you talk about this future, I just wanna pull out a few of the threads. One is privacy and how important that is for access for information. Some people think of that as a different category, right? And it's not. It's part and parcel of giving people access to information.
I also heard the open source community and open protocols and making sure that people can, you know, crawl the web and do things with websites that might be different than the original creator wanted, but are still useful to society.
The other thing that you mentioned that I think it's important to lift up as well is, you know, when we're talking about AI systems, you're talking about public AI, largely. You're talking about things that similarly are not controlled by just one company, but are available so that the public really has access not only to the information, but to the tools that let them build the next thing.

BREWSTER KAHLE: Yes, the big thing I think I may have gotten wrong starting this whole project in 1980 was the relaxation of the antitrust laws in the United States, that we now have these monster organizations that are not only just dominating a country's telecom or publishing systems or academic access, but it's worldwide now.
So we have these behemoth companies. That doesn't work very well. We want a game with many winners. We want that level playing field. We wanna make it so that new innovators can come along and, you know, try it out, make it go. In the early web, we had this, we watched sort of the popularity and the movement of popularity. And so you could start out with a small idea and it could become quite popular without having to go through the gatekeepers. And that was different from when I was growing up. I mean, if you had a new idea for a kid's toy, trying to get that on the shelves in a bunch of toy stores was almost impossible.
So the idea of the web and the internet made it so that good ideas could surface and grow, and that can work as long as you don't allow people to be gatekeepers.
We really need a mechanism for people to be able to grow, have some respect, some trust. If we really decrease the amount of trust, which is kind of, there's a bonfire of trust right now, then a lot of these systems are gonna be highly friction-full.
And how do we go and make it so that, you know, we have people that are doing worthwhile projects, not exploiting every piece of surveillance that they have access to. And how do we build that actually into the architecture of the web?

CINDY COHN: That leads, I think, directly into the kind of work that the archive has done about championing the distributed web, the D-web work. And you've done a real lot of work to kind of create a space for a distributed web, a better web. And I want you to tell me a little bit about, you know, how does that fit into your picture of the future?

BREWSTER KAHLE: The wonderful thing about the internet still is that it can be changed. It's still built by people. They may be in corporations, but you can still make a big dent and, there were a couple “aha” moments for me in, in trying to, like, why do we build a better web? Right? what's the foundational parts that we need to be able to do that?
And we ended up with this centralization, not only of all the servers being in these colos that are operated by other companies and a cloud-based thing, other people own everything, that you can't go and just take your computer on your desk and be a first class internet thing. That used to be possible with Gopher and WAIS and the early web. So we lost some of those things, but we can get them back.
Jason Scott at the Internet Archive, working with volunteers all over, made emulators of the early computers like IBM PCs and Macintosh and these old computers, Commodore 64, Atari machines, and they would run in JavaScript in your browser, so you could click and go and download an IBM PC and it boots in your browser and it uses the Internet Archive as a giant floppy drive to run your favorite game from 20 years ago. The cool thing about that for me, yes, I could get to play all my old games, it was kind of great, but we also had this ability to run a full on computer in your browser, so you didn't even have to download and install something.
So you could go and be a computer on the internet, not just a consumer, a reader. You could actually be a writer, you could be a publisher, you could, you could do activities, you could, so that was fantastic. And then another big change was the protocols of the browsers change to allow peer-to-peer interactions. That's how you get, you know, Google Meet or you get these video things that are going peer to peer where there's no central authority going in, interrupting your video streams or whatever.
So, okay, with these tools in hand now, then we could try to realize part of the dream that a lot of us had originally, and even Tim Berners-Lee, of building a decentralized web. Could you make a web such that your website is not owned and controlled on some computer someplace, but actually exists everywhere and nowhere, kind of a peer-to-peer backend for the web.
Could you make it so that if you run a club, that you could do a WordPress-like website that would then not live anywhere, but as readers were reading it, they would also serve it. And there would be libraries that would be able to go and archive it as a living object, not as just snapshots of pages. That became possible. It turns out it's still very hard, and the Internet Archive started pulling together people, doing these summits and these different conferences to get discussions around this and people are running with it.

CINDY COHN: Yeah, and so I love this because I know so many people who go to the archive to play Oregon Trail, right? And I love it when I get a chance to say, you know, this isn't just a game, right? This is a way of thinking that is reflected in this. I kind of love that, you know, ‘you died with dysentery’ becomes an entryway into a whole other way of thinking about the web.

JASON KELLEY: Let's take a quick moment to thank our sponsor. How to Fix The Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology, enriching people's lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.
We also wanna thank EFF donors. You're the reason we exist, and EFF has been fighting for digital rights for 35 years, and that fight is bigger than ever. So please, if you like what we do, go to eff.org/pod to donate. And also, if you can’t make it in person to this year’s EFF awards where we celebrate the people working towards the better future we all care so much about, you can watch the whole event at eff.org/awards.
We also wanted to share that our friend Cory Doctorow, has a new podcast, have a listen to this:

WHO BROKE THE INTERNET TRAILER: How did the internet go from this? You could actually find what you were looking for right away, to this, I feel I can inhale. Spoiler alert, it was not an accident. I'm Cory Doctorow, host of Who Broke the Internet from CBC's Understood. In this four part series, I'm gonna tell you why the internet sucks now, whose fault it is and my plan to fix it. Find Who Broke the Internet on whatever terrible app you get your podcasts.

JASON KELLEY: And now back to our conversation with Brewster Kahle.
The fact that you do things like archive these old games is something that I think a lot of people don't know. There are just so many projects that the internet archive does and it is interesting to hear how they're sort of all building towards this better future that is sort of built, like, sort of makes up the bones of the work that you do. Can you talk about any of the other projects that you are particularly sort of proud of that maybe other people haven't heard about?

BREWSTER KAHLE: Yeah, and I really wanna apologize. If you go to archive.org, it is daunting. Most people find things to read or see in the Internet Archive by going to search engines, or Wikipedia. For instance, we really dedicated ourselves to try to help reinforce Wikipedia. We started archiving all of the outbound links. And we figured out how to work with the communities to allow us to fix those broken links. So we've now fixed 22 million broken links in Wikipedia, and 10,000 a day are now being added that point back to the Wayback Machine.
Also, there are about two million books that are linked straight into, if you click on it, it goes right to the right page so you can go and see the citation. Not only is this important for homework, people that are after hours trying to cram for their, uh, for their homework, um, but it's also important for Wikipedians because, um, links in Wikipedia that go to someplace you can actually cite is a link that works, it gets more weight.
And if we're going to have all the literature, the scholarly literature and the book literature available in Wikipedia, it needs to be clickable. And you can't click your way into an OverDrive borrowed book from your library. You have to be able to do this from something like the Internet Archive. So Wikipedia, reinforcing Wikipedia.
Another is television. We've been archiving television. 24 hours a day since the year 2000. Russian, Chinese, Japanese, Iraqi, Al Jazeera, BBC, CNN, ABC, Fox, 24 hours a day, DVD quality. And not all of it is available but the US television news, you can search and find things. And we're also doing summarizations now, so you can start to understand – in English – what is Russian State television telling the Russians? So we can start to get perspectives. Or look inside other people's bubbles to be able to get an idea of what's going on. Or a macroscope ability to step back and get the bigger picture. That's what libraries are for, is to go and use these materials in new and different ways that weren't the way that the publishers originally intended.
Other things. We're digitizing about 3,000 books a day. So that's going along well. Then we are doing Democracy’s Library. Democracy's Library, I think, is a cool one. So democracies need an educated populace. So they tend to publish openly. Authoritarian governments and corporations don't care about having an educated populace. That's not their goal. They have other goals, um, but democracies want things to be openly available.
But it turns out that even though the United States, for instance, and all democracies publish openly, most of those materials are not available publicly. They may be available in some high priced database system of somebody or other. But mostly they're just not available at all.
So we launched the Democracy's Library Project to go and take all of the published works at the federal level, the provincial state level, and municipal levels, and make that all available in bulk and in services so that other people could also go and build new services on this. We launched it with Canada and the United States. The Canadians are kicking the United States's butt. I mean, they're doing so great. So Internet Archive Canada, working with University of Toronto, and universities all over, have already digitized all of the federal print materials, and by working with the national library there have archived the government websites in Canada.
In the United States we've been archiving, with the help of many others, including historically with the Library of Congress, and National Archives to go and collect all of the web pages and services and data sets from all of the United States Federal websites from before and after every presidential election. It's called the End of Term Crawl, and this has been going on since 2008, and we've gotten into a lot of news recently because this administration has decided to take a lot of materials off the web. And again, asking a publisher, whether it's a government or commercial publisher or a social media publisher, to go and be their own archive or their own library is a bad idea. Don't trust a corporation to do a library's job, was what one headline said.
So we've been archiving all of these materials and making them available. Now, can we weave them back into the web with the right URLs? No, not yet. That's up to the browser companies and also some of the standards organizations. But it's, at least it's there and you can go to the Wayback Machine to find it.
So the Internet Archive is about the 200th most popular website.
We get millions of people a day coming to the website, and we get about 6 million people coming and using the Internet Archive's resources who don't even come to the website. So it's just woven into the fabric of the web. So people say, oh, I've never heard of that. Never used it. It's like, you probably have. It’s just part of how the internet works, it's plumbing.
So those are the aspects of the Internet Archive that are currently going on. We have people coming in all the time saying. Now, but are you doing this? And I said, no, but you can and we can be infrastructure for you. I think of the Internet Archive as infrastructure for obsessives. So the people that say, I really need this to persist to the next generation. We say, great, what do you need? How do we make that come true?

CINDY COHN: Yeah, I think that's both the superpower and in some ways the thing that the Internet Archive struggles with, which is because when you're infrastructure, people don't think about you and they don't wanna think about you, so that when you come under attack, it's hard to get people to see what they might be losing.
And I think one of the things that, you know, one of the reasons I wanted you to come on here and talk about the archive is I think we need to start making some of that invisible stuff visible because it's not magic. It's not automatic. It takes, you know, I mean, your personal courage in standing up is wonderful, but there need to be hundreds and thousands and hundreds of thousands saying, you know, this is our library, this is our future.
This is, you know, this is important and, and we need to stand up and hopefully if we stand up enough, you know, we don't have to do it every four years or so. But you know, the number of people who I sent to the Wayback Machine when they were very, very worried about US government information going down and, and pointed out, look, you know, the archive's been quietly doing this for, you know, nearly 20 years now, is a lot. And that's because again, you're kind of quietly doing the important work.
And so, you know, my hope is that ,with this podcast and otherwise, we get a little more attention so that we can really build this better future and, and maybe in the better future, we don't have to think about it again. But right now there's a lot of different kinds of attacks.

BREWSTER KAHLE: It's a challenging time, especially in the United States for libraries. There's the book bannings, defunding. Probably structurally the worst thing is the licensing model. The idea that there's no digital ownership. I mean, just like really bad behavior on the part of the corporations. Um, so, but Internet Archive Canada is doing well. Internet Archive Europe is coming back up and serving interesting roles with public AI to go and do publicly oriented values driven AI technology, which is kind of great. We'd like to see internet archives planted in lots of places. The idea that we can just depend on the United States jurisdictions for being the information resource for the world I think that train is gone.
So let's go and build a robust infrastructure. It's kinda like what we saw with the internet. Can we build internet archives all over the world? And that takes not only money, but actually the money part is probably not the hardest part. It's people interested in dedicating their lives to open – to open source software, free and open source software, open access materials, the infrastructure to step out and work in non-profits as opposed to some of the, you know, the very tempting, um, stock option deals that come from these VC-funded whatevers, um, and work and do the good work that they can point to and they can be proud of for the rest of their lives.

CINDY COHN: Yeah. And there is something so important about that, about getting to wake up every day and feel like you're making the world better. And I think your particular story about this, because you know, you made money early on, you did some companies and you decided to dig back into the public side of the work rather than, you know, stepping back and becoming a VC or, you know, buying your third island, or those kinds of things.
And I think that one of the things that's important is that I feel like there's a lot of people who don't think that you can be a technologist and a successful person without being an asshole. And, you know, I think you're a good counter example of somebody who is deeply technical, who thinks about things in a, you know, how-do-we-build-better-infrastructure way, who understands how all of these systems work, and uses that information to build good, rather than, you know, necessarily deciding that the best thing to do is to maybe take over a local government and build a small fiefdom for yourself.

BREWSTER KAHLE: Well, thank you for that. And yes, for-profit entities are gasoline. They're explosive and they don't tend to last long. But I think one of the best ideas the United States has come up with is the 501(c)(3) public charity, which is not the complete antidote to the C corporations that were also put across by the United States since World War II in ways that shouldn't have been, but the 501(c)(3) public charities are interesting. They tend to last longer. They take away the incentive to sell out, yet leave an ability to be an operational entity. You just have to do public good. You have to actually live and walk the walk and go and do that. But I think it's a fabulous structure. I mean, you, Cindy, how old is the EFF now?

CINDY COHN: 35. This is our 35th anniversary.

BREWSTER KAHLE: That's excellent. And the Internet Archive is like 28, 29 years old, and that's a long time for commercial, excuse me, for commercial entities or tech! Things in the tech world, they tend to turn over. So if you wanna build something long term, and you're willing to only do, as Lessig would put it, some rights reserved, or some profit motive reserved, then the 501(c)(3) public charity, a model other countries are adopting, is a mechanism of building infrastructure that can last a long time where you get your alignment with the public interest.

CINDY COHN: Yeah, I think that's right. And it's been interesting to me, being in this space for a really long time: the nonprofit salaries may not be as high, but the jobs are more stable. Like we don't have in our sector the waves of layoffs. I mean, occasionally, for sure, that is a thing that happens in the nonprofit digital rights sector. But I would say compared to the for-profit world, there’s a much more stable structure, um, because you don't have this gasoline idea, these kind of highs and lows and ups and downs. And that could be, you know, there's nothing wrong with riding that wave and making some money. But the question becomes, well, what do you do after that? Do you take that path to begin with? Or do you take that path later, when you've got some assets, you know, some people come outta school with loans and things like that.

BREWSTER KAHLE: So we need this intermediary between the academic, the dot edu, and the dot com, and I think the dot org is such a thing. And also there was a time when we did a lot in dot gov of bringing civic tech. And civic tech in Canada is up and running and wonderful. So there's things that we can do in that.
We can also spread these ideas into other sectors like banking. How about some nonprofit banks, please? Why don't we have some nonprofit housing that actually supports nonprofit workers? We're doing an experiment with that to try to help support people that want to work in San Francisco for nonprofits and not feel that they have to commute from hours away.
So can we go and take some of these ideas pioneered by Richard Stallman, Larry Lessig, Vint Cerf, the Cindy Cohns, and go and try it in new sectors? You're doing a law firm, one of the best of the Silicon Valley law firms, and you give away your product. Internet Archive gives away its product. Wikipedia gives away its product. This is, like, not supposed to happen, but it works really well. And it requires support and interest of people to work there and also to support it from the outside. But it functions so much better. It's less friction. It's easier for us to work with other non-profits than it is to work with for-profits.

JASON KELLEY: Well I'm glad that you brought up the nonprofit points and really dug into it because earlier, Brewster, you mentioned the reason you were able to fight back against the NSLs is that EFF has supporters that keep it going, and those same supporters, the people listening to this, are hopefully, and probably, the ones that help keep the Archive going. And I just wanted to make sure people know that the Archive is also supported by donors. And, uh, if people like it, there's nothing wrong with supporting both EFF and the Archive, and I hope everyone does both.

CINDY COHN: Yeah. There's a whole community. And one of the things that Brewster has really been a leader in is seeing and making space for us to think of ourselves as a community. Because we're stronger together. And I think that's another piece of the somewhat quiet work that Brewster and the Archive do is knitting together the open world into thinking of itself as an open world and, able to move together and leverage each other.

BREWSTER KAHLE: Well thank you for all the infrastructure EFF provides. And if anybody's in San Francisco, come over on a Friday afternoon! We give tours, and if I'm here, I give them and try to help answer questions. We even have ice cream. And so the idea is to go and invite people into this other alternative form of success that maybe they weren't taught about in business school, or, uh, you know, they want to go off and do something else.
That's fine, but at least understand a little bit of how the underlying structures of the internet, whether it's some of the original plumbing, um, some of these visions of Wikipedia, Internet Archive. How do we make all of this work? And it's by working together, trusting each other to try to do things right, even when the technology allows you to do things that are abusive. Stepping back from that and building, uh, the safeguards into the technology eventually, and celebrate what we can get done to support a better civic infrastructure.

CINDY COHN: That is the perfect place to end it. Thank you so much, Brewster, for coming on and bringing your inspiration to us.

JASON KELLEY: I loved that we wrapped up the season with Brewster because really there isn't anything more important, in a lot of ways, to freedom than a library. And the tool of freedom that Brewster built, the Internet Archive and all of the different pieces of it, is something that I think is so critical to how people think about the internet and what it can do, and honestly, it's taken for granted. I think once you start hearing Brewster talk about it, you realize just how important it is. I just love hearing from the person who thought of it and built it.

CINDY COHN: Yeah, he's so modest. The “I only had one idea,” right? Or two ideas: one is privacy, and the other is universal access to all the world's information. You know, just some little things.

JASON KELLEY: Just a few things that he built into practice.

CINDY COHN: Well, you know, it's him and a lot of other people; I think he'd be the first to point out that this is a sector, that there are a lot of people working in this area, and it's important that we think about it that way.
It does take the long view to build things that will last. And then I think he also really talked about the nonprofit sector and how, you know, that space is really important. And I liked his framing of it as being kind of in between the dot edu, the academics, and the dot com: the dot orgs play this important role in bringing the public into the conversation about tech, and that's certainly what he's done.

JASON KELLEY: I loved how much of a positive pitch this was for nonprofits. When a lot of people think of charities, they don't necessarily think about EFF or the Internet Archive, but this tech sector of nonprofits is, you know, that community you talked about, all working together to sort of build this structure that protects people's rights online and also gives them access to these incredible tools and projects and resources. And, you know, everyone listening to this is probably a part of that community in one way or another. It's much bigger than I think people realize.

CINDY COHN: Yeah. And whether you're contributing code or doing lawyering or doing activism, you know, there are spaces throughout, and those are just three of the things that we do.
But the other piece, and of course I was very honored that he told the story about national security letters, is that we can support each other. Right? When somebody in this community comes under attack, that's where EFF often shows up. And when, as he said, people have ideas and they wanna be able to develop them, the Archive provides the infrastructure. All of this stuff is really important, and important to lean into in this time when we're really seeing a lot of public institutions and nonprofit institutions coming under attack.
What I really love about this season, Jason, is the way we've been able to shine our little spotlight on a bunch of different pieces of the sector. And there are so many more. You know, I'm somebody who started in this digital world in the nineties, when I could present all of the case law about the internet on one piece of paper in a 20-minute presentation.
Watching this grow out and seeing that it's just the beginning has been really fun, and it's been really fun to be able to talk to all of these pieces of the community. And you know, to me the good news is that sometimes people's stories get presented as if they're alone; it's kind of a superhero narrative. There's this lone Brewster Kahle who's out there doing things, and of course some of that's true. Brewster's somebody who I readily point to when people need an example of somebody who did really well in tech but didn't completely become a money-grubbing jerk as a result of it, but instead, you know, plowed it back into the community. It's important to have people like that, but it's also important to recognize that this is a community, that we're building it, and that it's got plenty of space for the next person to show up and throw in ideas.
At least I hope that's how, you know, we fix the internet.

JASON KELLEY: And that's it for this episode and for this season. Thank you to Brewster for the conversation today, and to all of our guests this season for taking the time to share their insight, experience, and wisdom with us these past few months. Everybody who listens gets to learn a little bit more about how to fix the internet.
That is our goal at EFF. And every time I finish one of these conversations, I think, wow, there's a lot to do. So thank you so much for listening. If you wanna help us do that work, go to eff.org/pod, where you can donate and become a member. We have 30,000 members, but we could always use a few more, because there is a lot to fix.
Thank you so much. Our theme music is by Nat Keefe of BeatMower with Reed Mathis. And How to Fix the Internet is supported by the Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology. I'm Jason Kelley.

CINDY COHN: And I'm Cindy Cohn.

MUSIC CREDITS: This podcast is licensed Creative Commons Attribution 4.0 International, and includes the following music licensed Creative Commons Attribution 3.0 Unported by its creators: Drops of H2O (The Filtered Water Treatment) by J.Lang. Additional music, theme remixes, and sound design by Gaetan Harris.

Josh Richman

Executive Director Cindy Cohn Will Step Down After 25 Years with EFF

5 days 2 hours ago
EFF Launches Search for Successor to ‘Visionary Lawyer and Leader’

SAN FRANCISCO – Electronic Frontier Foundation Executive Director Cindy Cohn will step down by mid-2026 after more than 25 years with the organization and a decade as its top officer leading the fight for digital freedoms. 

EFF – defending digital privacy, free speech, and innovation since 1990 – is launching a search for Cohn’s successor. 

“It’s been the honor of my life to help EFF grow and become the strong, effective organization it is today, but it’s time to make space for new leadership. I also want to get back into the fight for civil liberties more directly than I can as the executive director of a thriving 125-person organization,” Cohn said. “I’m incredibly proud of all that we’ve built and accomplished. One of our former interns once called EFF the joyful warriors for internet freedom and I have always loved that characterization.” 

“I know EFF’s lawyers, activists and technologists will continue standing up for freedom, justice and innovation whether we’re fighting trolls, bullies, corporate oligarchs, clueless legislators or outright dictators,” she added. 

"Cindy Cohn has been a relentless advocate for the simple proposition that regular people have a fundamental right to privacy online,” said U.S. Sen. Ron Wyden, D-OR. “Her work – defending encryption, opposing warrantless NSA surveillance, and suing major corporations for violating customer privacy – has consistently put her on the side of users and individuals and against powerful entrenched interests. Cindy's steady leadership at EFF will be missed by everyone who believes the First and Fourth Amendments are just as necessary today as they were more than 200 years ago." 

Cohn, 61, first became involved with EFF in 1993, when EFF asked her to serve as the outside lead attorney in Bernstein v. Dept. of Justice, the successful First Amendment challenge to the U.S. export restrictions on cryptography. She served as EFF’s Legal Director as well as its General Counsel from 2000 through 2015, and she has served as Executive Director since then. She also has co-hosted EFF’s award-winning “How to Fix the Internet” podcast, which is about to conclude its sixth season. Her upcoming professional memoir covering her time at EFF, Privacy’s Defender: My Thirty-Year Fight Against Digital Surveillance, will be published in spring 2026 by MIT Press.  

Cohn was named to The NonProfit Times' 2020 Power & Influence Top 50. In 2018, Forbes included her as one of America's Top 50 Women in Tech. The National Law Journal named her one of the 100 most influential lawyers in America in 2013, noting: "[I]f Big Brother is watching, he better look out for Cindy Cohn." That publication also named her in 2006 for "rushing to the barricades wherever freedom and civil liberties are at stake online." In 2007, the National Law Journal named her one of the 50 most influential women lawyers in America.

In 2010, the Intellectual Property Section of the State Bar of California awarded Cohn its Intellectual Property Vanguard Award, and in 2012 the Northern California Chapter of the Society of Professional Journalists awarded her its James Madison Freedom of Information Award.

Cohn said she made the decision to step down more than a year ago, and later informed EFF’s Board of Directors and executive staff. The Board of Directors has assembled a search committee, which in turn has engaged leadership advisory firm Russell Reynolds Associates to conduct a search for EFF’s new executive director. Inquiries about the search can be directed to EFF@russellreynolds.com.  

The search committee hopes to hire someone next spring, with Cohn planning to remain at EFF for a transition period through early summer.  

“Simply put, Cindy Cohn is an EFF institution,” said Gigi Sohn, chair of EFF’s Board of Directors. “Under her leadership, the organization has grown tremendously, cementing its role as the premier defender of digital privacy, free speech and innovation in the U.S., and perhaps the world. The EFF Board thanks Cindy for her many years of service to EFF, first as Legal Director and for the past 10 years as Executive Director, as well as her willingness to help the organization through this leadership transition. We wish her all the best in her future endeavors, which undoubtedly will be equally as, if not more, successful.” 

“Cindy has been a huge part of EFF’s 35-year history and growth, and the organization simply wouldn’t be where it is today - at the forefront of defending civil liberties in the digital world - without her,” said EFF co-founder Mitch Kapor. “Her strong, compassionate leadership has set a clear and impactful road map for EFF’s work for years to come.” 

“Cindy Cohn is a visionary lawyer and leader who has helped make EFF the world’s foremost digital rights organization,” said American Civil Liberties Union Deputy Legal Director Ben Wizner. “She has also been a dear friend and mentor to so many of us, leading with her warmth and humor as much as her brilliance. I’m excited to see her next act and confident she’ll find new strategies for protecting our rights and liberties.” 

“Cindy is a force in the digital rights community,” said Center for Democracy & Technology President and CEO Alexandra Reeve Givens. “Her visionary leadership has pushed the field forward, championing the rights of individual users and innovators in a fast-changing digital world. Cindy is a tireless advocate for user privacy, free expression, and ensuring technology serves the public good. Her legacy at EFF stands not just in the policy battles and complex cases she’s won, but in the foundation she has built for the next generation of digital rights defenders.” 

For more about Cindy Cohn, with hi-res photo: https://www.eff.org/about/staff/cindy-cohn 

Contact: Josh Richman, Communications Director, jrichman@eff.org
Josh Richman

EFF Awards Spotlight ✨ Software Freedom Law Center, India

1 week 2 days ago

In 1992 EFF presented our very first awards recognizing key leaders and organizations advancing innovation and championing civil liberties and human rights online. Now in 2025 we're continuing to celebrate the accomplishments of people working toward a better future for everyone with the EFF Awards!

All are invited to attend the EFF Awards on Wednesday, September 10 at the San Francisco Design Center. Whether you're an activist, an EFF supporter, a student interested in cyberlaw, or someone who wants to munch on a strolling dinner with other likeminded individuals, anyone can enjoy the ceremony!

REGISTER TODAY!

GENERAL ADMISSION: $55 | CURRENT EFF MEMBERS: $45 | STUDENTS: $35

If you're not able to make it, we'll also be hosting a livestream of the event on Friday, September 12 at 12:00 PM PT. The event will also be recorded, and posted to YouTube and the Internet Archive after the livestream.

We are honored to present the three winners of this year's EFF Awards: Just Futures Law, Erie Meyer, and Software Freedom Law Center, India. But, before we kick off the ceremony next week, let's take a closer look at each of the honorees. And last, but certainly not least—Software Freedom Law Center, India, winner of the EFF Award for Defending Digital Freedoms:

Software Freedom Law Center, India is a donor-supported legal services organization based in India that brings together lawyers, policy analysts, students, and technologists to protect freedom in the digital world. It promotes innovation and open access to knowledge by helping developers make great free and open-source software, protects privacy and civil liberties for Indians by educating and providing free legal advice, and helps policymakers make informed and just decisions about use of technology. SFLC.IN tracks and participates in litigation, AI regulations, and free speech issues that are defining Indian technology. It also tracks internet shutdowns and censorship incidents across India, provides digital security training, and has launched the Digital Defenders Network, a pan-Indian network of lawyers committed to protecting digital rights. It has conducted landmark litigation cases, petitioned the government of India on freedom of expression and internet issues, and campaigned for WhatsApp and Facebook to fix a feature of their platform that has been used to harass women in India. 

We're excited to celebrate SFLC.IN and the other EFF Award winners in person in San Francisco on September 10! We hope that you'll join us there.

Thank you to Fastly, DuckDuckGo, Corellium, and No Starch Press for their year-round support of EFF's mission.

Want to show your team’s support for EFF? Sponsorships ensure we can continue hosting events like this to build community among digital rights supporters. Please visit eff.org/thanks or contact tierney@eff.org for more information on corporate giving and sponsorships.

EFF is dedicated to a harassment-free experience for everyone, and all participants are encouraged to view our full Event Expectations.

Questions? Email us at events@eff.org.

Christian Romero

Age Verification Is A Windfall for Big Tech—And A Death Sentence For Smaller Platforms

1 week 2 days ago

Update September 10, 2025: Bluesky announced today that it would implement age verification measures in South Dakota and Wyoming to comply with laws there. Bluesky continues to block access in Mississippi.

If you live in Mississippi, you may have noticed that you are no longer able to log into your Bluesky or Dreamwidth accounts from within the state. That’s because, in a chilling early warning sign for the U.S., both social platforms decided to block all users in Mississippi from their services rather than risk hefty fines under the state’s oppressive age verification mandate. 

If this sounds like censorship to you, you’re right—it is. But it’s not these small platforms’ fault. This is the unfortunate result of Mississippi’s wide-sweeping age verification law, H.B. 1126. Though the law had previously been blocked by a federal district court, the Supreme Court lifted that injunction last month, even as one justice (Kavanaugh) concluded that the law is “likely unconstitutional.” This allows H.B. 1126 to go into effect while the broader constitutional challenge works its way through the courts. EFF has opposed H.B. 1126 from the start, arguing consistently and constantly that it violates all internet users’ First Amendment rights, seriously risks our privacy, and forces platforms to implement invasive surveillance systems that ruin our anonymity.

Lawmakers often sell age-verification mandates as a silver bullet for Big Tech’s harms, but in practice, these laws do nothing to rein in the tech giants. Instead, they end up crushing smaller platforms that can’t absorb the exorbitant costs. Now that Mississippi’s mandate has gone into effect, the reality is clear: age verification laws entrench Big Tech’s dominance, while pushing smaller communities like Bluesky and Dreamwidth offline altogether. 

Sorry Mississippians, We Can’t Afford You

Bluesky was the first platform to make the announcement. In a public blogpost, Bluesky condemned H.B. 1126’s broad scope, barriers to innovation, and privacy implications, explaining that the law forces platforms to “make every Mississippi Bluesky user hand over sensitive personal information and undergo age checks to access the site—or risk massive fines.” As Bluesky noted, “This dynamic entrenches existing big tech platforms while stifling the innovation and competition that benefits users.” Instead, Bluesky made the decision to cut off Mississippians entirely until the courts consider whether to overturn the law. 

About a week later, we saw a similar announcement from Dreamwidth, an open-source online community similar to LiveJournal where users share creative writing, fanfiction, journals, and other works. In its post, Dreamwidth shared that it too would have to resort to blocking the IP addresses of all users in Mississippi because it could not afford the hefty fines. 

Dreamwidth wrote: “Even a single $10,000 fine would be rough for us, but the per-user, per-incident nature of the actual fine structure is an existential threat.” The service also expressed fear that being involved in the lawsuit against Mississippi left it particularly vulnerable to retaliation—a clear illustration of the chilling effect of these laws. For Dreamwidth, blocking Mississippi users entirely was the only way to survive. 

Age Verification Mandates Don’t Rein In Big Tech—They Entrench It

Proponents of age verification claim that these mandates will hold Big Tech companies accountable for their outsized influence, but really the opposite is true. As we can see from Mississippi, age verification mandates concentrate and consolidate power in the hands of the largest companies—the only entities with the resources to build costly compliance systems and absorb potentially massive fines. While megacorporations like Google (with YouTube) and Meta (with Instagram) are already experimenting with creepy new age-estimation tech on their social platforms, smaller sites like Bluesky and Dreamwidth simply cannot afford the risks. 

We’ve already seen how this plays out in the UK. When the Online Safety Act came into force recently, platforms like Reddit, YouTube, and Spotify implemented broad (and extremely clunky) age verification measures while smaller sites, including forums on parenting, green living, and gaming on Linux, were forced to shutter. Take, for example, the Hamster Forum, “home of all things hamstery,” which announced in March 2025 that the OSA would force it to shut down its community message boards. Instead, users were directed to migrate over to Instagram with this wistful disclaimer: “It will not be the same by any means, but . . . We can follow each other and message on there and see each others [sic] individual posts and share our hammy photos and updates still.” 

This perfectly illustrates the market impact of online age verification laws. When smaller platforms inevitably cave under the financial pressure of these mandates, users will be pushed back to the social media giants. These huge companies—those that can afford expensive age verification systems and aren’t afraid of a few $10,000 fines while they figure out compliance—will end up getting more business, more traffic, and more power to censor users and violate their privacy. 

This consolidation of power is a dream come true for the Big Tech platforms, but it’s a nightmare for users. While the megacorporations get more traffic and a whole lot more user data (read: profit), users are left with far fewer community options and a bland, corporate surveillance machine instead of a vibrant public sphere. The internet we all fell in love with is a diverse and colorful place, full of innovation, connection, and unique opportunities for self-expression. That internet—our internet—is worth defending.

TAKE ACTION

Don't let Congress censor the internet

Molly Buckley

EFF Joins 55 Civil Society Organizations Urging the End of Sanctions on UN Special Rapporteur Francesca Albanese

1 week 2 days ago

Following the U.S. government's overreaching decision to impose sanctions against Francesca Albanese, the United Nations Special Rapporteur on the situation of human rights in the Palestinian territories occupied since 1967, EFF joined more than 50 civil society organizations in calling for the U.S. to lift the sanctions. 

The U.S. sanctions on Francesca Albanese were formally issued in July 2025, pursuant to Section 1(a)(ii)(A) of President Trump’s Executive Order 14203, which the U.S. imposed on the International Criminal Court (ICC) in February for having “engaged in illegitimate and baseless actions targeting America and our close ally Israel.” Under this Executive Order, the State Department is instructed to name specific people who have worked with or for the ICC. Rapporteur Albanese joins several ICC judges and the lead prosecutor in having their U.S. property and interests in property blocked, as well as facing restrictions on entering the country, banking, and more.

One of the reasons cited in the far-reaching U.S. sanction is Albanese’s engagement with the ICC to investigate or prosecute nationals of the U.S. and Israel. The sanction came just days after the publication of the Special Rapporteur’s recent report to the UN Human Rights Council, “From economy of occupation to economy of genocide.” In her report, the Special Rapporteur “urges the International Criminal Court and national judiciaries to investigate and prosecute corporate executives and/or corporate entities for their part in the commission of international crimes and laundering of the proceeds from those crimes.”

As a UN Special Rapporteur, Albanese’s role is to conduct independent research, gather information, and prepare reports on human rights situations, including documenting violations and providing recommendations to the Human Rights Council and other human rights bodies. Special Rapporteurs are independent experts chosen by the UN Human Rights Council in Geneva. They do not represent the UN or hold any formal authority, but their reports and findings are essential for advocacy in transnational situations, informing prosecutors at the International Criminal Court, and pressuring countries over human rights abuses.

The unilateral sanctions imposed on the UN Special Rapporteur not only target her as an individual but also threaten the broader international human rights framework, undermining crucial work in monitoring and reporting on human rights issues. Such measures risk politicizing rapporteurs’ mandates, discouraging frank reporting, and creating a chilling effect on human rights defenders more broadly. With the 80th session of the UN General Assembly opening in New York this September, these sanctions and travel restrictions further impinge on the Special Rapporteur’s capacity to fulfill her mandate and report on human rights abuses in Palestine.

The Special Rapporteur’s report identifies how AI, cloud services, biometric surveillance, and predictive policing technologies have reinforced military operations, population control, and the unlawful targeting of civilians in the ongoing genocide in Gaza. More specifically, it illuminates the role of U.S. tech giants like Microsoft, Alphabet (Google’s parent company), Amazon, and IBM in providing dual-use infrastructure to “integrate mass data collection and surveillance, while profiting from the unique testing ground for military technology offered by the occupied Palestinian territory.”

This report is well within her legal mandate to investigate and report on human rights issues in Palestine and provide critical oversight and accountability for human rights abuses. This work is particularly essential at a time when the very survival of Palestinians in the occupied Gaza Strip is at stake—journalists are being killed with deplorable frequency; internet shutdowns and biased censorship by social media platforms are preventing vital information from circulating within and leaving Gaza; and U.S.-based tech companies are continuing to be opaque about their role in providing technologies to the Israeli authorities for use in the ongoing genocide against Palestinians, despite the mounting evidence.

EFF has repeatedly called for greater transparency relating to the role of Big Tech companies like Google, Amazon, and Microsoft in human rights abuses across Gaza and the West Bank, with these U.S.-based companies coming under pressure to reveal more about the services they provide and the nature of their relationships with the Israeli forces engaging in the military response. Without greater transparency, the public cannot tell whether these companies are complying with human rights standards—both those set by the United Nations and those they have publicly set for themselves. We know that this conflict has resulted in alleged war crimes and has involved massive, ongoing surveillance of civilians and refugees living under what international law recognizes as an illegal occupation. That kind of surveillance requires significant technical support, and it seems unlikely that it could occur without any ongoing involvement by the companies providing the platforms.

Top UN human rights officials have called for the reversal of the sanctions against the Special Rapporteur, voicing serious concerns about the dangerous precedent this sets in undermining human rights. The UN High Commissioner for Human Rights, Volker Türk, called for a prompt reversal of the sanctions and noted that, “even in the face of fierce disagreement, UN member states should engage substantively and constructively, rather than resort to punitive measures.” Similarly, UN Spokesperson Stéphane Dujarric noted that whilst Member States “are perfectly entitled to their views and to disagree with” experts’ reports, they should still “engage with the UN’s human rights architecture.”

In a press conference, Albanese said she believed that the sanctions were calculated to weaken her mission, and questioned why they had even been introduced: “for having exposed a genocide? For having denounced the system? They never challenged me on the facts.”

The United States must reverse these sanctions, and respect human rights for all—not just for the people they consider worthy of having them.

Read our full civil society letter here.

Electronic Frontier Foundation

California Lawmakers: Support S.B. 524 to Rein in AI Written Police Reports

1 week 3 days ago

EFF urges California state lawmakers to pass S.B. 524, authored by Sen. Jesse Arreguín. This bill is an important first step in regaining control over police using generative AI to write their narrative police reports. 

This bill does several important things: It mandates that police reports written by AI include disclaimers on every page or within the body of the text that make it clear that this report was written in part or in total by a computer. It also says that any reports written by AI must retain their first draft. That way, it should be easier for defense attorneys, judges, police supervisors, or any other auditing entity to see which portions of the final report were written by AI and which parts were written by the officer. Further, the bill requires officers to sign and verify that they read the report and its facts are correct. And it bans AI vendors from selling or sharing the information a police agency provided to the AI.

These common-sense, first-step reforms are important: watchdogs are struggling to figure out where and how AI is being used in a police context. In fact, a popular AI police report writing tool, Axon’s Draft One, would be out of compliance with this bill, which would require Axon to redesign the tool to make it more transparent.

Draft One takes audio from an officer’s body-worn camera and uses AI to turn that dialogue into a narrative police report. Because independent researchers have been unable to test it, there are important questions about how the system handles things like sarcasm, out-of-context comments, or interactions with members of the public who speak languages other than English. Another major concern is Draft One’s inability to keep track of which parts of a report were written by people and which parts were written by AI. By design, the product does not retain different iterations of the draft—making it easy for an officer to say, “I didn’t lie in my police report, the AI wrote that part.”

All lawmakers should pass regulations on AI-written police reports. This technology could be nearly everywhere, and soon. Axon is a top supplier of body-worn cameras in the United States, which means it has a massive ready-made customer base. Through the bundling of products, AI-written police reports could soon be in use at a vast percentage of police departments.

AI-written police reports are unproven in terms of their accuracy and their overall effects on the criminal justice system. Vendors still have a long way to go to prove this technology can be transparent and auditable. While it would not solve all of the many problems of AI encroaching on the criminal justice system, S.B. 524 is a good first step to rein in an unaccountable piece of technology.

We urge California lawmakers to pass S.B. 524. 

Matthew Guariglia

EFF Awards Spotlight ✨ Erie Meyer

1 week 3 days ago

In 1992 EFF presented our very first awards recognizing key leaders and organizations advancing innovation and championing civil liberties and human rights online. Now in 2025 we're continuing to celebrate the accomplishments of people working toward a better future for everyone with the EFF Awards!

All are invited to attend the EFF Awards on Wednesday, September 10 at the San Francisco Design Center. Whether you're an activist, an EFF supporter, a student interested in cyberlaw, or someone who wants to munch on a strolling dinner with other likeminded individuals, anyone can enjoy the ceremony!

REGISTER TODAY!

GENERAL ADMISSION: $55 | CURRENT EFF MEMBERS: $45 | STUDENTS: $35

If you're not able to make it, we'll also be hosting a livestream of the event on Friday, September 12 at 12:00 PM PT. The event will also be recorded, and posted to YouTube and the Internet Archive after the livestream.

We are honored to present the three winners of this year's EFF Awards: Just Futures Law, Erie Meyer, and Software Freedom Law Center, India. But, before we kick off the ceremony next week, let's take a closer look at each of the honorees. This time—Erie Meyer, winner of the EFF Award for Protecting Americans' Data:

Erie Meyer is a Senior Fellow at the Vanderbilt Policy Accelerator where she focuses on the intersection of technology, artificial intelligence, and regulation, and a Senior Fellow at the Georgetown Law Institute for Technology Law & Policy. Since January 20, Meyer has helped organize former government technologists to stand up for the privacy and integrity of governmental systems that hold Americans’ data. In addition to organizing others, she filed a declaration in federal court in February warning that 12 years of critical records could be irretrievably lost in the CFPB’s purge by the Trump Administration’s Department of Government Efficiency. In April, she filed a declaration in another case warning about using private-sector AI on government information. That same month, she testified to the House Oversight Subcommittee on Cybersecurity, Information Technology, and Government Innovation that DOGE is centralizing access to some of the most sensitive data the government holds—Social Security records, disability claims, even data tied to national security—without a clear plan or proper oversight, warning that “DOGE is burning the house down and calling it a renovation.” 

We're excited to celebrate Erie Meyer and the other EFF Award winners in person in San Francisco on September 10! We hope that you'll join us there.

Thank you to Fastly, DuckDuckGo, Corellium, and No Starch Press for their year-round support of EFF's mission.

Want to show your team’s support for EFF? Sponsorships ensure we can continue hosting events like this to build community among digital rights supporters. Please visit eff.org/thanks or contact tierney@eff.org for more information on corporate giving and sponsorships.

EFF is dedicated to a harassment-free experience for everyone, and all participants are encouraged to view our full Event Expectations.

Questions? Email us at events@eff.org.

Christian Romero

From Libraries to Schools: Why Organizations Should Install Privacy Badger

1 week 3 days ago

In an era of pervasive online surveillance, organizations have an important role to play in protecting their communities’ privacy. Millions of people browse the web on computers provided by their schools, libraries, and employers. By default, popular browsers on these computers leave people exposed to hidden trackers.

Organizations can enhance privacy and security on their devices by installing Privacy Badger, EFF’s free, open source browser extension that automatically blocks trackers. Privacy Badger is already used by millions to fight online surveillance and take back control of their data.

Why Should Organizations Install Privacy Badger on Managed Devices?

Protect People from Online Surveillance

Most websites contain hidden trackers that let advertisers, data brokers, and Big Tech companies monitor people’s browsing activity. This surveillance has serious consequences: it fuels scams, government spying, predatory advertising, and surveillance pricing.

By installing Privacy Badger on managed devices, organizations can protect entire communities from these harms. Most people don’t realize the risks of browsing the web unprotected. Organizations can step in to make online privacy available to everyone, not just the people who know they need it. 

Ad Blocking is a Cybersecurity Best Practice

Privacy Badger helps reduce cybersecurity threats by blocking ads that track you (unfortunately, that’s most ads these days). Targeted ads aren’t just a privacy nightmare. They can also be a vehicle for malware and phishing attacks. Cybercriminals have tricked legitimate ad networks into distributing malware, a tactic known as malvertising.

The risks are serious enough that the U.S. Cybersecurity and Infrastructure Security Agency (CISA) recommends federal agencies deploy ad-blocking software. The NSA, CIA, and other intelligence agencies already follow this guidance. These agencies use advertising systems to surveil others, but block ads for their own employees.

All organizations, not just spy agencies, should make ad blocking part of their security strategy.

A Tracker Blocker You Can Trust

Four million users already trust Privacy Badger, which has been recommended by The New York Times' Wirecutter, Consumer Reports, and The Washington Post.

Trust is crucial when choosing an ad-blocking or tracker-blocking extension because they require high levels of browser permissions. Unfortunately, not all extensions deserve that trust. Avast’s “privacy” extension was caught collecting and selling users’ browsing data to third parties—the very practice it claimed to prevent.

Privacy Badger is different. EFF released it over a decade ago, and the extension has been open-source—meaning other developers and researchers can inspect its code—that entire time. Because it is built by a nonprofit with a 35-year history of fighting for user rights, organizations can trust that Privacy Badger works for its users, not for profit.

Which Organizations Should Deploy Privacy Badger?

All of them! Installing Privacy Badger on managed devices improves privacy and security across an organization. That said, Privacy Badger is most beneficial for two types of organizations: libraries and schools. Both can better serve their communities by safeguarding the computers they provide.

Libraries

The American Library Association (ALA) already recommends installing Privacy Badger on public computers to block third-party tracking. Librarians have a long history of defending privacy. The ALA’s guidance is a natural extension of that legacy for the digital age. While librarians protect the privacy of books people check out, Privacy Badger protects the privacy of websites they visit on library computers. 

Millions of Americans depend on libraries for internet access. That makes libraries uniquely positioned to promote equitable access to private browsing. With Privacy Badger, libraries can ensure that safe and private browsing is the default for anyone using their computers. 

Libraries also play a key role in promoting safe internet use through their digital literacy trainings. By including Privacy Badger in these trainings, librarians can teach patrons about a simple, free tool that protects their privacy and security online.

Schools

Schools should protect their students from online surveillance by installing Privacy Badger on computers they provide. Parents are rightfully worried about their children’s privacy online, with a Pew survey showing 85% worry about advertisers using data about what kids do online to target ads. Deploying Privacy Badger is a concrete step schools can take to address these concerns.

By blocking online trackers, schools can protect students from manipulative ads and limit the personal data fueling social media algorithms. Privacy Badger can even block tracking in Ed Tech products that schools require students to use. Alarmingly, a Human Rights Watch analysis of Ed Tech products found that 89% shared children’s personal data with advertisers or other companies.

Instead of deploying invasive student monitoring tools, schools should keep students safe by keeping their data safe. Students deserve to learn without being tracked, profiled, and targeted online. Privacy Badger can help make that happen.

How Can Organizations Deploy Privacy Badger On Managed Devices?

System administrators can deploy and configure Privacy Badger on managed devices by setting up an enterprise policy. Chrome, Firefox, and Edge provide instructions for automatically installing extensions organization-wide. You’ll be able to configure certain Privacy Badger settings for all devices. For example, you can specify websites where Privacy Badger is disabled or prevent Privacy Badger’s welcome page from popping up on computers that get reset after every session. 
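
As a rough sketch of what that deployment could look like, the following Python script stages a Firefox policies.json that force-installs Privacy Badger and pre-configures the two options mentioned above. The extension ID, install URL, and managed-setting names here are assumptions for illustration only; verify them against Privacy Badger's own deployment documentation and your browser's policy templates before rolling anything out.

    # Sketch: write a Firefox enterprise policy that force-installs Privacy Badger
    # on managed devices. The ID, URL, and setting names below are assumed
    # placeholders; verify them against official deployment docs.
    import json
    from pathlib import Path

    PB_ID = "jid1-MnnxcxisBPnSXQ@jetpack"  # assumed Privacy Badger Firefox extension ID

    policy = {
        "policies": {
            # Install the extension in every managed profile.
            "ExtensionSettings": {
                PB_ID: {
                    "installation_mode": "force_installed",
                    # Assumed URL of the latest release on addons.mozilla.org.
                    "install_url": "https://addons.mozilla.org/firefox/downloads/latest/privacy-badger17/latest.xpi",
                }
            },
            # Managed settings the extension reads on startup (assumed key names).
            "3rdparty": {
                "Extensions": {
                    PB_ID: {
                        # Sites where Privacy Badger should be disabled.
                        "disabledSites": ["intranet.example.edu"],
                        # Skip the welcome page on machines reset after each session.
                        "showIntroPage": False,
                    }
                }
            },
        }
    }

    # Firefox reads this file from a "distribution" folder in its install directory.
    out = Path("distribution/policies.json")
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(json.dumps(policy, indent=2))
    print(f"wrote {out}")

Chrome and Edge follow the same idea with different plumbing: an enterprise policy (via the registry or a managed-preferences file) pointing at the extension's Web Store ID.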

We recommend educating users about the addition of Privacy Badger and what it does. Since some websites deeply embed tracking, privacy protections can occasionally break website functionality. For example, a video might not play or a comments section might not appear. If this happens, users should know that they can easily turn off Privacy Badger on any website. Just open the Privacy Badger popup and click “Disable for this site.” 

Don't hesitate to reach out if you're interested in deploying Privacy Badger at scale. Our team is here to help you protect your community's privacy. And if you're already deploying Privacy Badger across your organization, we'd love to hear how it’s going.

Make Private Browsing the Default at Your Organization

Schools, libraries, and other organizations can make private browsing the norm by deploying Privacy Badger on devices they manage. If you work at an organization with managed devices, talk to your IT team about Privacy Badger. You can help strengthen the security and privacy of your entire organization while joining the fight against online surveillance.

Lena Cohen

Verifying Trust in Digital ID Is Still Incomplete

1 week 3 days ago

In the past few years, governments across the world have rolled out different digital identification options, and now there are efforts encouraging online companies to implement identity and age verification requirements with digital ID in mind. This blog is the second in a short series that explains digital ID and the pending use case of age verification. Upcoming posts will evaluate what real protections we can implement with current digital ID frameworks and discuss how better privacy and controls can keep people safer online.

Digital identity encompasses various aspects of an individual's identity that are presented and verified either over the internet or in person. This could mean a digital credential issued by a certification body or a mobile driver’s license provisioned to someone’s mobile wallet. These credentials can be presented in plain text on a device, as a scannable QR code, or by tapping your device to a Near Field Communication (NFC) reader. There are other ways to present credential information that are a little more privacy-preserving, but in practice those three methods are how we are seeing digital ID used today.

Advocates of digital ID often use a framework they call the "Triangle of Trust." This is usually presented as a triangle of exchange between the holder of an ID—those who use a phone or wallet application to access a service; the issuer of an ID—normally a government entity, like the state Departments of Motor Vehicles in the U.S., or a banking system; and the verifier of an ID—the entity that wants to confirm your identity, such as law enforcement, a university, a government benefits office, a porn site, or an online retailer.

This triangle implies that the issuer and verifier—for example, the government that provides the ID and the website checking your age—never need to talk to one another. By design, this theoretically prevents your ID from phoning home every time you verify it with another party, avoiding the tracking and surveillance threats that come with that.
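
To make the no-phone-home idea concrete, here is a deliberately tiny Python sketch (using the third-party cryptography package): the issuer signs an "over 18" claim once when provisioning the credential, and any verifier can later check that signature offline using only the issuer's public key. Real credential formats are far more elaborate, and every name below is illustrative.

    # Toy model of the "Triangle of Trust": the issuer signs a claim once;
    # verifiers check it locally with the issuer's public key, so the issuer
    # never learns where or when the credential is presented.
    # Requires: pip install cryptography
    import json
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Issuer side (e.g., a DMV): done once, at provisioning time.
    issuer_key = Ed25519PrivateKey.generate()
    issuer_public_key = issuer_key.public_key()  # published for all verifiers

    claim = json.dumps({"age_over_18": True}).encode()  # real IDs carry far more
    signature = issuer_key.sign(claim)  # stored in the holder's wallet with the claim

    # Verifier side (e.g., a website): done at presentation time, offline.
    def verify_claim(presented_claim: bytes, presented_signature: bytes) -> bool:
        """Check the issuer's signature without ever contacting the issuer."""
        try:
            issuer_public_key.verify(presented_signature, presented_claim)
            return True
        except InvalidSignature:
            return False

    # The holder hands over (claim, signature); the verifier checks locally.
    print(verify_claim(claim, signature))                      # True
    print(verify_claim(b'{"age_over_18": false}', signature))  # False: altered claim

Note what this sketch leaves unconstrained: nothing stops the verifier from demanding the whole ID instead of a single claim, or from storing whatever it receives.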

But it also makes a lot of questionable assumptions, such as:

1) The verifier will only ever ask for a limited amount of information.

2) The verifier won’t store the information it collects.

3) The verifier is always trustworthy.

The third assumption is especially problematic. How do you trust that the verifier will protect your most personal information and not use, store, or sell it beyond what you have consented to? Any of the following could be verifiers:

  • Law enforcement when doing a traffic stop and verifying your ID as valid.
  • A government benefits office that requires ID verification to sign up for social security benefits.
  • A porn site in a state or country which requires age verification or identity verification before allowing access.
  • An online retailer selling products like alcohol or tobacco.

Looking at the triangle again, this isn’t quite an equal exchange. Your personal ID, like a driver’s license or government ID, is both one of the most centralized and most sensitive documents you have: you can’t control how it is issued or create your own, and you have to go through your government to obtain one. This relationship will always be imbalanced. But we have to make sure digital ID does not exacerbate these imbalances.

The effort to answer the question of how to prevent verifier abuse is ongoing. But instead of working through these harms first, the push for this technology is being fast-tracked by governments around the world scrambling to solve what they see as a crisis of online harms by mandating age verification. And current implementations of the Triangle of Trust have already proven disastrous.

One key example of the speed of implementation outpacing proper protections is the Digital Credential API. Initially launched by Google and now supported by Apple, it lets apps and websites request information from your digital ID. The API arrived on people’s devices with no limits or checks on what information verifiers can seek—incentivizing verifiers to over-ask for ID information beyond the question of whether a holder is over a certain age, simply because they can.

The Digital Credential API also incentivizes a variety of websites that aren’t required to collect ID information, and did not commonly do so previously, to start asking for it. For example, food delivery services, medical services, gaming sites, and literally anyone else interested in being a verifier may become one tomorrow with digital ID and the Digital Credential API. This is both an erosion of personal privacy and a pathway into further surveillance. There must be established limitations and scope, including:

  • Verifiers establishing who they are and what they plan to ask from holders, along with an established plan for transparency about verifiers and their data retention policies.
  • Ways to identify and report abusive verifiers, as well as real consequences, like revoking or blocking a verifier from requesting IDs in the future.
  • Unlinkable presentations that do not allow for verifier and issuer collusion, and that share no data between the verifiers you attest to, preventing tracking of your movements in person or online every time you attest your age.

A further point of concern arises in cases of abuse or deception. A malicious verifier can send a request with no limiting mechanisms or checks, and a user who rejects the request could be fully blocked from the website or application. There must be provisions that ensure people have access to vital services that will require age verification from visitors.

Governments’ efforts to tackle verifiers potentially abusing digital ID requests haven’t come to fruition yet. For example, the EU Commission recently launched its age verification “mini app” ahead of the EU ID wallet planned for 2026. The mini app will not include the verifier registry that EU regulators had promised and then withdrew. Without verifier accountability, the wallet cannot tell if a request is legitimate. As a result, verifiers and issuers will demand verification from the people who want to use online services, but those same people are unable to insist on verification and accountability from the other sides of the triangle.

While digital ID is pushed as the solution to the problem of uploading IDs to each site users access, the security and privacy of these systems vary based on implementation. But when privacy is involved, regulators must make room for negotiation: there should be more thoughtful and protective measures for holders, who will be interacting with more and more potential verifiers over time. Otherwise, digital ID solutions will just exacerbate existing harms and inequalities rather than improving internet accessibility and information access for all.

Alexis Hancock

EFF Statement on ICE Use of Paragon Solutions Malware

1 week 4 days ago

This statement can be attributed to EFF Senior Staff Technologist Cooper Quintin

It was recently reported by Jack Poulson on Substack that ICE has reactivated its $2 million contract with Paragon Solutions, a cyber-mercenary and spyware manufacturer.

The reactivation of the contract between the Department of Homeland Security and Paragon Solutions, a known spyware vendor, is extremely troubling.

Paragon's “Graphite” malware has been implicated in widespread misuse by the Italian government. Researchers at Citizen Lab at the Munk School of Global Affairs at the University of Toronto and with Meta found that it has been used in Italy to spy on journalists and civil society actors, including humanitarian workers. Without strong legal guardrails, there is a risk that the malware will be misused in a similar manner by the U.S. Government.

These reports undermine Paragon Solutions’ public marketing of itself as a more ethical provider of surveillance malware.

Reportedly, the contract is being reactivated because the US arm of Paragon Solutions was acquired by a Miami-based private equity firm, AE Industrial Partners, and then merged into a Virginia-based cybersecurity company, REDLattice, allowing ICE to circumvent Executive Order 14093, which bans the acquisition of spyware controlled by a foreign government or person. Even though this order was always insufficient to prevent the acquisition of dangerous spyware, it was the best protection we had. This end run around the executive order both ignores the spirit of the rule and does nothing to prevent misuse of Paragon malware for human rights abuses. Nor will it prevent insider threats at Paragon from using the malware to spy on US government officials, or US government officials from misusing it to spy on their personal enemies, rivals, or spouses.

The contract between Paragon and ICE requires all US users to adjust their threat models and take extra precautions. Paragon’s Graphite isn’t magical; it’s still just malware. It still needs a zero-day exploit to compromise a phone with the latest security updates, and those are expensive. The best things you can do to protect yourself against Graphite are to keep your phone up to date and to enable Lockdown Mode if you are using an iPhone or Advanced Protection Mode on Android. Turning on disappearing messages is also helpful: that way, if someone in your network does get compromised, you don’t also reveal your entire message history. For more tips on protecting yourself from malware, check out our Surveillance Self-Defense guides.

Related Cases: AlHathloul v. DarkMatter Group
Cooper Quintin

EFF Awards Spotlight ✨ Just Futures Law

1 week 4 days ago

In 1992 EFF presented our very first awards recognizing key leaders and organizations advancing innovation and championing civil liberties and human rights online. Now in 2025 we're continuing to celebrate the accomplishments of people working toward a better future for everyone with the EFF Awards!

All are invited to attend the EFF Awards on Wednesday, September 10 at the San Francisco Design Center. Whether you're an activist, an EFF supporter, a student interested in cyberlaw, or someone who wants to munch on a strolling dinner with other likeminded individuals, anyone can enjoy the ceremony!

REGISTER TODAY!

GENERAL ADMISSION: $55 | CURRENT EFF MEMBERS: $45 | STUDENTS: $35

If you're not able to make it, we'll also be hosting a livestream of the event on Friday, September 12 at 12:00 PM PT. The event will also be recorded, and posted to YouTube and the Internet Archive after the livestream.

We are honored to present the three winners of this year's EFF Awards: Just Futures Law, Erie Meyer, and Software Freedom Law Center, India. But, before we kick off the ceremony next week, let's take a closer look at each of the honorees. First up—Just Futures Law, winner of the EFF Award for Leading Immigration and Surveillance Litigation:

Just Futures Law is a women-of-color-led law project that recognizes how surveillance disproportionately impacts immigrants and people of color in the United States. In the past year, Just Futures sued the Department of Homeland Security and its subagencies seeking a court order to compel the agencies to release records on their use of AI and other algorithms, and sued the Trump Administration for prematurely halting Haiti’s Temporary Protected Status, a humanitarian program that allows hundreds of thousands of Haitians to temporarily remain and work in the United States due to Haiti’s current conditions of extraordinary crises. It has represented activists in their fight against tech giants like Clearview AI, it has worked with Mijente to launch the TakeBackTech fellowship to train new advocates on grassroots-directed research, and it has worked with Grassroots Leadership to fight for the release of detained individuals under Operation Lone Star.

We're excited to celebrate Just Futures Law and the other EFF Award winners in person in San Francisco on September 10! We hope that you'll join us there.

Thank you to Fastly, DuckDuckGo, Corellium, and No Starch Press for their year-round support of EFF's mission.

Want to show your team’s support for EFF? Sponsorships ensure we can continue hosting events like this to build community among digital rights supporters. Please visit eff.org/thanks or contact tierney@eff.org for more information on corporate giving and sponsorships.

EFF is dedicated to a harassment-free experience for everyone, and all participants are encouraged to view our full Event Expectations.

Questions? Email us at events@eff.org.

Christian Romero

🤐 This Censorship Law Turns Parents Into Content Cops | EFFector 37.11

1 week 4 days ago

School is back in session! Perfect timing to hit the books and catch up on the latest digital rights news. We've got you covered with bite-sized updates in this issue of our EFFector newsletter.

This time, we're breaking down why Wyoming’s new age verification law is a free speech disaster. You’ll also read about a big win for transparency around police surveillance, how the Trump administration’s war on “woke AI” threatens civil liberties, and a welcome decision in a landmark human rights case.

Prefer to listen? Be sure to check out the audio companion to EFFector! We're interviewing EFF staff about some of the important issues they are working on. This time, EFF Legislative Activist Rindala Alajaji discusses the real harms of age verification laws like the one passed in Wyoming. Tune in on YouTube or the Internet Archive.

LISTEN TO EFFECTOR

EFFECTOR 37.11 - This Censorship Law Turns Parents Into Content Cops

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero

What WhatsApp’s “Advanced Chat Privacy” Really Does

1 week 5 days ago

In April, WhatsApp launched its “Advanced Chat Privacy” feature which, once enabled, disables certain AI features in chats and prevents conversations from being exported. Since its launch, an inaccurate viral post has been ping-ponging around social networks, creating confusion about what exactly it does.

The viral post falsely claims that if you do not enable Advanced Chat Privacy, Meta’s AI tools will be able to access your private conversations. This isn’t true, and it misrepresents both how Meta AI works and what Advanced Chat Privacy is.

The confusion seems to stem from the fact that Meta AI can be invoked through a number of methods, including in any group chat with the @Meta AI command. While the chat contents between you and other people are always end-to-end encrypted on the app, what you say to Meta AI is not. Similarly, if you or anyone else in the chat chooses to use Meta AI's “Summarize” feature, which uses Meta’s “Private Processing” technology, that feature routes the text of the chat through Meta’s servers. However, the company claims that it cannot view the content of those messages. This feature remains opt-in, so it's up to you to decide if you want to use it. The company also recently released the results of two audits detailing the issues that have been found thus far and what it has done to fix them.

For example, if you and your buddy are chatting, and your friend types in @Meta AI and asks it a question, that part of the conversation, which you can both see, is not end-to-end encrypted, and is usable for AI training or whatever other purposes are included in Meta’s privacy policy. But otherwise, chats remain end-to-end encrypted.

Advanced Chat Privacy offers some control over this. The new privacy feature isn’t a universal setting in WhatsApp; you can enable or disable it on a per-chat basis, but it’s turned off by default. When enabled, Advanced Chat Privacy does three core things:

  • Blocks anyone in the chat from exporting the chats,
  • Disables auto-downloading media to chat participants’ phones, and
  • Disables some Meta AI features

Aside from disabling some Meta AI features, Advanced Chat Privacy can be useful in other ways. For example, while someone can always screenshot chats, if you’re concerned about someone easily exporting an entire group chat history, Advanced Chat Privacy makes this harder because there’s no longer a one-tap option to do so. And since media can’t be automatically downloaded to someone’s phone (the “Save to Photos” option on the chat settings screen), it’s harder for an attachment to accidentally end up on someone’s device.

How to Enable Advanced Chat Privacy

Advanced Chat Privacy is enabled or disabled per chat. To enable it:

  • Tap the chat name at the top of the screen.
  • Select Advanced chat privacy, then tap the toggle to turn it on.

There are some quirks to how this works, though. For one, by default, anyone involved in a chat can turn Advanced Chat Privacy on or off at will, which limits its usefulness but at least helps ensure something doesn’t accidentally get sent to Meta AI.

There’s one way around this, which is for a group admin to lock down what users in the group can do. In an existing group chat that you are the administrator of, tap the chat name at the top of the screen, then:

  • Scroll down to Group Permissions.
  • Disable the option to “Edit Group Settings.” This makes it so only the administrator can change several important permissions, including Advanced Chat Privacy.

You can also set this permission when starting a new group chat; just be sure to pop into the permissions page when prompted. Even without Advanced Chat Privacy, the “Edit Group Settings” option is an important one for privacy, because it also controls whether participants can change how long disappearing messages can be viewed. It’s worth considering for every group chat you’re an administrator of, and it’s something WhatsApp should require admins to choose before starting a new chat.
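
The permission model described above is simple: by default any participant can flip the setting, and disabling “Edit Group Settings” narrows that to admins. Here's a short sketch of that logic, under the same caveat as before: hypothetical names only, not WhatsApp's actual code.

    # Conceptual model only: hypothetical names, not WhatsApp's real code.
    from dataclasses import dataclass

    @dataclass
    class GroupChat:
        admins: set
        members: set
        advanced_chat_privacy: bool = False
        members_can_edit_settings: bool = True  # the "Edit Group Settings" permission

        def toggle_privacy(self, user: str, enabled: bool) -> bool:
            """Apply the toggle only if this user is allowed to change it."""
            allowed = user in self.admins or (
                self.members_can_edit_settings and user in self.members)
            if allowed:
                self.advanced_chat_privacy = enabled
            return allowed

    chat = GroupChat(admins={"ana"}, members={"ana", "ben", "caro"})
    assert chat.toggle_privacy("ben", True)       # by default, anyone can toggle
    chat.members_can_edit_settings = False        # admin locks group settings
    assert not chat.toggle_privacy("ben", False)  # now only admins can change it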

When it comes to one-on-one chats, there is currently no way to block the other person from changing the Advanced Chat Privacy setting, so you’ll have to come to an agreement with the other person on keeping it enabled if that’s what you want. If the setting is changed, you’ll see a notice in the chat stating so.

There are already serious concerns with how much metadata WhatsApp collects, and as the company introduces ads and AI, it’s going to get harder and harder to navigate the app, understand what each setting does, and properly protect the privacy of conversations. One of the reasons alternative encrypted chat options like Signal tend to thrive is because they keep things simple and employ strong default settings and clear permissions. WhatsApp should keep this in mind as it adds more and more features.

Thorin Klosowski

Open Austin: Reimagining Civic Engagement and Digital Equity in Texas

2 weeks 1 day ago

The Electronic Frontier Alliance is growing, and this year we’ve been honored to welcome Open Austin into the EFA. Open Austin began in 2009 as a meetup that successfully advocated for a city-run open data portal, and relaunched as a 501(c)3 in 2018 dedicated to reimagining civic engagement and digital equity by building volunteer open source projects for local social organizations.

As Central Texas’ oldest and largest grassroots civic tech organization, Open Austin has provided hands-on training for over 1,500 members in the hard and soft skills needed to build digital society, not just scroll through it. Recently, I got the chance to speak with Liani Lye, Executive Director of Open Austin, about the organization, its work, and what lies ahead:

There are so many exciting things happening with Open Austin. Can you tell us about your Civic Digital Lab and your Data Research Hub?

Open Austin's Civic Digital Lab reimagines civic engagement by training central Texans to build technology for the public good. We build freely, openly, and alongside a local community stakeholder to represent community needs. Our lab currently supports 5 products:

  • Data Research Hub: Answering residents' questions with detailed information about our city
  • Streamlining Austin Public Library’s “book a study room” UX and code
  • Mapping landlords’ and rental properties to support local tenant rights organizing
  • Promoting public transit by highlighting points of interest along bus routes
  • Creating an interactive exploration of police bodycam data

We’re actively scaling up our Data Research Hub, which started in January 2025 and was inspired by 9b Corp’s Neighborhood Explorer. Through community outreach, we gather residents’ questions about our region and connect the questions with Open Austin’s data analysts. Each answered question adds to a pool of knowledge that equips communities to address local issues. Crucially, the organizing team at EFF, through the EFA, has connected us to local organizations to generate these questions.

Can you discuss your new Civic Data Fellowship cohort and Communities of Civic Practice?

Launched in 2024, Open Austin’s Civic Data Fellowship trains the next generation of technologically savvy community leaders by pairing aspiring women, people of color, and LGBTQ+ data analysts with mentors to explore Austin’s challenges. These pairings culminate in data projects and talks to advocates and policymakers, which double as powerful portfolio pieces. While we weren’t able to fully fund fellowship stipends through grants this year, we successfully raised 25% through grassroots efforts, thanks to the generosity of our supporters.

Along with our fellowship and lab, we host monthly Communities of Civic Practice peer-learning circles that build skills for employability and practical civic engagement. Recent sessions have included a talk on service design in healthcare and the co-creation of a data visualization on broadband adoption that was presented to local government staff. Our in-person communities are a great way to learn and build local public interest tech without becoming a full-on Labs contributor.

For those in Austin and Central Texas who want to get involved in person, how can they plug in?

If you can only come to one event for the rest of the year, come to Open Austin’s 2025 Year-End Celebration. Open Austin members and our freshly graduated Civic Data Fellowship cohort will give lightning talks to share how they’ve supported local social advocacy through open source software and open data work. Otherwise, come to a monthly remote volunteer orientation call. There, we'll share how to get involved in our in-person Communities of Civic Practice and our remote Civic Digital Labs (i.e., building open source software).

Open Austin welcomes volunteers from all backgrounds, including those with skills in marketing, fundraising, communications, and operations, not just technologists. You can make a difference in various ways. Come to a remote volunteer orientation call to learn more. And, as always, donate. Running multiple open source projects for structured workforce development is expensive, and your contributions help sustain Open Austin's work in the community. Please visit our donation page for ways to give.

Thanks, EFF!

Christopher Vines

Join Your Fellow Digital Rights Supporters for the EFF Awards on September 10!

2 weeks 3 days ago

For over 35 years, the Electronic Frontier Foundation has presented awards recognizing key leaders and organizations advancing innovation and championing digital rights. The EFF Awards celebrate the accomplishments of people working toward a better future for technology users, both in the public eye and behind the scenes.

EFF is pleased to welcome all members of the digital rights community, supporters, and friends to this annual award ceremony. Join us to celebrate this year's honorees with drinks, bytes, and excellent company.

 

EFF Award Ceremony
Wednesday, September 10th, 2025
6:00 PM to 10:00 PM Pacific
San Francisco Design Center Galleria
101 Henry Adams Street, San Francisco, CA

Register Now

General Admission: $55 | Current EFF Members: $45 | Students: $35

The celebration will include a strolling dinner and desserts, as well as a hosted bar with cocktails, mocktails, wine, beer, and non-alcoholic beverages! Vegan, vegetarian, and gluten-free food options will be available. We hope to see you in person, wearing either a signature EFF hoodie, or something formal if you're excited for the opportunity to dress up!

If you're not able to make it, we'll also be hosting a livestream of the event on Friday, September 12 at 12:00 PM PT. The event will also be recorded, and posted to YouTube and the Internet Archive after the livestream.

We are proud to present awards to this year's winners:

JUST FUTURES LAW

EFF Award for Leading Immigration and Surveillance Litigation

ERIE MEYER

EFF Award for Protecting Americans' Data

SOFTWARE FREEDOM LAW CENTER, INDIA

EFF Award for Defending Digital Freedoms

More About the 2025 EFF Award Winners

Just Futures Law

Just Futures Law is a women-of-color-led law project that recognizes how surveillance disproportionately impacts immigrants and people of color in the United States. It uses litigation to fight back as part of defending and building the power of immigrant rights and criminal justice activists, organizers, and community groups to prevent criminalization, detention, and deportation of immigrants and people of color. Just Futures was founded in 2019 using a movement lawyering and racial justice framework, and it seeks to transform how litigation and legal support serve communities and build movement power.

In the past year, Just Futures sued the Department of Homeland Security and its subagencies, seeking a court order to compel the agencies to release records on their use of AI and other algorithms, and sued the Trump Administration for prematurely halting Haiti’s Temporary Protected Status, a humanitarian program that allows hundreds of thousands of Haitians to temporarily remain and work in the United States due to Haiti’s current conditions of extraordinary crisis. It has represented activists in their fight against tech giants like Clearview AI, worked with Mijente to launch the TakeBackTech fellowship to train new advocates in grassroots-directed research, and worked with Grassroots Leadership to fight for the release of detained individuals under Operation Lone Star.

Erie Meyer

Erie Meyer is a Senior Fellow at the Vanderbilt Policy Accelerator where she focuses on the intersection of technology, artificial intelligence, and regulation, and a Senior Fellow at the Georgetown Law Institute for Technology Law & Policy. She is former Chief Technologist at both the Consumer Financial Protection Bureau (CFPB) and the Federal Trade Commission. Earlier, she was senior advisor to the U.S. Chief Technology Officer at the White House, where she co-founded the United States Digital Service, a team of technologists and designers working to improve digital services for the public. Meyer also worked as senior director at Code for America, a nonprofit that promotes civic hacking to modernize government services, and in the Ohio Attorney General's office at the height of the financial crisis. 

 

Since January 20, Meyer has helped organize former government technologists to stand up for the privacy and integrity of governmental systems that hold Americans’ data. In addition to organizing others, she filed a declaration in federal court in February warning that 12 years of critical records could be irretrievably lost in the CFPB’s purge by the Trump Administration’s Department of Government Efficiency. In April, she filed a declaration in another case warning about using private-sector AI on government information. That same month, she testified to the House Oversight Subcommittee on Cybersecurity, Information Technology, and Government Innovation that DOGE is centralizing access to some of the most sensitive data the government holds—Social Security records, disability claims, even data tied to national security—without a clear plan or proper oversight, warning that “DOGE is burning the house down and calling it a renovation.” 

Software Freedom Law Center, India

Software Freedom Law Center, India is a donor-supported legal services organization based in India that brings together lawyers, policy analysts, students, and technologists to protect freedom in the digital world. It promotes innovation and open access to knowledge by helping developers make great free and open-source software, protects privacy and civil liberties for Indians by educating and providing free legal advice, and helps policymakers make informed and just decisions about use of technology. 

Founded in 2010 by technology lawyer and online civil liberties activist Mishi Choudhary, SFLC.IN tracks and participates in litigation, AI regulations, and free speech issues that are defining Indian technology. It also tracks internet shutdowns and censorship incidents across India, provides digital security training, and has launched the Digital Defenders Network, a pan-Indian network of lawyers committed to protecting digital rights. It has conducted landmark litigation cases, petitioned the government of India on freedom of expression and internet issues, and campaigned for WhatsApp and Facebook to fix a feature of their platform that has been used to harass women in India. 

Thank you to Fastly, DuckDuckGo, Corellium, and No Starch Press for their year-round support of EFF's mission.

Want to show your team’s support for EFF? Sponsorships ensure we can continue hosting events like this to build community among digital rights supporters. Please visit eff.org/thanks or contact tierney@eff.org for more information on corporate giving and sponsorships.

EFF is dedicated to a harassment-free experience for everyone, and all participants are encouraged to view our full Event Expectations.

Questions? Email us at events@eff.org.

 

Christian Romero

Podcast Episode: Protecting Privacy in Your Brain

2 weeks 4 days ago

The human brain might be the grandest computer of all, but in this episode, we talk to two experts who confirm that the ability of tech to decipher thoughts, and perhaps even manipulate them, isn't just around the corner – it's already here. Rapidly advancing "neurotechnology" could offer new ways for people with brain trauma or degenerative diseases to communicate, as the New York Times reported this month, but it also could open the door to abusing the privacy of the most personal data of all: our thoughts. Worse yet, it could enable manipulation of how people perceive and process reality, as well as their responses to it – a Pandora’s box of epic proportions.

[Audio player embed from player.simplecast.com] Privacy info. This embed will serve content from simplecast.com

(You can also find this episode on the Internet Archive and on YouTube.) 

Neuroscientist Rafael Yuste and human rights lawyer Jared Genser are awestruck by both the possibilities and the dangers of neurotechnology. Together they established The Neurorights Foundation, and now they join EFF’s Cindy Cohn and Jason Kelley to discuss how technology is advancing our understanding of what it means to be human, and the solid legal guardrails they're building to protect the privacy of the mind. 

In this episode you’ll learn about:

  • How to protect people’s mental privacy, agency, and identity while ensuring equal access to the positive aspects of brain augmentation
  • Why neurotechnology regulation needs to be grounded in international human rights
  • Navigating the complex differences between medical and consumer privacy laws
  • The risk that information collected by devices now on the market could be decoded into actual words within just a few years
  • Balancing beneficial innovation with the protection of people’s mental privacy 

Rafael Yuste is a professor of biological sciences and neuroscience, co-director of the Kavli Institute for Brain Science, and director of the NeuroTechnology Center at Columbia University. He led the group of researchers that first proposed the BRAIN (Brain Research through Advancing Innovative Neurotechnologies) Initiative launched in 2013 by the Obama Administration. 

Jared Genser is an international human rights lawyer who serves as managing director at Perseus Strategies, renowned for his successes in freeing political prisoners around the world. He’s also the Senior Tech Fellow at Harvard University’s Carr-Ryan Center for Human Rights, and he is outside general counsel to The Neurorights Foundation, an international advocacy group he co-founded with Yuste that works to enshrine human rights as a crucial part of the development of neurotechnology.  


What do you think of “How to Fix the Internet?” Share your feedback here.

Transcript

RAFAEL YUSTE: The brain is not just another organ of the body, but the one that generates our mind, all of our mental activity. And the heart of what makes us human is our mind. So this technology is one technology that for the first time in history can actually get to the core of what makes us human and not only potentially decipher, but manipulate, the essence of our humanity.
Ten years ago we had a breakthrough studying the mouse’s visual cortex, in which we were able to not just decode from the brain activity of the mouse what the mouse was looking at, but to manipulate the brain activity of the mouse, to make the mouse see things that it was not looking at.
Essentially we introduced, in the brain of the mouse, images, like hallucinations. And in doing so, we took control over the perception and behavior of the mouse. So the mouse started to behave as if it was seeing what we were essentially putting into its brain by activating groups of neurons.
So this was fantastic scientifically, but that night I didn't sleep because it hit me like a ton of bricks. Like, wait a minute, what we can do in a mouse today, you can do in a human tomorrow. And this is what I call my Oppenheimer moment, like, oh my God, what have we done here?

CINDY COHN: That's the renowned neuroscientist Rafael Yuste talking about the moment he realized that his groundbreaking brain research could have incredibly serious consequences. I'm Cindy Cohn, the executive director of the Electronic Frontier Foundation.

JASON KELLEY: And I'm Jason Kelley, EFF's activism director. This is our podcast, How to Fix the Internet.

CINDY COHN: On this show, we flip the script from the dystopian doom and gloom thinking we all get mired in when thinking about the future of tech. We're here to challenge ourselves, our guests and our listeners to imagine a better future that we can be working towards. How can we make sure to get this right, and what can we look forward to if we do?
And today we have two guests who are at the forefront of brain science -- and are thinking very hard about how to protect us from the dangers that might seem like science fiction today, but are becoming more and more likely.

JASON KELLEY: Rafael Yuste is one of the world's most prominent neuroscientists. He's been working in the field of neurotechnology for many years, and was one of the researchers who led the BRAIN initiative launched by the Obama administration, which was a large-scale research project akin to the Genome Project, but focusing on brain research. He's the director of the NeuroTechnology Centre at Columbia University, and his research has enormous implications for a wide range of mental health disorders, including schizophrenia, and neurodegenerative diseases like Parkinson's and ALS.

CINDY COHN: But as Rafael points out in the introduction, there are scary implications for technology that can directly manipulate someone's brain.

JASON KELLEY: We're also joined by his partner, Jared Genser, a legendary human rights lawyer who has represented no less than five Nobel Peace Prize Laureates. He’s also the Senior Tech Fellow at Harvard University’s Carr-Ryan Center for Human Rights, and together with Rafael, he founded the Neurorights Foundation, an international advocacy group that is working to enshrine human rights as a crucial part of the development of neurotechnology.

CINDY COHN: We started our conversation by asking how the brain scientist and the human rights lawyer first teamed up.

RAFAEL YUSTE: I knew nothing about the law. I knew nothing about human rights my whole life. I said, okay, I avoided that like the pest because you know what? I have better things to do, which is to focus on how the brain works. But I was just dragged into the middle of this by our own work.
So it was a very humbling moment and I said, okay, you know what? I have to cross to the other side and get involved really with the experts that know how this works. And that's how I ended up talking to Jared. The whole reason we got together was pretty funny. We both got the same award from a Swedish foundation, from the Talbert Foundation, this Liaison Award for Global Leadership. In my case, because of the work I did on the Brain Initiative, and Jared, got this award for his human rights work.
And, you know, this is one good thing of getting an award, or let me put it differently: at least getting an award led to something positive in this case, because someone on the award committee said, wait a minute, you guys should be talking to each other, and they put us in touch. He was like a matchmaker.

CINDY COHN: I mean, you really stumbled into something amazing because, you know, Jared, you're, you're not just kind of your random human rights lawyer, right? So tell me your version, Jared, of the meet cute.

JARED GENSER: Yes. I'd say we're like work spouses together. So the feeling is mutual in terms of the admiration, to say the least. And for me, that call was really transformative. It was probably the most impactful one hour call I've had in my career in the last decade because I knew very little to nothing about the neurotechnology side, you know, other than what you might read here or there.
I definitely had no idea how quickly emerging neuro technologies were developing and the sensitivity - the enormous sensitivity - of that data. And in having this discussion with Rafa, it was quite clear to me that my view of the major challenges we might face as humanity in the field of human rights was dramatically more limited than I might have thought.
And, you know, Rafa and I became fast friends after that and very shortly thereafter co-founded the Neurorights Foundation, as you noted earlier. And I think that this is what's made us such a strong team, is that our experiences and our knowledge and expertise are highly complementary.
Um, you know, Rafa and his colleagues at the Morningside Group, which is a group of 25 experts he collected together at, uh, at Columbia, had already, um, you know, met and come up with, and published in the journal Nature, a review of the potential concerns that arise out of the potential misuse and abuse of neurotech.
And there were five areas of concern that they had identified, which include mental privacy, mental agency, mental identity, concerns about discrimination in the development and application of neurotechnologies, and fair access to mental augmentation. And these generalized concerns, uh, which they refer to as neurorights, of course map over to international human rights, uh, that to some extent are already protected by international treaties.
Um, but to other extents might need to be further interpreted from existing international treaties. And it was quite clear that when one would think about emerging neuro technologies and what they might be able to do, that a whole dramatic amount of work needed to be done before these things proliferate in such an extraordinary sense around the world.

JASON KELLEY: So Rafa and Jared, when I read a study like the one you described with the mice, my initial thought is, okay, that's great in a lab setting. I don't initially think like, oh, in five years or 10 years, we'll have technology that actually can be, you know, in the marketplace or used by the government to do the hallucination implanting you're describing. But it sounds like this is a realistic concern, right? You wouldn't be doing this work unless this had progressed very quickly from that experiment to actual applications and concerns. So what has that progression been like? Where are we now?

RAFAEL YUSTE: So let me tell you, two years ago I got a phone call in the middle of the night. It woke me up in the middle of the night, okay, from a colleague and friend who had his Oppenheimer moment. His name is Eddie Chang. He's a professor of neurosurgery at UCSF, and he's arguably the leader in the world in decoding brain activity from human patients. He had been working with a patient who was paralyzed because of a bulbar infarction, a stroke in, essentially, the base of her brain, and she had locked-in syndrome, so she couldn't communicate with the exterior. She was in a wheelchair, and they implanted a few electrodes, an electrode array, into her brain with neurosurgery and connected those electrodes to a computer with an algorithm using generative AI.
And using this algorithm, they were able to decode her inner speech - the language that she wanted to generate. She couldn't speak because she was paralyzed. And when you conjure – we don't really know exactly what goes on during speech – but when you conjure the words in your mind, they were able to actually decode those words.
And then not only that, they were able to decode her emotions and even her facial gestures. So she was paralyzed, and Eddie and his team built an avatar of the person in the computer with her face and gave that avatar her voice, her emotions, and her facial gestures. And if you watch the video, she was just blown away.
So Eddie called me up and explained to me what they'd done. I said, well, Eddie, this is absolutely fantastic. You just unlocked the person from this locked-in syndrome, giving hope to all the patients that have a similar problem. But of course he said, no, no, I, I'm not talking about that. I'm talking about, we just cloned her, essentially.
It was actually published as the cover of the journal Nature. Again, this is the top journal in the world, so they gave them the cover. It was such an impressive result. And this was implantable neurotechnology, so it requires a neurosurgeon to go in and put in this electrode array. So of course, in a hospital setting, this is all under control and super regulated.
But since then, there's been fast development, partly spurred by all these investments into neurotechnology, uh, private and public, all over the world. There's been a lot of development of non-implantable neurotechnology to either record brain activity from the surface or to stimulate the brain from the surface without having to open up the skull.
And let me just tell you two examples that bring home the fact that this is not science fiction. In December 2023, a team in Australia used an EEG device, essentially like a helmet that you put on (you can actually buy these things on Amazon), and coupled it to a generative AI algorithm, again, like Eddie Chang. In fact, I think they were inspired by Eddie Chang's work, and they were able to decode the inner speech of volunteers. It wasn't as accurate as the decoding that you can do if you stick the electrodes inside, but from the outside, they have a video of a person that is mentally ordering a cappuccino at a Starbucks, no? And they essentially decode, well, they don't decode absolutely every word that the person is thinking, but enough words that the message comes out loud and clear. So the decoding of inner speech is doable with non-invasive technology. And not only that study from Australia; since then, you know, all these teams in the world, uh, we work as we help each other continuously. So, uh, shortly after that Australian team, another study in Japan published something, uh, with much higher accuracy, and then another study in China. Anyway, it is now becoming very common practice to use generative AI to decode speech.
And then the stimulation side is also something that raises a lot of ethical concerns. In 2022, a lab at Boston University used external magnetic stimulation to activate parts of the brain in a cohort of volunteers who were older in age. This was the control group for a study on Alzheimer's patients. And they reported, in a very good paper, that they could increase both short-term and long-term memory by 30%.
So this is the first serious case that I know of where again, this is not science fiction, this is demonstrated enhancement of, uh, mental ability in a human with noninvasive neurotechnology. So this could open the door to a whole industry that could use noninvasive devices, maybe magnetic simulation, maybe acoustical, maybe, who knows, optical, to enhance any aspect of our mental activity. And that, I mean, just imagine.
This is what we're actually focusing on our foundation right now, this issue of mental augmentation because we don't think it's science fiction. We think it's coming.

JARED GENSER: Let me just kind of amplify what Rafa's saying and kind of make this as tangible as possible for your listeners, which is that, as Rafa was already alluding to, when you're talking about, of course, implantable devices, you know, they have to be licensed by the Food and Drug Administration. They're implanted through neurosurgery in the medical context. All the data that's being gathered is covered by, you know, HIPAA and state health data laws. But there are already available on the market today 30 different kinds of wearable neurotechnology devices that you can buy today and use.
As one example, you know, there's the company, Muse, that has a meditation device and you can buy their device. You put it on your head, you meditate for an hour. The BCI - brain computer interface - connects to your app. And then basically you'll get back from the company, you know, decoding of your brain activity to know when you're in a meditative state or not.
The problem is that these are EEG scanning devices that, if they were used in a medical context, would be required to be licensed. But in a consumer context, there's no regulation of any kind. And you're talking about devices that can gather from gigabytes to terabytes of neural data today, of which you can only decode maybe 1%.
And from the data that's being gathered, uh, you know, EEG scanning device data in wearable form, you could identify whether a person has any of a number of different brain diseases, and you could also decode about a dozen different mental states. Are you happy, are you sad? And so forth.
And so at our foundation, at the Neurorights Foundation, we actually did a very important study on this topic that was covered on the front page of the New York Times. We looked at the user agreements and the privacy agreements for the 30 different companies’ products that you can buy today, right now. And what we found was that in 29 out of the 30 cases, basically, it's carte blanche for the companies. They can download your data, they can use it as they see fit, and they can transfer it, sell it, etc.
Only in one case did a company, ironically called Unicorn, actually keep the data on your local device; it was never transferred to the company in question. And we benchmarked those agreements against a half dozen different global privacy standards and found that there were just, you know, gigantic gaps.
So, you know, why is that a problem? Well, take the Muse device I just mentioned: they talk about how they've downloaded a hundred million hours of consumer neural data from people who have bought their device and used it. And we're talking about these studies in Australia and Japan that are decoding thought to text.
Today, thought to text, you know, with the EEG, can only be done at a relatively slow speed, like 10 or 15 words a minute with maybe 40, 50% accuracy. But eventually it's gonna start to approach the speed of Eddie Chang's work in California, where with the implantable device you can do thought to text at 80 words a minute, 95% accuracy.
And so the problem is that in three, four years, let's say when this technology is perfected with a wearable device, this company Muse could theoretically go back to that hundred million hours of neural data and then actually decode what the person was thinking in the form of words when they were actually meditating.
And to help you understand as a last point, why is this, again, science and not science fiction? You know, Apple is already clearly aware of the potential here, and two years ago, they actually filed a patent application for their next generation AirPod device that is going to have built-in EEG scanners in each ear, right?
And they sell a hundred million pairs of AirPods every single year, right? And when this kind of technology, thought to text, is perfected in wearable form, those AirPods will be able to be used, for example, to do thought-to-text emails, thought-to-text text messages, et cetera.
But when you continue to wear those AirPod devices, the huge question is what's gonna be happening to all the other data that's being, you know, absorbed, how it is going to be able to be used, and so forth. And so this is why it's really urgent at an international level to be dealing with this. And we're working at the United Nations and in many other places to develop various kinds of frameworks consistent with international human rights law. And we're also working, you know, at the national and sub-national level.
Rafa, my colleague, you know, led the charge in Chile to help create a first-ever constitutional amendment protecting mental privacy in Chile. We've been working with a number of states in the United States now; uh, California, Colorado and Montana – very different kinds of states – have all amended their state consumer data privacy laws to extend their application to neural data. But it is really, really urgent in light of the fast developing technology and the enormous gaps between these consumer product devices and their user agreements and what is considered to be best practice in terms of data privacy protection.

CINDY COHN: Yeah, I mean, I saw that study that you did and, you know, it mirrors a lot of what we see in other contexts where we've got click-wrap licenses and other, you know, kind of very flimsy one-sided agreements that people allegedly agree to, but that aren't anything like a negotiated contract under any lawyer's understanding of a meeting of the minds.
And then when you add it to this context, I think it puts these problems on steroids in many ways and makes 'em really worse. And I think one of the things I've been thinking about is that, you know, you guys have, in some ways, one of the scenarios that demonstrates how our refusal to take privacy seriously on the consumer side and on the law enforcement side is gonna have really, really dire, much more dire consequences for people than we've even seen so far. And it really requires serious thinking about, like, what do we mean in terms of protecting people's privacy and identity and self-determination?

JARED GENSER: Let me just interject on that one narrow point, because I was literally just on a panel discussion remotely at the UN Crime Congress last week that was hosted by the UN Office on Drugs and Crime, UNODC, and Interpol, the International Police Organization. And it was a panel discussion on the topic of emerging law enforcement uses of neurotechnologies. And so this is coming. They just launched a project jointly to look at potential uses as well as to develop, um, guidelines for how that can be done. But this is not at all theoretical. I mean, this is very, very practical.

CINDY COHN: And much of the funding for this has come out of the Department of Defense, so thinking about how we put the right guardrails in place is really important. And honestly, if you think that the only people who are gonna want access to the neural data that these devices are collecting are private companies who wanna sell us things, like, you know, that's not the history, right? Law enforcement comes for these things both locally and internationally, no matter who has custody of them. And so you kind of have to recognize that this isn't just a foray for kind of skeezy companies to do things we don't like.

JARED GENSER: Absolutely.

JASON KELLEY: Let's take a quick moment to thank our sponsor. How to Fix the Internet is supported by the Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology, enriching people's lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.
We also wanna thank EFF members and donors. You're the reason we exist, and EFF has been fighting for digital rights for 35 years, and that fight is bigger than ever. So please, if you like what we do, go to eff.org/pod to donate. Also, we'd love for you to join us at this year's EFF awards where we celebrate the people working towards the better digital future that we all care so much about.
Those are coming up on September 12th in San Francisco. You can find more information about that at eff.org/awards.
We also wanted to share that our friend Cory Doctorow has a new podcast you might like. Have a listen to this:
[WHO BROKE THE INTERNET TRAILER]
And now back to our conversation with Rafael Yuste and Jared Genser.

CINDY COHN: This might be a little bit of a geeky lawyer question, but I really appreciated the decision you guys made to really ground this in international human rights, which I think is tremendously important. But not obvious to most Americans as the kind of framework that we ought to invoke. And I was wondering how you guys came to that conclusion.

JARED GENSER: No, I think it's actually a very, very important question. I mean, I think that the bottom line is that there are a lot of ways to look at, um, questions like this. You know, you can think about, you know, a national constitution or national laws. You can think about international treaties or laws.
You can look at ethical frameworks or self governance by companies themselves, right? And at the end of the day, because of the seriousness and the severity of the potential downside risks if this kind of technology is misused or abused, you know, our view is that what we really need is what's referred to by lawyers as hard law, as in law that is binding and enforceable against states by citizens. And obviously binding on governments and what they do, binding on companies and what they do and so forth.
And so it's not that we think, for example, ethical frameworks or ethical standards or self-governance by companies are unimportant. They are very much a part of an overall approach, but our approach at the Neurorights Foundation is, let's look at hard law, and there are two kinds of hard law to look at. The first are international human rights treaties. These are multilateral agreements that states negotiate and come to agreement on. And when a country signs and ratifies a treaty, as the US has with the key relevant treaty here, which is the International Covenant on Civil and Political Rights, those rights get domesticated in the law of each country in the world that signs and ratifies them, and that makes them enforceable. And so we think first and foremost, it's important that we ground our concerns about the misuse and abuse of these technologies in the requirements of international human rights law.
Because the United States is obligated and other countries in the world are obligated to protect their citizens from abuses of these rights.
And at the same time, of course, that isn't sufficient on its own. We also need to see, in certain contexts, amendments to a constitution, though that's much harder to do in the US, but also laws that are actually enforceable against companies.
And this is why our work in California, Montana and Colorado is so important, because now companies in California, as one illustration, which is where Apple is based and where Meta is based and so forth, right? They now have to provide the protections embedded in the California Consumer Privacy Act to all of their gathering and use of neural data, right?
And that means that you have a right to be forgotten. You have a right to demand your data not be transferred or sold to third parties. You have a right to have access to your data. Companies have obligations to tell you what data are they gathering, how are they gonna use it? If they propose selling or transferring it to whom and so forth, right?
So these are now ultimately gonna be binding law on companies, you know, based in California and, as we're developing this, around the world. But to us, you know, that is really what needs to happen.

JASON KELLEY: Your success has been pretty stunning. I mean, even though, you know, there's obviously so much more to do. We work to try to amend and change and improve laws at the state and local and federal level, and internationally sometimes, and it's hard.
But the two of you together, I think there's something really fascinating about the way, you know, you're building a better future and building in protections for that better future at the same time.
And, like, you're aware of why that's so important. I think there's a big lesson there for a lot of people who work in the tech field and in the science field about, you know, you can make incredible things and also make sure they don't cause huge problems. Right? And that's just a really important lesson.
What we do with this podcast is we do try to think about what the better future that people are building looks like, what it should look like. And the two of you are, you know, thinking about that in a way that I think a lot of our guests aren't, because you're at the forefront of a lot of this technology. But I'd love to hear, Rafa and then Jared, what you each think, uh, science and the law look like if you get it right. If things go the way you hope they do, what does the technology look like? What do the protections look like? Rafa, could you start?

RAFAEL YUSTE: Yeah, I would comment, there are five places in the world today where there's, uh, hard law protection for brain activity and brain data: the Republic of Chile, the state of Rio Grande do Sul in Brazil, and the states of Colorado, California, and Montana in the US. And in every one of these places there have been votes in the legislature, and they're all bicameral legislatures, so there have been 10 votes, and every single one of those votes has been unanimous.
All political parties in Chile, in Brazil - actually, in Brazil there were 16 political parties; that had never happened before, that they all agreed on something. California, Montana, and Colorado, all unanimous except for one no vote in Colorado, from a person who votes against everything. He's like, uh, he has some, some axe to grind with, uh, his companions, and he just votes no on everything.
But aside from this person, uh, actually the Colorado, um, bill was introduced by a Democratic representative, but, uh, the Republican side, um, took it to heart. The Republican senator said that this is the definition of a no-brainer, and he asked for permission to introduce the bill in the Senate in Colorado.
So the person who defended the bill in the Senate in Colorado was actually not a Democrat but a Republican. So why is that? As this Colorado senator said, it's a no-brainer: the minute you get it, you understand. Do you want your brain activity to be decoded without your consent? Well, that's not a good idea.
So not a single person that we've met has opposed this issue. I think Jared and I do the best job we can, and we work very hard. And I should tell you that we're doing this pro bono, without being compensated for our work. But the reason behind the success is really the issue, it's not just us. I think we're dealing with an issue on which there is fundamental, widespread, universal agreement.

JARED GENSER: What I would say is that, you know, on the one hand, and we appreciate of course, the kind words about the progress we're making. We have made a lot of progress in a relatively short period of time, and yet we have a dramatically long way to go.
We need to further interpret international law in the way that I'm describing to ensure that privacy includes mental privacy all around the world, and we really need national laws in every country in the world. Subnational laws and various places too, and so forth.
I will say that, as you know from all the great work you guys do with your podcast, getting something done at the federal level is of course much more difficult in the United States because of the divisions that exist. And there is no federal consumer data privacy law because we've never been able to get Republicans and Democrats to agree on the text of one.
The only kinds of consumer data protected at the federal level is healthcare data under HIPAA and financial data. And there have been multiple efforts to try to do a federal consumer data privacy law that have failed. In the last Congress, there was something called the American Privacy Rights Act. It was bipartisan, and it basically just got ripped apart because they were adding, trying to put together about a dozen different categories of data that would be protected at the federal level. And each one of those has a whole industry association associated with it.
And we were able to get that draft bill amended to include neural data in it, which it didn't originally include, but ultimately the bill died before even coming to a vote at committees. In our view, you know, that then just leaves state consumer data privacy laws. There are about 35 states now that have state level laws. 15 states actually still don't.
And so we are working state by state. Ultimately, I think that when it comes, especially to the sensitivity of neural data, right? You know, we need a federal law that's going to protect neural data. But because it's not gonna be easy to achieve, definitely not as a package with a dozen other types of data, or in general, you know, one way of course to get to a federal solution is to start to work with lots of different states. All these different state consumer data privacy laws are different. I mean, they're similar, but they have differences to them, right?
And ultimately, as you start to see different kinds of regulation being adopted in different states relating to the same kind of data, our hope is that industry will start to say to members of Congress and the, you know, the Trump administration, hey, we need a common way forward here and let's set at least a floor at the federal level for what needs to be done. If states want to regulate it more than that, that's fine, but ultimately, I think that there's a huge amount of work still left to be done, obviously all around the world and at the state level as well.

CINDY COHN: I wanna push you a little bit. So what does it look like if we get it right? What is, what is, you know, what does my world look like? Do I, do I get the cool earbuds or do I not?

JARED GENSER: Yeah, I mean, look, I think the bottom line is that, you know, the world that we want to see, and I mean Rafa of course is the technologist, and I'm the human rights guy. But the world that we wanna see is one in which, you know, we promote innovation while simultaneously, you know, protecting people from abuses of their human rights and ensure that neuro technologies are developed in an ethical manner, right?
I mean, so we do need self-regulation by industry. You know, we do need national and international laws. But at the same time, you know, one in three people in their lifetimes will have a neurological disease, right?
The brain diseases that people know best or you know, from family, friends or their own experience, you know, whether you look at Alzheimer's or Parkinson's, I mean, these are devastating, debilitating and all, today, you know, irreversible conditions. I mean, all you can do with any brain disease today at best is to slow its progression. You can't stop its progression and you can't reverse it.
And eventually, in 20 or 30 years, from these kinds of emerging neurotechnologies, we're going to be able to ultimately cure brain diseases. And so that's what the world looks like: think about all of the different ways in which humanity is going to be improved when we're able to not only address, but cure, diseases of this kind, right?
And, you know, one of the other exciting parts of emerging neurotechnologies is our ability to understand ourselves, right? And our own brain and how it operates and functions. And that is, you know, very, very exciting.
Eventually we're gonna be able to decode not only thought-to-text, but even our subconscious thoughts. And that of course, you know, raises enormous questions. And this technology is also gonna, um, also even raise fundamental questions about, you know, what does it actually mean to be human? And who are we as humans, right?
And so, for example, one of the side effects of deep brain stimulation in a very, very, very small percentage of patients is a change in personality. In other words, you know, if you put a device in someone's, you know, mind to control the symptoms of Parkinson's, when you're obviously messing with a human brain, other things can happen.
And there's a well known case of a woman, um, who went from being, in essence, an extreme introvert to an extreme extrovert, you know, with deep brain stimulation as a side effect. And she's currently being studied right now, um, along with other examples of these kinds of personality changes.
And if we can figure out in the human brain, for example, what parts of the brain deal with being an introvert or an extrovert, you know, you're also raising fundamental questions about the possibility of being able to change parts of your personality with a brain implant, right? I mean, we can already do that, obviously, with psychotropic medications for people who have mental illnesses, through psychotherapy and so forth. But there are gonna be other ways in which we can understand how the brain operates and functions and optimize our lives through the development of these technologies.
So the upside is enormous, you know. Medically and scientifically, economically, from a self-understanding point of view. Right? And at the same time, the downside risks are profound. It's not just decoding our thoughts. I mean, we're on the cusp of an unbeatable lie detector test, which could have huge positive and negative impacts, you know, in criminal justice contexts, right?
So there are so many different implications of these emerging technologies, and we are often so far behind, on the regulatory side, the actual scientific developments that in this particular case we really need to try to do everything possible to at least develop these solutions at a pace that matches the developments, let alone get ahead of them.

JASON KELLEY: I'm fascinated to see, in talking to them, how successful they've been when there isn't a big, you know, lobbying wing of neurorights products and companies stopping them, because they're ahead of the game. I think that's the thing that really struck me, and something that we can hopefully learn from in the future: if you're ahead of the curve, you can implement these privacy protections much more easily, obviously. That was really fascinating. And of course just talking to them about the technology set my mind spinning.

CINDY COHN: Yeah, in both directions, right? Both what an amazing opportunity, and oh my God, how terrifying this is, both at the same time. I thought it was interesting because, from where we sit as people who are trying to figure out how to bring privacy into already baked technologies and business models, we see how hard that is, you know, but they feel like they're a little behind the curve, right? They feel like there's so much more to do. So, you know, I hope that we were able to kind of both inspire them and support them in this, because I think to us, they look ahead of the curve, and I think to them, they feel a little either behind or over, you know, not overwhelmed, but see the mountain in front of them.

JASON KELLEY: A thing that really stands out to me is when Rafa was talking about the popularity of these protections, you know, and who on all sides of the aisle is voting in favor of them. It's heartwarming, right? It's inspiring. If you can get people to understand the real danger of a lack of privacy protections in one field, it makes me feel like we can still win privacy protections in the rest of the fields.
Like, you're worried for good reason about what's going on in your head and how that should be protected. But when you type on a computer, you know, that's just the stuff in your head going straight onto the web. Right? We've talked about how, like, the phone or your search history are basically part of the contents of your mind. And those things need privacy protections too. And hopefully we can, you know, use the success of their work to talk about how we need to also protect things that are already happening, not just things that are potentially going to happen in the future.

CINDY COHN: Yeah. And you see kind of both kinds of issues, right? Like, if they're right, it's scary. When they're wrong it's scary. But also I'm excited about and I, what I really appreciated about them, is that they're excited about the potentialities too. This isn't an effort that's about the house of no innovation. In fact, this is where responsibility ought to come from. The people who are developing the technology are recognizing the harms and then partnering with people who have expertise in kind of the law and policy and regulatory side of things. So that together, you know, they're kind of a dream team of how you do this responsibly.
And that's really inspiring to me because I think sometimes people get caught in this, um, weird, you know, choice: either the tech will protect us or the law will protect us. And I think what Rafa and Jared are really embodying and making real is that we need both of these to come together to really move into a better technological future.

JASON KELLEY: And that's our episode for today. Thanks so much for joining us. If you have feedback or suggestions, we'd love to hear from you. Visit eff.org/podcast and click on listener feedback. And while you're there, you can become a member and donate, maybe even pick up some of the merch and just see what's happening in digital rights this week and every week.
Our theme music is by Nat Keefe of Beat Mower with Reed Mathis, and How to Fix the Internet is supported by the Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology. We'll see you next time. I'm Jason Kelley.

CINDY COHN: And I'm Cindy Cohn.

MUSIC CREDITS: This podcast is licensed Creative Commons Attribution 4.0 international, and includes the following music licensed Creative Commons Attribution 3.0 unported by its creators: Drops of H2O, The Filtered Water Treatment by Jay Lang. Additional music, theme remixes and sound design by Gaetan Harris.

Josh Richman

Fourth Amendment Victory: Michigan Supreme Court Reins in Digital Device Fishing Expeditions

3 weeks 2 days ago

EFF legal intern Noam Shemtov was the principal author of this post.

When police have a warrant to search a phone, should they be able to see everything on the phone—from family photos to communications with your doctor to everywhere you’ve been since you first started using the phone—in other words, data that is in no way connected to the crime they’re investigating? The Michigan Supreme Court just ruled no. 

In People v. Carson, the court held that to satisfy the Fourth Amendment, warrants authorizing searches of cell phones and other digital devices must contain express limitations on the data police can review, restricting searches to data that they can establish is clearly connected to the crime.

EFF, along with ACLU National and the ACLU of Michigan, filed an amicus brief in Carson, expressly calling on the court to limit the scope of cell phone search warrants. We explained that the realities of modern cell phones call for a strict application of rules governing the scope of warrants. Without clear limits, warrants would become de facto licenses to look at everything on the device, a great universe of information that amounts to “the sum of an individual’s private life.”

The Carson case shows just how broad many cell phone search warrants can be. Defendant Michael Carson was suspected of stealing money from a neighbor’s safe. The warrant to search his phone allowed the police to access:

Any and all data including, text messages, text/picture messages, pictures and videos, address book, any data on the SIM card if applicable, and all records or documents which were created, modified, or stored in electronic or magnetic form and, any data, image, or information.

There were no temporal or subject matter limitations. Consequently, investigators obtained over 1,000 pages of information from Mr. Carson’s phone, the vast majority of which did not have anything to do with the crime under investigation.

The Michigan Supreme Court held that this extremely broad search warrant was “constitutionally intolerable” and violated the particularity requirement of the Fourth Amendment. 

The Fourth Amendment requires that warrants “particularly describ[e] the place to be searched, and the persons or things to be seized.” This is intended to limit authorization to search to the specific areas and things for which there is probable cause to search and to prevent police from conducting “wide-ranging exploratory searches.” 

Across two opinions, a four-Justice majority joined a growing national consensus of courts recognizing that, given the immense and ever-growing storage capacity of cell phones, warrants must spell out up-front limitations on the information the government may review, including the dates and data categories that constrain investigators’ authority to search. And magistrates reviewing warrants must ensure the information provided by police in the warrant affidavit properly supports a tailored search.

This ruling is good news for digital privacy. Cell phones hold vast and varied information, including our most intimate data—“privacies of life” like our personal messages, location histories, and medical and financial information. The U.S. Supreme Court has recognized as much, saying that application of Fourth Amendment principles to searches of cell phones must respond to cell phones’ unique characteristics, including the weighty privacy interests in our digital data. 

We applaud the Michigan Supreme Court’s recognition that unfettered cell phone searches pose serious risks to privacy. We hope that courts around the country will follow its lead in concluding that the particularity rule applies with special force to such searches and requires clear limitations on the data the government may access.

Jennifer Pinsof