Open Austin: Reimagining Civic Engagement and Digital Equity in Texas


The Electronic Frontier Alliance is growing, and this year we’ve been honored to welcome Open Austin into the EFA. Open Austin began in 2009 as a meetup that successfully advocated for a city-run open data portal, and relaunched as a 501(c)(3) in 2018 dedicated to reimagining civic engagement and digital equity by building volunteer open source projects for local social organizations.

As Central Texas’ oldest and largest grassroots civic tech organization, Open Austin has provided hands-on training to over 1,500 members in the hard and soft skills needed to build digital society, not just scroll through it. Recently, I got the chance to speak with Liani Lye, Executive Director of Open Austin, about the organization, its work, and what lies ahead:

There are so many exciting things happening with Open Austin. Can you tell us about your Civic Digital Lab and your Data Research Hub?

Open Austin's Civic Digital Lab reimagines civic engagement by training central Texans to build technology for the public good. We build freely, openly, and alongside a local community stakeholder to represent community needs. Our lab currently supports five products:

  • Data Research Hub: Answering residents' questions with detailed information about our city
  • Streamlining Austin Public Library’s “book a study room” UX and code
  • Mapping landlords and rental properties to support local tenant rights organizing
  • Promoting public transit by highlighting points of interest along bus routes
  • Creating an interactive exploration of police bodycam data

We’re actively scaling up our Data Research Hub, which started in January 2025 and was inspired by 9b Corp’s Neighborhood Explorer. Through community outreach, we gather residents’ questions about our region and connect the questions with Open Austin’s data analysts. Each answered question adds to a pool of knowledge that equips communities to address local issues. Crucially, the organizing team at EFF, through the EFA, has connected us to local organizations to generate these questions.

Can you discuss your new Civic Data Fellowship cohort and Communities of Civic Practice? 

Launched in 2024, Open Austin’s Civic Data Fellowship trains the next generation of technologically savvy community leaders by pairing aspiring women, people of color, and LGBTQ+ data analysts with mentors to explore Austin’s challenges. These culminate in data projects and talks to advocates and policymakers, which double as powerful portfolio pieces. While we weren’t able to fully fund Fellow stipends through grants this year, we successfully raised 25% through grassroots efforts, thanks to the generosity of our supporters.

Along with our fellowship and lab, we host monthly Communities of Civic Practice peer-learning circles that build skills for employability and practical civic engagement. Recent sessions include a speaker on service design in healthcare, and co-creating a data visualization on broadband adoption presented to local government staff. Our in-person communities are a great way to learn and build local public interest tech without becoming a full-on Labs contributor.

For those in Austin and Central Texas who want to get involved in person, how can they plug in?

If you can only come to one event for the rest of the year, come to Open Austin’s 2025 Year-End Celebration. Open Austin members plus our freshly graduated Civic Data Fellowship cohort will give lightning talks to share how they’ve supported local social advocacy through open source software and open data work. Otherwise, come to a monthly remote volunteer orientation call. There, we'll share how to get involved in our in-person Communities of Civic Practice and our remote Civic Digital Labs (aka, building open source software).

Open Austin welcomes volunteers from all backgrounds, including those with skills in marketing, fundraising, communications, and operations, not just technologists. You can make a difference in various ways. Come to a remote volunteer orientation call to learn more. And, as always, donate. Running multiple open source projects for structured workforce development is expensive, and your contributions help sustain Open Austin's work in the community. Please visit our donation page for ways to give; thanks EFF!

Christopher Vines

Join Your Fellow Digital Rights Supporters for the EFF Awards on September 10!


For over 35 years, the Electronic Frontier Foundation has presented awards recognizing key leaders and organizations advancing innovation and championing digital rights. The EFF Awards celebrate the accomplishments of people working toward a better future for technology users, both in the public eye and behind the scenes.

EFF is pleased to welcome all members of the digital rights community, supporters, and friends to this annual award ceremony. Join us to celebrate this year's honorees with drinks, bytes, and excellent company.


EFF Award Ceremony
Wednesday, September 10th, 2025
6:00 PM to 10:00 PM Pacific
San Francisco Design Center Galleria
101 Henry Adams Street, San Francisco, CA

Register Now

General Admission: $55 | Current EFF Members: $45 | Students: $35

The celebration will include a strolling dinner and desserts, as well as a hosted bar with cocktails, mocktails, wine, beer, and non-alcoholic beverages! Vegan, vegetarian, and gluten-free food options will be available. We hope to see you in person, wearing either a signature EFF hoodie, or something formal if you're excited for the opportunity to dress up!

If you're not able to make it, we'll also be hosting a livestream of the event on Friday, September 12 at 12:00 PM PT. The event will also be recorded, and posted to YouTube and the Internet Archive after the livestream.

We are proud to present awards to this year's winners:

JUST FUTURES LAW

EFF Award for Leading Immigration and Surveillance Litigation

ERIE MEYER

EFF Award for Protecting Americans' Data

SOFTWARE FREEDOM LAW CENTER, INDIA

EFF Award for Defending Digital Freedoms

More About the 2025 EFF Award Winners

Just Futures Law

Just Futures Law is a women-of-color-led law project that recognizes how surveillance disproportionately impacts immigrants and people of color in the United States. It uses litigation to fight back as part of defending and building the power of immigrant rights and criminal justice activists, organizers, and community groups to prevent criminalization, detention, and deportation of immigrants and people of color. Just Futures was founded in 2019 using a movement lawyering and racial justice framework, and it seeks to transform how litigation and legal support serve communities and build movement power.

In the past year, Just Futures sued the Department of Homeland Security and its subagencies seeking a court order to compel the agencies to release records on their use of AI and other algorithms, and sued the Trump Administration for prematurely halting Haiti’s Temporary Protected Status, a humanitarian program that allows hundreds of thousands of Haitians to temporarily remain and work in the United States due to Haiti’s current conditions of extraordinary crises. It has represented activists in their fight against tech giants like Clearview AI, it has worked with Mijente to launch the TakeBackTech fellowship to train new advocates on grassroots-directed research, and it has worked with Grassroots Leadership to fight for the release of detained individuals under Operation Lone Star.

Erie Meyer

Erie Meyer is a Senior Fellow at the Vanderbilt Policy Accelerator where she focuses on the intersection of technology, artificial intelligence, and regulation, and a Senior Fellow at the Georgetown Law Institute for Technology Law & Policy. She is former Chief Technologist at both the Consumer Financial Protection Bureau (CFPB) and the Federal Trade Commission. Earlier, she was senior advisor to the U.S. Chief Technology Officer at the White House, where she co-founded the United States Digital Service, a team of technologists and designers working to improve digital services for the public. Meyer also worked as senior director at Code for America, a nonprofit that promotes civic hacking to modernize government services, and in the Ohio Attorney General's office at the height of the financial crisis. 


Since January 20, Meyer has helped organize former government technologists to stand up for the privacy and integrity of governmental systems that hold Americans’ data. In addition to organizing others, she filed a declaration in federal court in February warning that 12 years of critical records could be irretrievably lost in the CFPB’s purge by the Trump Administration’s Department of Government Efficiency. In April, she filed a declaration in another case warning about using private-sector AI on government information. That same month, she testified to the House Oversight Subcommittee on Cybersecurity, Information Technology, and Government Innovation that DOGE is centralizing access to some of the most sensitive data the government holds—Social Security records, disability claims, even data tied to national security—without a clear plan or proper oversight, warning that “DOGE is burning the house down and calling it a renovation.” 

Software Freedom Law Center

Software Freedom Law Center, India is a donor-supported legal services organization based in India that brings together lawyers, policy analysts, students, and technologists to protect freedom in the digital world. It promotes innovation and open access to knowledge by helping developers make great free and open-source software, protects privacy and civil liberties for Indians by educating and providing free legal advice, and helps policymakers make informed and just decisions about use of technology. 

Founded in 2010 by technology lawyer and online civil liberties activist Mishi Choudhary, SFLC.IN tracks and participates in litigation, AI regulations, and free speech issues that are defining Indian technology. It also tracks internet shutdowns and censorship incidents across India, provides digital security training, and has launched the Digital Defenders Network, a pan-Indian network of lawyers committed to protecting digital rights. It has conducted landmark litigation cases, petitioned the government of India on freedom of expression and internet issues, and campaigned for WhatsApp and Facebook to fix a feature of their platform that has been used to harass women in India. 

Thank you to Fastly, DuckDuckGo, Corellium, and No Starch Press for their year-round support of EFF's mission.

Want to show your team’s support for EFF? Sponsorships ensure we can continue hosting events like this to build community among digital rights supporters. Please visit eff.org/thanks or contact tierney@eff.org for more information on corporate giving and sponsorships.

EFF is dedicated to a harassment-free experience for everyone, and all participants are encouraged to view our full Event Expectations.

Questions? Email us at events@eff.org.


Christian Romero

Podcast Episode: Protecting Privacy in Your Brain


The human brain might be the grandest computer of all, but in this episode, we talk to two experts who confirm that the ability for tech to decipher thoughts, and perhaps even manipulate them, isn't just around the corner – it's already here. Rapidly advancing "neurotechnology" could offer new ways for people with brain trauma or degenerative diseases to communicate, as the New York Times reported this month, but it also could open the door to abusing the privacy of the most personal data of all: our thoughts. Worse yet, it could allow manipulating how people perceive and process reality, as well as their responses to it – a Pandora’s box of epic proportions.

Podcast audio player (served from simplecast.com): https://player.simplecast.com/3955c653-7346-44d2-82e2-0238931bcfd9


(You can also find this episode on the Internet Archive and on YouTube.) 

Neuroscientist Rafael Yuste and human rights lawyer Jared Genser are awestruck by both the possibilities and the dangers of neurotechnology. Together they established The Neurorights Foundation, and now they join EFF’s Cindy Cohn and Jason Kelley to discuss how technology is advancing our understanding of what it means to be human, and the solid legal guardrails they're building to protect the privacy of the mind. 

In this episode you’ll learn about:

  • How to protect people’s mental privacy, agency, and identity while ensuring equal access to the positive aspects of brain augmentation
  • Why neurotechnology regulation needs to be grounded in international human rights
  • Navigating the complex differences between medical and consumer privacy laws
  • The risk that information collected by devices now on the market could be decoded into actual words within just a few years
  • Balancing beneficial innovation with the protection of people’s mental privacy 

Rafael Yuste is a professor of biological sciences and neuroscience, co-director of the Kavli Institute for Brain Science, and director of the NeuroTechnology Center at Columbia University. He led the group of researchers that first proposed the BRAIN (Brain Research through Advancing Innovative Neurotechnologies) Initiative launched in 2013 by the Obama Administration. 

Jared Genser is an international human rights lawyer who serves as managing director at Perseus Strategies, renowned for his successes in freeing political prisoners around the world. He’s also the Senior Tech Fellow at Harvard University’s Carr-Ryan Center for Human Rights, and he is outside general counsel to The Neurorights Foundation, an international advocacy group he co-founded with Yuste that works to enshrine human rights as a crucial part of the development of neurotechnology.  

Resources: 

What do you think of “How to Fix the Internet?” Share your feedback here.

Transcript

RAFAEL YUSTE: The brain is not just another organ of the body, but the one that generates our mind, all of our mental activity. And that's the heart of what makes us human is our mind. So this technology is one technology that for the first time in history can actually get to the core of what makes us human and not only potentially decipher, but manipulate the essence of our humanity.
10 years ago we had a breakthrough with studying the mouse’s visual cortex in which we were able to not just decode from the brain activity of the mouse what the mouse was looking at, but to manipulate the brain activity of the mouse. To make the mouse see things that it was not looking at.
Essentially we introduced, in the brain of the mouse, images. Like hallucinations. And in doing so, we took control over the perception and behavior of the mouse. So the mouse started to behave as if it was seeing what we were essentially putting into its brain by activating groups of neurons.
So this was fantastic scientifically, but that night I didn't sleep because it hit me like a ton of bricks. Like, wait a minute, what we can do in a mouse today, you can do in a human tomorrow. And this is what I call my Oppenheimer moment, like, oh my God, what have we done here?

CINDY COHN: That's the renowned neuroscientist Rafael Yuste talking about the moment he realized that his groundbreaking brain research could have incredibly serious consequences. I'm Cindy Cohn, the executive director of the Electronic Frontier Foundation.

JASON KELLEY: And I'm Jason Kelley, EFF's activism director. This is our podcast, How to Fix the Internet.

CINDY COHN: On this show, we flip the script from the dystopian doom and gloom thinking we all get mired in when thinking about the future of tech. We're here to challenge ourselves, our guests and our listeners to imagine a better future that we can be working towards. How can we make sure to get this right, and what can we look forward to if we do?
And today we have two guests who are at the forefront of brain science -- and are thinking very hard about how to protect us from the dangers that might seem like science fiction today, but are becoming more and more likely.

JASON KELLEY: Rafael Yuste is one of the world's most prominent neuroscientists. He's been working in the field of neurotechnology for many years, and was one of the researchers who led the BRAIN initiative launched by the Obama administration, which was a large-scale research project akin to the Genome Project, but focusing on brain research. He's the director of the NeuroTechnology Centre at Columbia University, and his research has enormous implications for a wide range of mental health disorders, including schizophrenia, and neurodegenerative diseases like Parkinson's and ALS.

CINDY COHN: But as Rafael points out in the introduction, there are scary implications for technology that can directly manipulate someone's brain.

JASON KELLEY: We're also joined by his partner, Jared Genser, a legendary human rights lawyer who has represented no less than five Nobel Peace Prize Laureates. He’s also the Senior Tech Fellow at Harvard University’s Carr-Ryan Center for Human Rights, and together with Rafael, he founded the Neurorights Foundation, an international advocacy group that is working to enshrine human rights as a crucial part of the development of neurotechnology.

CINDY COHN: We started our conversation by asking how the brain scientist and the human rights lawyer first teamed up.

RAFAEL YUSTE: I knew nothing about the law. I knew nothing about human rights my whole life. I said, okay, I avoided that like the pest because you know what? I have better things to do, which is to focus on how the brain works. But I was just dragged into the middle of this by our own work.
So it was a very humbling moment and I said, okay, you know what? I have to cross to the other side and get involved really with the experts that know how this works. And that's how I ended up talking to Jared. The whole reason we got together was pretty funny. We both got the same award from a Swedish foundation, from the Talbert Foundation, this Liaison Award for Global Leadership. In my case, because of the work I did on the Brain Initiative, and Jared, got this award for his human rights work.
And, you know, this is one good thing of getting an award, or let me put it differently, at least getting an award led to something positive in this case, in that someone on the award committee said, wait a minute, you guys should be talking to each other, and they put us in touch. He was like a matchmaker.

CINDY COHN: I mean, you really stumbled into something amazing because, you know, Jared, you're, you're not just kind of your random human rights lawyer, right? So tell me your version, Jared, of the meet cute.

JARED GENSER: Yes. I'd say we're like work spouses together. So the feeling is mutual in terms of the admiration, to say the least. And for me, that call was really transformative. It was probably the most impactful one hour call I've had in my career in the last decade because I knew very little to nothing about the neurotechnology side, you know, other than what you might read here or there.
I definitely had no idea how quickly emerging neuro technologies were developing and the sensitivity - the enormous sensitivity - of that data. And in having this discussion with Rafa, it was quite clear to me that my view of the major challenges we might face as humanity in the field of human rights was dramatically more limited than I might have thought.
And, you know, Rafa and I became fast friends after that and very shortly thereafter co-founded the Neurorights Foundation, as you noted earlier. And I think that this is what's made us such a strong team, is that our experiences and our knowledge and expertise are highly complementary.
Um, you know, Rafa and his colleagues had, uh, at the Morningside Group, which is a group of 25 experts he collected together at, uh, at Columbia, had already, um, you know, met and come up with, and published in the journal Nature, a review of the potential concerns that arise out of the potential misuse and abuse of neurotech.
And there were five areas of concern that they had identified that include mental privacy, mental agency, mental identity, concerns about discrimination in the development and application of neurotechnologies, and fair use of mental augmentation. And these generalized concerns, uh, which they refer to as neurorights, of course map over to international human rights, uh, that to some extent are already protected by international treaties.
Um, but to other extents might need to be further interpreted from existing international treaties. And it was quite clear that when one would think about emerging neuro technologies and what they might be able to do, that a whole dramatic amount of work needed to be done before these things proliferate in such an extraordinary sense around the world.

JASON KELLEY: So Rafa and Jared, when I read a study like the one you described with the mice, my initial thought is, okay, that's great in a lab setting. I don't initially think like, oh, in five years or 10 years, we'll have technology that actually can be, you know, in the marketplace or used by the government to do the hallucination implanting you're describing. But it sounds like this is a realistic concern, right? You wouldn't be doing this work unless this had progressed very quickly from that experiment to actual applications and concerns. So what has that progression been like? Where are we now?

RAFAEL YUSTE: So let me tell you, two years ago I got a phone call in the middle of the night. It woke me up in the middle of the night, okay, from a colleague and friend who had his Oppenheimer moment. And his name is Eddie Chang. He's a professor of neurosurgery at UCSF, and he's arguably the leader in the world at decoding brain activity from human patients. So he had been working with a patient that was paralyzed, because of a bulbar infarction, a stroke in, essentially, the base of her brain, and she had locked-in syndrome, so she couldn't communicate with the exterior. She was in a wheelchair, and they implanted an electrode array into her brain with neurosurgery and connected those electrodes to a computer with an algorithm using generative AI.
And using this algorithm, they were able to decode her inner speech - the language that she wanted to generate. She couldn't speak because she was paralyzed. And when you conjure – we don't really know exactly what goes on during speech – but when you conjure the words in your mind, they were able to actually decode those words.
And then not only that, they were able to decode her emotions and even her facial gestures. So she was paralyzed and Eddie and her team built an avatar of the person in the computer with her face and gave that avatar, her voice, her emotions, and her facial gestures. And if you watch the video, she was just blown away.
So Eddie called me up and explained to me what they've done. I said, well, Eddie, this is absolutely fantastic. You just unlocked the person from this locked-in syndrome, giving hope to all the patients that have a similar problem. But of course he said, no, no, I, I'm not talking about that. I'm talking about, we just cloned her essentially.
It was actually published as the cover of the journal Nature. Again, this is the top journal in the world, so they gave them the cover. It was such an impressive result. And this was implantable neurotechnology. So it requires a neurosurgeon to go in and put in these electrodes. So it is, of course, in a hospital setting, this is all under control and super regulated.
But since then, there's been fast development, partly spurred by all these investments into neurotechnology, private and public, all over the world. There's been a lot of development of non-implantable neurotechnology to either record brain activity from the surface or to stimulate the brain from the surface without having to open up the skull.
And let me just tell you two examples that bring home the fact that this is not science fiction. In December 2023, a team in Australia used an EEG device, essentially like a helmet that you put on. You can actually buy these things on Amazon. And they coupled it to a generative AI algorithm again, like Eddie Chang. In fact, I think they were inspired by Eddie Chang's work, and they were able to decode the inner speech of volunteers. It wasn't as accurate as the decoding that you can do if you stick the electrodes inside. But from the outside, they have a video of a person that is mentally ordering a cappuccino at a Starbucks, no? And they essentially decode, they don't decode absolutely every word that the person is thinking, but enough words that the message comes out loud and clear. So the decoding of inner speech, it's doable with non-invasive technology. Not only that study from Australia, since then, you know, all these teams in the world, uh, we work as we help each other continuously. So, uh, shortly after that Australian team, another study in Japan published something, uh, with much higher accuracy, and then another study in China. Anyway, this is now becoming very common practice to use generative AI to decode speech.
And then on the stimulation side is also something that raises a lot of concerns ethically. In 2022 a lab in Boston University used external magnetic stimulation to activate parts of the brain in a cohort of volunteers that were older in age. This was the control group for a study on Alzheimer's patients. And they reported in a very good paper, that they could increase 30% of both short-term and long-term memory.
So this is the first serious case that I know of where again, this is not science fiction, this is demonstrated enhancement of, uh, mental ability in a human with noninvasive neurotechnology. So this could open the door to a whole industry that could use noninvasive devices, maybe magnetic simulation, maybe acoustical, maybe, who knows, optical, to enhance any aspect of our mental activity. And that, I mean, just imagine.
This is what we're actually focusing on our foundation right now, this issue of mental augmentation because we don't think it's science fiction. We think it's coming.

JARED GENSER: Let me just kind of amplify what Rafa's saying and to kind of make this as tangible as possible for your listeners, which is that, as Rafa was already alluding to, when you're talking about, of course, implantable devices, you know, they have to be licensed by the Food and Drug Administration. They're implanted through neurosurgery in the medical context. All the data that's being gathered is covered by, you know, HIPAA and other state health data laws. But there are already available on the market today 30 different kinds of wearable neurotechnology devices that you can buy today and use.
As one example, you know, there's the company, Muse, that has a meditation device and you can buy their device. You put it on your head, you meditate for an hour. The BCI - brain computer interface - connects to your app. And then basically you'll get back from the company, you know, decoding of your brain activity to know when you're in a meditative state or not.
The problem is that these are EEG scanning devices that, if they were used in a medical context, would be required to be licensed. But in a consumer context, there's no regulation of any kind. And you're talking about devices that can gather from gigabytes to terabytes of neural data today, of which you can only decode maybe 1% of it.
And the data that's being gathered, uh, you know, EEG scanning device data in wearable form, you could identify if a person has any of a number of different brain diseases and you could also decode about a dozen different mental states. Are you happy, are you sad? And so forth.
And so at our foundation, at the Neurorights Foundation, we actually did a very important study on this topic that actually was covered on the front page of the New York Times. And we looked at the user agreements and privacy agreements for the 30 different companies’ products that you can buy today, right now. And what we found was that in 29 out of the 30 cases, basically, it's carte blanche for the companies. They can download your data, they can do with it as they see fit, and they can transfer it, sell it, etc.
Only in one case did a company, ironically called Unicorn, actually keep the data on your local device, and it was never transferred to the company in question. And we benchmark those agreements across a half dozen different global privacy standards and found that there were just, you know, gigantic gaps that were there.
So, you know, why is that a problem? Well take the Muse device I just mentioned, they talk about how they've downloaded a hundred million hours of consumer neural data from people who have bought their device and used it. And we're talking about these studies in Australia and Japan that are decoding thought to text.
Today thought to text, you know, with the EEG can only be done at a relatively slow speed, like 10 or 15 words a minute with like maybe 40, 50% accuracy. But eventually it's gonna start to approach the speed of Eddie Chang's work in California, where with the implantable device you can do thought to text at 80 words a minute, 95% accuracy.
And so the problem is that in three, four years, let's say when this technology is perfected with a wearable device, this company Muse could theoretically go back to that hundred million hours of neural data and then actually decode what the person was thinking in the form of words when they were actually meditating.
And to help you understand as a last point, why is this, again, science and not science fiction? You know, Apple is already clearly aware of the potential here, and two years ago, they actually filed a patent application for their next generation AirPod device that is going to have built-in EEG scanners in each ear, right?
And they sell a hundred million pairs of AirPods every single year, right? And when this kind of technology, thought to text, is perfected in wearable form, those AirPods will be able to be used, for example, to do thought-to-text emails, thought-to-text text messages, et cetera.
But when you continue to wear those AirPod devices, the huge question is what's gonna be happening to all the other data that's being, you know, absorbed how is it going to be able to be used, and so forth. And so this is why it's really urgent at an international level to be dealing with this. And we're working at the United Nations and in many other places to develop various kinds of frameworks consistent with international human rights law. And we're also working, you know, at the national and sub-national level.
Rafa, my colleague, you know, led the charge in Chile to help create a first-ever constitutional amendment to a constitution that protects mental privacy in Chile. We've been working with a number of states in the United States now, uh, California, Colorado and Montana – very different kinds of states – have all amended their state consumer data privacy laws to extend their application to neural data. But it is really, really urgent in light of the fast developing technology and the enormous gaps between these consumer product devices and their user agreements and what is considered to be best practice in terms of data privacy protection.

CINDY COHN: Yeah, I mean I saw that study that you did and it's just, you know, it mirrors a lot of what we do in the other context where we've got click wrap licenses and other, you know, kind of very flimsy one-sided agreements that people allegedly agree to, but I don't think under any lawyer's understanding of like meeting of the minds, and there's a contract that you negotiate that it's anything like that.
And then when you add it to this context, I think it puts these problems on steroids in many ways and makes 'em really worse. And I think one of the things I've been thinking about in this is, you know, you guys have in some ways, you know, one of the scenarios that demonstrates how our refusal to take privacy seriously on the consumer side and on the law enforcement side is gonna have really, really dire, much more dire consequences for people potentially than we've even seen so far. And really requires serious thinking about, like, what do we mean in terms of protecting people's privacy and identity and self-determination?

JARED GENSER: Let me just interject on that one narrow point because I was literally just on a panel discussion remotely at the UN Crime Congress last week that was hosted by the UN Office in Drugs and Crime, UNODC and Interpol, the International Police Organization. And it was a panel discussion on the topic of emerging law enforcement uses of neurotechnologies. And so this is coming. They just launched a project jointly to look at potential uses as well as to develop, um, guidelines for how that can be done. But this is not at all theoretical. I mean, this is very, very practical.

CINDY COHN: And much of the funding for this technology has come out of the Department of Defense, so thinking about how we put the right guardrails in place is really important. And honestly, if you think that the only people who are gonna want access to the neural data that these devices are collecting are private companies who wanna sell us things, like, you know, that's not the history, right? Law enforcement comes for these things both locally and internationally, no matter who has custody of them. And so you kind of have to recognize that this isn't just a foray for kind of skeezy companies to do things we don't like.

JARED GENSER: Absolutely.

JASON KELLEY: Let's take a quick moment to thank our sponsor. How to Fix The Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology, enriching people's lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.
We also wanna thank EFF members and donors. You're the reason we exist, and EFF has been fighting for digital rights for 35 years, and that fight is bigger than ever. So please, if you like what we do, go to eff.org/pod to donate. Also, we'd love for you to join us at this year's EFF awards where we celebrate the people working towards the better digital future that we all care so much about.
Those are coming up on September 12th in San Francisco. You can find more information about that at eff.org/awards.
We also wanted to share that our friend Cory Doctorow has a new podcast you might like. Have a listen to this:
[WHO BROKE THE INTERNET TRAILER]
And now back to our conversation with Rafael Yuste and Jared Genser.

CINDY COHN: This might be a little bit of a geeky lawyer question, but I really appreciated the decision you guys made to really ground this in international human rights, which I think is tremendously important. But not obvious to most Americans as the kind of framework that we ought to invoke. And I was wondering how you guys came to that conclusion.

JARED GENSER: No, I think it's actually a very, very important question. I mean, I think that the bottom line is that there are a lot of ways to look at, um, questions like this. You know, you can think about, you know, a national constitution or national laws. You can think about international treaties or laws.
You can look at ethical frameworks or self governance by companies themselves, right? And at the end of the day, because of the seriousness and the severity of the potential downside risks if this kind of technology is misused or abused, you know, our view is that what we really need is what's referred to by lawyers as hard law, as in law that is binding and enforceable against states by citizens. And obviously binding on governments and what they do, binding on companies and what they do and so forth.
And so it's not that we think, for example, ethical frameworks or ethical standards or self-governance by companies are not important. They are very much a part of an overall approach, but our approach at the Neurorights Foundation is, let's look at hard law, and there are two kinds of hard law to look at. The first are international human rights treaties. These are multilateral agreements that states negotiate and come to agreements on. And when a country signs and ratifies a treaty, as the US has on the key relevant treaty here, which is the International Covenant on Civil and Political Rights, those rights get domesticated in the law of each country in the world that signs and ratifies them, and that makes them then enforceable. And so we think first and foremost, it's important that we ground our concerns about the misuse and abuse of these technologies in the requirements of international human rights law.
Because the United States is obligated and other countries in the world are obligated to protect their citizens from abuses of these rights.
And at the same time, of course, that isn't sufficient on its own. We also need to see, in certain contexts, probably not in the US context, amendments to a constitution – that's much harder to do in the US – but also laws that are actually enforceable against companies.
And this is why our work in California, Montana and Colorado is so important because now companies in California, as one illustration, which is where Apple is based and where Meta is based and so forth, right? They now have to provide the protections embedded in the California Consumer Privacy Act to all of their gathering and use of neural data, right?
And that means that you have a right to be forgotten. You have a right to demand your data not be transferred or sold to third parties. You have a right to have access to your data. Companies have obligations to tell you what data are they gathering, how are they gonna use it? If they propose selling or transferring it to whom and so forth, right?
So these are now ultimately gonna be binding law on companies, you know, based in California and, as we're developing this, around the world. But to us, you know, that is really what needs to happen.

JASON KELLEY: Your success has been pretty stunning. I mean, even though you're, you know, there's obviously so much more to do. We work to try to amend and change and improve laws at the state and local and federal level and internationally sometimes, and it's hard.
But the two of you together, I think there's something really fascinating about the way, you know, you're building a better future and building in protections for that better future at the same time.
And, like, you're aware of why that's so important. I think there's a big lesson there for a lot of people who work in the tech field and in the science field about, you know, you can make incredible things and also make sure they don't cause huge problems. Right? And that's just a really important lesson.
What we do with this podcast is we do try to think about what the better future that people are building looks like, what it should look like. And the two of you are, you know, thinking about that in a way that I think a lot of our guests aren't because you're at the forefront of a lot of this technology. But I'd love to hear what Rafa and then Jared, you each think, uh, science and the law look like if you get it right, if things go the way you hope they do, what, what does the technology look like? What did the protections look like? Rafa, could you start.

RAFAEL YUSTE: Yeah, I would comment, there's five places in the world today where there's, uh, hard law protection for brain activity and brain data: the Republic of Chile, the state of Rio Grande do Sul in Brazil, and the states of Colorado, California, and Montana in the US. And in every one of these places there have been votes in the legislature, and they're all bicameral legislatures, so there have been 10 votes, and every single one of those votes has been unanimous.
All political parties in Chile, in Brazil - actually in Brazil there were 16 political parties. That had never happened before, that they all agreed on something. California, Montana, and Colorado, all unanimous except for one no vote in Colorado from a person that votes against everything. He's like, uh, he has some axe to grind with, uh, his companions and he just votes no on everything.
But aside from this person - actually, the Colorado bill was introduced by a Democratic representative, but, uh, the Republican side took it to heart. The Republican senator said that this is the definition of a no-brainer. And he asked for permission to introduce that bill in the Senate in Colorado.
So the person that defended the bill in the Senate in Colorado was actually not a Democrat but a Republican. So why is that? To quote this Colorado senator, it's a no-brainer. The minute you get it, you understand: do you want your brain activity to be decoded without your consent? Well, this is not a good idea.
So not a single person that we've met has opposed this issue. So I think Jared and I do the best job we can and we work very hard. And I should tell you that we're doing this pro bono, without being compensated for our work. But the reason behind the success is really the issue, it's not just us. I think that we're dealing with an issue on which there is fundamental, widespread, universal agreement.

JARED GENSER: What I would say is that, you know, on the one hand, and we appreciate of course, the kind words about the progress we're making. We have made a lot of progress in a relatively short period of time, and yet we have a dramatically long way to go.
We need to further interpret international law in the way that I'm describing to ensure that privacy includes mental privacy all around the world, and we really need national laws in every country in the world. Subnational laws and various places too, and so forth.
I will say that, as you know from all the great work you guys do with your podcast, getting something done at the federal level is of course much more difficult in the United States because of the divisions that exist. And there is no federal consumer data privacy law because we've never been able to get Republicans and Democrats to agree on the text of one.
The only kinds of consumer data protected at the federal level are healthcare data under HIPAA and financial data. And there have been multiple efforts to try to do a federal consumer data privacy law that have failed. In the last Congress, there was something called the American Privacy Rights Act. It was bipartisan, and it basically just got ripped apart because they were trying to put together about a dozen different categories of data that would be protected at the federal level. And each one of those has a whole industry association associated with it.
And we were able to get that draft bill amended to include neural data in it, which it didn't originally include, but ultimately the bill died before even coming to a vote at committees. In our view, you know, that then just leaves state consumer data privacy laws. There are about 35 states now that have state level laws. 15 states actually still don't.
And so we are working state by state. Ultimately, I think that when it comes, especially to the sensitivity of neural data, right? You know, we need a federal law that's going to protect neural data. But because it's not gonna be easy to achieve, definitely not as a package with a dozen other types of data, or in general, you know, one way of course to get to a federal solution is to start to work with lots of different states. All these different state consumer data privacy laws are different. I mean, they're similar, but they have differences to them, right?
And ultimately, as you start to see different kinds of regulation being adopted in different states relating to the same kind of data, our hope is that industry will start to say to members of Congress and the, you know, the Trump administration, hey, we need a common way forward here and let's set at least a floor at the federal level for what needs to be done. If states want to regulate it more than that, that's fine, but ultimately, I think that there's a huge amount of work still left to be done, obviously all around the world and at the state level as well.

CINDY COHN: I wanna push you a little bit. So what does it look like if we get it right? What is, what is, you know, what does my world look like? Do I, do I get the cool earbuds or do I not?

JARED GENSER: Yeah, I mean, look, I think the bottom line is that, you know, the world that we want to see, and I mean Rafa of course is the technologist, and I'm the human rights guy. But the world that we wanna see is one in which, you know, we promote innovation while simultaneously, you know, protecting people from abuses of their human rights and ensure that neuro technologies are developed in an ethical manner, right?
I mean, so we do need self-regulation by industry. You know, we do need national and international laws. But at the same time, you know, one in three people in their lifetimes will have a neurological disease, right?
The brain diseases that people know best, you know, from family, friends or their own experience, whether you look at Alzheimer's or Parkinson's, I mean, these are devastating, debilitating and all, today, you know, irreversible conditions. I mean, all you can do with any brain disease today at best is to slow its progression. You can't stop its progression and you can't reverse it.
And eventually, in 20 or 30 years, from these kinds of emerging neurotechnologies, we're going to be able to ultimately cure brain diseases. And so that's what the world looks like: think about all of the different ways in which humanity is going to be improved, when we're able to not only address, but cure, diseases of this kind, right?
And, you know, one of the other exciting parts of emerging neurotechnologies is our ability to understand ourselves, right? And our own brain and how it operates and functions. And that is, you know, very, very exciting.
Eventually we're gonna be able to decode not only thought-to-text, but even our subconscious thoughts. And that of course, you know, raises enormous questions. And this technology is also gonna, um, even raise fundamental questions about, you know, what does it actually mean to be human? And who are we as humans, right?
And so, for example, one of the side effects of deep brain stimulation in a very, very, very small percentage of patients is a change in personality. In other words, you know, if you put a device in someone's, you know, mind to control the symptoms of Parkinson's, when you're obviously messing with a human brain, other things can happen.
And there's a well known case of a woman, um, who went from being, in essence, an extreme introvert to an extreme extrovert, you know, with deep brain stimulation as a side effect. And she's currently being studied right now, um, along with other examples of these kinds of personality changes.
And if we can figure out, for example, what parts of the brain deal with being an introvert or an extrovert, you know, you're also raising fundamental questions about the possibility of being able to change your personality, in part, with a brain implant, right? I mean, we can already do that, obviously, with psychotropic medications for people who have mental illnesses, or through psychotherapy and so forth. But there are gonna be other ways in which we can understand how the brain operates and functions and optimize our lives through the development of these technologies.
So the upside is enormous, you know. Medically and scientifically, economically, from a self-understanding point of view. Right? And at the same time, the downside risks are profound. It's not just decoding our thoughts. I mean, we're on the cusp of an unbeatable lie detector test, which could have huge positive and negative impacts, you know, in criminal justice contexts, right?
So there are so many different implications of these emerging technologies, and we are often so far behind, on the regulatory side, the actual scientific developments that in this particular case we really need to try to do everything possible to at least develop these solutions at a pace that matches the developments, let alone get ahead of them.

JASON KELLEY: I'm fascinated to see, in talking to them, how successful they've been when there isn't a big, you know, lobbying wing of neurorights products and companies stopping them, because they're ahead of the game. I think that's the thing that really struck me, and something that we can hopefully learn from in the future: if you're ahead of the curve, you can implement these privacy protections much more easily, obviously. That was really fascinating. And of course just talking to them about the technology set my mind spinning.

CINDY COHN: Yeah, in both directions, right? Both what an amazing opportunity and oh my God, how terrifying this is, both at the same time. I thought it was interesting because I think from where we sit as people who are trying to figure out how to bring privacy into some already baked technologies and business models and we see how hard that is, you know, but they feel like they're a little behind the curve, right? They feel like there's so much more to do. So, you know, I hope that we were able to kind of both inspire them and support them in this, because I think to us, they look ahead of the curve and I think to them, they feel a little either behind or over, you know, not overwhelmed, but see the mountain in front of them.

JASON KELLEY: A thing that really stands out to me is when Rafa was talking about the popularity of these protections, you know, and who on all sides of the aisle is voting in favor of these protections, it's heartwarming, right? It's inspiring that you can get people to understand the sort of real danger of a lack of privacy protections in one field. It makes me feel like we can still get people, you know, we can still win privacy protections in the rest of the fields.
Like you're worried for good reason about what's going on in your head and that, how that should be protected. But when you type on a computer, you know, that's just the stuff in your head going straight onto the web. Right? We've talked about how like the phone or your search history are basically part of the contents of your mind. And those things need privacy protections too. And hopefully we can, you know, use the success of their work to talk about how we need to also protect things that are already happening, not just things that are potentially going to happen in the future.

CINDY COHN: Yeah. And you see kind of both kinds of issues, right? Like, if they're right, it's scary. When they're wrong it's scary. But also, what I really appreciated about them is that they're excited about the potentialities too. This isn't an effort that's about the house of no innovation. In fact, this is where responsibility ought to come from: the people who are developing the technology are recognizing the harms and then partnering with people who have expertise in kind of the law and policy and regulatory side of things. So that together, you know, they're kind of a dream team of how you do this responsibly.
And that's really inspiring to me because I think sometimes people get caught in this, um, weird, you know, choose, you know, the tech will either protect us or the law will either protect us. And I think what Rafa and Jared are really embodying and making real is that we need both of these to come together to really move into a better technological future.

JASON KELLEY: And that's our episode for today. Thanks so much for joining us. If you have feedback or suggestions, we'd love to hear from you. Visit eff.org/podcast and click on listener feedback. And while you're there, you can become a member and donate, maybe even pick up some of the merch and just see what's happening in digital rights this week and every week.
Our theme music is by Nat Keefe of Beat Mower with Reed Mathis, and How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology. We'll see you next time. I'm Jason Kelley.

CINDY COHN: And I'm Cindy Cohn.

MUSIC CREDITS: This podcast is licensed Creative Commons Attribution 4.0 international, and includes the following music licensed Creative Commons Attribution 3.0 unported by its creators: Drops of H2O, The Filtered Water Treatment by Jay Lang. Additional music, theme remixes and sound design by Gaetan Harris.

Josh Richman

Fourth Amendment Victory: Michigan Supreme Court Reins in Digital Device Fishing Expeditions

1 week 1 day ago

EFF legal intern Noam Shemtov was the principal author of this post.

When police have a warrant to search a phone, should they be able to see everything on the phone—from family photos to communications with your doctor to everywhere you’ve been since you first started using the phone—in other words, data that is in no way connected to the crime they’re investigating? The Michigan Supreme Court just ruled no. 

In People v. Carson, the court held that to satisfy the Fourth Amendment, warrants authorizing searches of cell phones and other digital devices must contain express limitations on the data police can review, restricting searches to data that they can establish is clearly connected to the crime.

The realities of modern cell phones call for a strict application of rules governing the scope of warrants.

EFF, along with ACLU National and the ACLU of Michigan, filed an amicus brief in Carson, expressly calling on the court to limit the scope of cell phone search warrants. We explained that the realities of modern cell phones call for a strict application of rules governing the scope of warrants. Without clear limits, warrants would  become de facto licenses to look at everything on the device, a great universe of information that amounts to “the sum of an individual’s private life.” 

The Carson case shows just how broad many cell phone search warrants can be. Defendant Michael Carson was suspected of stealing money from a neighbor’s safe. The warrant to search his phone allowed the police to access:

Any and all data including, text messages, text/picture messages, pictures and videos, address book, any data on the SIM card if applicable, and all records or documents which were created, modified, or stored in electronic or magnetic form and, any data, image, or information.

There were no temporal or subject matter limitations. Consequently, investigators obtained over 1,000 pages of information from Mr. Carson’s phone, the vast majority of which did not have anything to do with the crime under investigation.

The Michigan Supreme Court held that this extremely broad search warrant was “constitutionally intolerable” and violated the particularity requirement of the Fourth Amendment. 

The Fourth Amendment requires that warrants “particularly describ[e] the place to be searched, and the persons or things to be seized.” This is intended to limit authorization to search to the specific areas and things for which there is probable cause to search and to prevent police from conducting “wide-ranging exploratory searches.” 

Cell phones hold vast and varied information, including our most intimate data.

Across two opinions, a four-Justice majority joined a growing national consensus of courts recognizing that, given the immense and ever-growing storage capacity of cell phones, warrants must spell out up-front limitations on the information the government may review, including the dates and data categories that constrain investigators’ authority to search. And magistrates reviewing warrants must ensure the information provided by police in the warrant affidavit properly supports a tailored search.

This ruling is good news for digital privacy. Cell phones hold vast and varied information, including our most intimate data—“privacies of life” like our personal messages, location histories, and medical and financial information. The U.S. Supreme Court has recognized as much, saying that application of Fourth Amendment principles to searches of cell phones must respond to cell phones’ unique characteristics, including the weighty privacy interests in our digital data. 

We applaud the Michigan Supreme Court’s recognition that unfettered cell phone searches pose serious risks to privacy. We hope that courts around the country will follow its lead in concluding that the particularity rule applies with special force to such searches and requires clear limitations on the data the government may access.

Jennifer Pinsof

Victory! Pen-Link's Police Tools Are Not Secret

1 week 4 days ago

In a victory for transparency, the government contractor Pen-Link agreed to disclose the prices and descriptions of surveillance products that it sold to a local California Sheriff's office.

The settlement ends a months-long California public records lawsuit involving the Electronic Frontier Foundation and the San Joaquin County Sheriff's Office. It provides further proof that the surveillance tools used by governments are not secret and shouldn't be treated that way under the law.

Last year, EFF submitted a California public records request to the San Joaquin County Sheriff's Office for information about its work with Pen-Link and its subsidiary Cobwebs Technologies. Pen-Link went to court to try to block the disclosure, claiming the names of its products and prices were trade secrets. EFF later entered the case to obtain the records it requested.

The Records Show the Sheriff Bought Online Monitoring Tools

The records disclosed in the settlement show that in late 2023, the Sheriff’s Office paid $180,000 for a two-year subscription to the Tangles “Web Intelligence Platform,” which is a Cobwebs Technologies product that allows the Sheriff to monitor online activity. The subscription allows the Sheriff to perform hundreds of searches and requests per month. The source of information includes the “Dark Web” and “Webloc,” according to the price quotation. According to the settlement, the Sheriff’s Office was offered but did not purchase a series of other add-ons including “AI Image processing” and “Webloc Geo source data per user/Seat.”

Have you been blocked from receiving similar information? We’d like to hear from you.

The intelligence platform overall has been described in other documents as analyzing data from the “open, deep, and dark web, to mobile and social.” And Webloc has been described as a platform that “provides access to vast amounts of location-based data in any specified geographic location.” Journalists at multiple news outlets have chronicled Pen-Link's technology and have published Cobwebs training manuals that demonstrate that its product can be used to target activists and independent journalists. Major local, state, and federal agencies use Pen-Link's technology.

The records also show that in late 2022 the Sheriff’s Office purchased some of Pen-Link’s more traditional products that help law enforcement execute and analyze data from wiretaps and pen-registers after a court grants approval. 

Government Surveillance Tools Are Not Trade Secrets

The public has a right to know what surveillance tools the government is using, no matter whether the government develops its own products or purchases them from private contractors. There are a host of policy, legal, and factual reasons that the surveillance tools sold by contractors like Pen-Link are not trade secrets.

Public information about these products and prices helps communities have informed conversations and make decisions about how their government should operate. In this case, Pen-Link argued that its products and prices are trade secrets partially because governments rely on the company to “keep their data analysis capabilities private.” The company argued that clients would “lose trust” and governments may avoid “purchasing certain services” if the purchases were made public. This troubling claim highlights the importance of transparency. The public should be skeptical of any government tool that relies on secrecy to operate.

Information about these tools is also essential for defendants and criminal defense attorneys, who have the right to discover when these tools are used during an investigation. In support of its trade secret claim, Pen-Link cited terms of service that purported to restrict the government from disclosing its use of this technology without the company’s consent. Terms like this cannot be used to circumvent the public’s right to know, and governments should not agree to them.

Finally, in order for surveillance tools and their prices to be protected as a trade secret under the law, they have to actually be secret. However, Pen-Link’s tools and their prices are already public across the internet—in previous public records disclosures, product descriptions, trademark applications, and government websites.

Lessons Learned

Government surveillance contractors should consider the policy implications, reputational risks, and waste of time and resources when attempting to hide from the public the full terms of their sales to law enforcement.

Cases like these, known as reverse-public records act lawsuits, are troubling because a well-resourced company can frustrate public access by merely filing the case. Not every member of the public, researcher, or journalist can afford to litigate their public records request. Without a team of internal staff attorneys, it would have cost EFF tens of thousands of dollars to fight this lawsuit.

Luckily, in this case, EFF had the ability to fight back. And we will continue our surveillance transparency work. That is why EFF required some attorneys' fees to be part of the final settlement.

Related Cases: Pen-Link v. County of San Joaquin Sheriff’s Office
Mario Trujillo

Victory! Ninth Circuit Limits Intrusive DMCA Subpoenas

1 week 5 days ago

The Ninth Circuit upheld an important limitation on Digital Millennium Copyright Act (DMCA) subpoenas that other federal courts have recognized for more than two decades. The DMCA, a misguided anti-piracy law passed in the late nineties, created a bevy of powerful tools, ostensibly to help copyright holders fight online infringement. Unfortunately, the DMCA's powerful protections are ripe for abuse by "copyright trolls," unscrupulous litigants who exploit the system at everyone else's expense.

The DMCA’s “notice and takedown” regime is one of these tools. Section 512 of the DMCA creates “safe harbors” that protect service providers from liability, so long as they disable access to content when a copyright holder notifies them that the content is infringing, and fulfill some other requirements. This gives copyright holders a quick and easy way to censor allegedly infringing content without going to court. 

Unfortunately, the DMCA’s powerful protections are ripe for abuse by “copyright trolls”

Section 512(h) is ostensibly designed to facilitate this system, by giving rightsholders a fast and easy way of identifying anonymous infringers. Section 512(h) allows copyright holders to obtain a judicial subpoena to unmask the identities of allegedly infringing anonymous internet users, just by asking a court clerk to issue one, and attaching a copy of the infringement notice. In other words, they can wield the court’s power to override an internet user’s right to anonymous speech, without permission from a judge.  It’s easy to see why these subpoenas are prone to misuse.

Internet service providers (ISPs)—the companies that provide an internet connection (e.g. broadband or fiber) to customers—are obvious targets for these subpoenas. Often, copyright holders know the Internet Protocol (IP) address of an alleged infringer, but not their name or contact information. Since ISPs assign IP addresses to customers, they can often identify the customer associated with one.

Fortunately, Section 512(h) has an important limitation that protects users.  Over two decades ago, several federal appeals courts ruled that Section 512(h) subpoenas cannot be issued to ISPs. Now, in In re Internet Subscribers of Cox Communications, LLC, the Ninth Circuit agreed, as EFF urged it to in our amicus brief.

As the Ninth Circuit held:

Because a § 512(a) service provider cannot remove or disable access to infringing content, it cannot receive a valid (c)(3)(A) notification, which is a prerequisite for a § 512(h) subpoena. We therefore conclude from the text of the DMCA that a § 512(h) subpoena cannot issue to a § 512(a) service provider as a matter of law.

This decision preserves the understanding of Section 512(h) that internet users, websites, and copyright holders have shared for decades. As EFF explained to the court in its amicus brief:

[This] ensures important procedural safeguards for internet users against a group of copyright holders who seek to monetize frequent litigation (or threats of litigation) by coercing settlements—copyright trolls. Affirming the district court and upholding the interpretation of the D.C. and Eighth Circuits will preserve this protection, while still allowing rightsholders the ability to find and sue infringers.

EFF applauds this decision. And because three federal appeals courts have all ruled the same way on this question—and none have disagreed—ISPs all over the country can feel confident about protecting their customers’ privacy by simply throwing improper DMCA 512(h) subpoenas in the trash.

Tori Noble

From Book Bans to Internet Bans: Wyoming Lets Parents Control the Whole State’s Access to The Internet

1 week 5 days ago

If you've read about the sudden appearance of age verification across the internet in the UK and thought it would never happen in the U.S., take note: many politicians want the same or even stricter laws. As of July 1, South Dakota and Wyoming have enacted laws requiring any website that hosts any sexual content to implement age verification measures. These laws could capture a broad range of non-pornographic content, including classic literature and art, and expose a wide range of platforms, of all sizes, to civil or criminal liability for not using age verification on every user. That includes social media networks like X, Reddit, and Discord; online retailers like Amazon and Barnes & Noble; and streaming platforms like Netflix and Rumble—essentially, any site that allows user-generated or published content without gatekeeping access based on age.

These laws expand on the flawed logic of last month’s troubling Supreme Court decision, Free Speech Coalition v. Paxton, which gave Texas the green light to require age verification for sites where at least one-third of the content is sexual material deemed “harmful to minors.” Wyoming and South Dakota seem to interpret this decision as giving them license to require age verification—and impose potential legal liability—for any website that contains ANY image, video, or post with sexual content that could be interpreted as harmful to minors. Platforms or websites may be able to comply by implementing an “age gate” within certain sections of their sites where, for example, user-generated content is allowed, or at the point of entry to the entire site.

Although these laws are in effect, we do not believe the Supreme Court’s decision in FSC v. Paxton gives these laws any constitutional legitimacy. You do not need a law degree to see the difference between the Texas law—which targets sites where a substantial portion (one third) of content is “sexual material harmful to minors”—and these laws, which apply to any site that contains even a single instance of such material. In practice, it is the difference between burdening adults with age gates for websites that host “adult” content, and burdening the entire internet, including sites that allow user-generated content or published content.

The law invites parents in Wyoming to take enforcement for the entire state—every resident, and everyone else's children—into their own hands

But lawmakers, prosecutors, and activists in conservative states have worked for years to aggressively expand the definition of “harmful to minors” and use other methods to censor a broad swath of content: diverse educational materials, sex education resources, art, and even award-winning literature. Books like The Bluest Eye by Toni Morrison, The Handmaid’s Tale by Margaret Atwood, and And Tango Makes Three have all been swept up in these crusades—not because of their overall content, but because of isolated scenes or references.

Wyoming’s law is also particularly extreme: rather than providing for enforcement by the Attorney General, HB0043 is a “bounty” law that deputizes any resident with a child to file civil lawsuits against websites they believe are in violation, effectively turning anyone into a potential content cop. There is no central agency, no regulatory oversight, and no clear standard. Instead, the law invites parents in Wyoming to take enforcement for the entire state—every resident, and everyone else's children—into their own hands by suing websites that contain a single example of objectionable content. Though most other state age-verification laws allow individuals to make reports to state Attorneys General, who are responsible for enforcement, and some include a private right of action allowing parents or guardians to file civil claims for damages, the Wyoming law is similar to laws in Louisiana and Utah that rely entirely on civil enforcement.

This is a textbook example of a “heckler’s veto,” where a single person can unilaterally decide what content the public is allowed to access. However, it is clear that the Wyoming legislature explicitly designed the law this way in a deliberate effort to sidestep state enforcement and avoid an early constitutional court challenge, as many other bounty laws targeting people who assist in abortions, drag performers, and trans people have done. The result? An open invitation from the Wyoming legislature to weaponize its citizens, and the courts, against platforms, big or small. Because when nearly anyone can sue any website over any content they deem unsafe for minors, the result isn’t safety. It’s censorship.

That also means your personal website or blog—if it includes any “sexual content harmful to minors”—is also at risk. 

Imagine a Wyomingite stumbling across an NSFW subreddit or a Tumblr fanfic blog and deciding it violates the law. If they were a parent of a minor, that resident could sue the platform, potentially forcing those websites to restrict or geo-block access to the entire state in order to avoid the cost and risk of litigation. And because there’s no threshold for how much “harmful” content a site must host, a single image or passage could be enough. That also means your personal website or blog—if it includes any “sexual content harmful to minors”—is also at risk. 

This law will likely be challenged, and eventually, halted, by the courts. But given that the state cannot enforce it, those challenges will not come until a parent sues a website. Until then, its mere existence poses a serious threat to free speech online. Risk-averse platforms may over-correct, over-censor, or even restrict access to the state entirely just to avoid the possibility of a lawsuit, as Pornhub has already done. And should sites impose age-verification schemes to comply, they will be a speech and privacy disaster for all state residents.

And let’s be clear: these state laws are not outliers. They are part of a growing political movement to redefine terms like “obscene,” “pornographic,” and “sexually explicit”  as catchalls to restrict content for both adults and young people alike. What starts in one state and one lawsuit can quickly become a national blueprint. 

If we don’t push back now, the internet as we know it could disappear behind a wall of fear and censorship.

Age-verification laws like these have relied on vague language, intimidating enforcement mechanisms, and public complacency to take root. Courts may eventually strike them down, but in the meantime, users, platforms, creators, and digital rights advocacy groups need to stay alert, speak up against these laws, and push back while they can. When governments expand censorship and surveillance offline, it's our job at EFF to protect your access to a free and open internet. Because if we don’t push back now, the internet as we know it— the messy, diverse, and open internet we know—could disappear behind a wall of fear and censorship.

Ready to join us? Urge your state lawmakers to reject harmful age-verification laws. Call or email your representatives to oppose KOSA and any other proposed federal age-checking mandates. Make your voice heard by talking to your friends and family about what we all stand to lose if the age-gated internet becomes a global reality. Because the fight for a free internet starts with us.

Rindala Alajaji

New Documents Show First Trump DOJ Worked With Congress to Amend Section 230

2 weeks 1 day ago

After rolling out its own proposal in the summer of 2020 to significantly limit a key law protecting internet users’ speech, the Department of Justice under the first Trump administration actively worked with lawmakers to support further efforts to stifle online speech.

The new documents, disclosed in an EFF Freedom of Information Act (FOIA) lawsuit, show officials were talking with Senate staffers working to pass speech- and privacy-chilling bills like the EARN IT Act and PACT Act (neither became law). DOJ officials also communicated with an organization that sought to condition Section 230’s legal protections on websites using age-verification systems if they hosted sexual content.

Section 230 protects users’ online speech by protecting the online intermediaries we all rely on to communicate on blogs, social media platforms, and educational and cultural platforms like Wikipedia and the Internet Archive. Section 230 embodies the principle that we should all be responsible for our own actions and statements online, but generally not those of others. The law prevents most civil suits against users or services that are based on what others say.

DOJ’s work to weaken Section 230 began before President Donald Trump issued an executive order targeting social media services in 2020, and officials in DOJ appeared to be blindsided by the order. EFF was counsel to plaintiffs who challenged the order, and President Joe Biden later rescinded it. EFF filed two FOIA suits seeking records about the executive order and the DOJ’s work to weaken Section 230.

The DOJ’s latest release provides more detail on a general theme that has been apparent for years: that the DOJ in 2020 flexed its powers to try to undermine or rewrite Section 230. The documents show that in addition to meeting with congressional staffers, DOJ was critical of a proposed amendment to the EARN IT Act, with one official stating that it “completely undermines” the sponsors’ argument for rejecting DOJ’s proposal to exempt so-called “Bad Samaritan” websites from Section 230.

Further, DOJ reviewed and proposed edits to a rulemaking petition to the Federal Communications Commission that tried to reinterpret Section 230. That effort never moved forward because the FCC lacks any legal authority to reinterpret the law.

You can read the latest release of documents here, and all the documents released in this case are here.

Related Cases: EFF v. OMB (Trump 230 Executive Order FOIA)
Aaron Mackey

President Trump’s War on “Woke AI” Is a Civil Liberties Nightmare

2 weeks 1 day ago

The White House’s recently unveiled “AI Action Plan” wages war on so-called “woke AI”—including large language models (LLMs) that provide information inconsistent with the administration’s views on climate change, gender, and other issues. It also targets measures designed to mitigate the generation of racially and gender-biased content and even hate speech. The reproduction of this bias is a pernicious problem that AI developers have struggled to solve for over a decade.

A new executive order called “Preventing Woke AI in the Federal Government,” released alongside the AI Action Plan, seeks to strong-arm AI companies into modifying their models to conform with the Trump Administration’s ideological agenda.

The executive order requires AI companies that receive federal contracts to prove that their LLMs are free from purported “ideological biases” like “diversity, equity, and inclusion.” This heavy-handed censorship will not make models more accurate or “trustworthy,” as the Trump Administration claims, but is a blatant attempt to censor the development of LLMs and restrict them as a tool of expression and information access. While the First Amendment permits the government to choose to purchase only services that reflect government viewpoints, the government may not use that power to influence what services and information are available to the public. Lucrative government contracts can push commercial companies to implement features (or biases) that they wouldn't otherwise, and those often roll down to the user. Doing so would impact the 60 percent of Americans who get information from LLMs, and it would force developers to roll back efforts to reduce biases—making the models much less accurate, and far more likely to cause harm, especially in the hands of the government. 

Less Accuracy, More Bias and Discrimination

It’s no secret that AI models—including gen AI—tend to discriminate against racial and gender minorities. AI models use machine learning to identify and reproduce patterns in data that they are “trained” on. If the training data reflects biases against racial, ethnic, and gender minorities—which it often does—then the AI model will “learn” to discriminate against those groups. In other words, garbage in, garbage out. Models also often reflect the biases of the people who train, test, and evaluate them. 

This is true across different types of AI. For example, “predictive policing” tools trained on arrest data that reflects overpolicing of black neighborhoods frequently recommend heightened levels of policing in those neighborhoods, often based on inaccurate predictions that crime will occur there. Generative AI models are also implicated. LLMs already recommend more criminal convictions, harsher sentences, and less prestigious jobs for people of color. Although people of color account for less than half of the U.S. prison population, 80 percent of Stable Diffusion's AI-generated images of inmates have darker skin. Over 90 percent of AI-generated images of judges were men; in real life, 34 percent of judges are women. 

These models aren’t just biased—they’re fundamentally incorrect. Race and gender aren’t objective criteria for deciding who gets hired or convicted of a crime. Those discriminatory decisions reflected trends in the training data that could be caused by bias or chance—not some “objective” reality. Setting fairness aside, biased models are just worse models: they make more mistakes, more often. Efforts to reduce bias-induced errors will ultimately make models more accurate, not less. 

Biased LLMs Cause Serious Harm—Especially in the Hands of the Government

But inaccuracy is far from the only problem. When government agencies start using biased AI to make decisions, real people suffer. Government officials routinely make decisions that impact people’s personal freedom and access to financial resources, healthcare, housing, and more. The White House’s AI Action Plan calls for a massive increase in agencies’ use of LLMs and other AI—while all but requiring the use of biased models that automate systemic, historical injustice. Using AI simply to entrench the way things have always been done squanders the promise of this new technology.

We need strong safeguards to prevent government agencies from procuring biased, harmful AI tools. In a series of executive orders, as well as his AI Action Plan, the Trump Administration has rolled back the already-feeble Biden-era AI safeguards. This makes AI-enabled civil rights abuses far more likely, putting everyone’s rights at risk. 

And the Administration could easily exploit the new rules to pressure companies to make publicly available models worse, too. Corporations like healthcare companies and landlords increasingly use AI to make high-impact decisions about people, so more biased commercial models would also cause harm. 

We have argued against using machine learning to make predictive policing decisions or other punitive judgments for just these reasons, and will continue to protect your right not to be subject to biased government determinations influenced by machine learning.

Tori Noble

🫥 Spotify Face Scans Are Just the Beginning | EFFector 37.10

2 weeks 3 days ago

Catching up on your backlog of digital rights news has never been easier! EFF has a one-stop-shop to keep you up to date on the latest in the fight against censorship and surveillance—our EFFector newsletter.

This time we're covering an act of government intimidation in Florida, where the state subpoenaed a venue for surveillance video after it hosted an LGBTQ+ pride event; calling out data brokers in California for failing to respond to requests for personal data, even though responses are required by state law; and explaining why Canada's Bill C-2 would open the floodgates for U.S. surveillance.

Don't forget to check out our audio companion to EFFector as well! We're interviewing staff about some of the important work that they're doing. This time, EFF Senior Speech and Privacy Activist Paige Collings covers the harms of the age verification measures being passed across the globe. Listen now on YouTube or the Internet Archive.

Listen to EFFector

EFFECTOR 37.10 - Spotify Face Scans Are Just the Beginning

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero

Torture Victim’s Landmark Hacking Lawsuit Against Spyware Maker Can Proceed, Judge Rules

2 weeks 3 days ago
EFF is Co-Counsel in Case Detailing Harms Caused by Export of U.S. Cybersurveillance Technology and Training to Repressive Regimes

PORTLAND, OR – Saudi human rights activist Loujain Alhathloul’s groundbreaking lawsuit concerning spying software that enabled her imprisonment and torture can advance, a federal judge ruled in an opinion unsealed Tuesday.

U.S. District Judge Karin J. Immergut of the District of Oregon ruled that Alhathloul’s lawsuit against DarkMatter Group and three of its former executives can proceed on its claims under the Computer Fraud and Abuse Act – the first time that a human rights case like this has gone so far under this law. The judge dismissed other claims made under the Alien Tort Statute. 

Alhathloul is represented in the case by the Electronic Frontier Foundation (EFF), the Center for Justice and Accountability, Foley Hoag, and Tonkon Torp LLP.

"This important ruling is the first to let a lawsuit filed by the victim of a foreign government’s human rights abuses, enabled by U.S. spyware used to hack the victim’s devices, proceed in our federal courts,” said EFF Civil Liberties Director David Greene. “This case is particularly important at a time when transnational human rights abuses are making daily headlines, and we are eager to proceed with proving our case.” 

“Transparency in such times and circumstances is a cornerstone that enacts integrity and drives accountability as it offers the necessary information to understand our reality and act upon it. The latter presents a roadmap to a safer world,” Alhathloul said. “Today’s judge’s order has become a public court document only to reinforce those rooted concepts of transparency that will one day lead to accountability.” 

Alhathloul, 36, a nominee for the 2019 and 2020 Nobel Peace Prize, has been a powerful advocate for women’s rights in Saudi Arabia for more than a decade. She was at the forefront of the public campaign advocating for women’s right to drive in Saudi Arabia and has been a vocal critic of the country’s male guardianship system.  

The lawsuit alleges that defendants DarkMatter Group, Marc Baier, Ryan Adams, and Daniel Gericke were hired by the UAE to target Alhathloul and other perceived dissidents as part of the UAE’s broader cooperation with Saudi Arabia. According to the lawsuit, the defendants used U.S. cybersurveillance technology, along with their U.S. intelligence training, to install spyware on Alhathloul’s iPhone and extract data from it, including while she was in the United States and communicating with U.S. contacts. After the hack, Alhathloul was arbitrarily detained by the UAE security services and forcibly rendered to Saudi Arabia, where she was imprisoned and tortured. She is no longer in prison, but she is currently subject to an illegal travel ban and unable to leave Saudi Arabia. 

The case was filed in December 2021; Judge Immergut dismissed it in March 2023 with leave to amend, and the amended complaint was filed in May 2023.  

“This Court concludes that Plaintiff has shown that her claims arise out of Defendants’ forum-related contacts,” Judge Immergut wrote in her opinion. “Defendants’ forum-related contacts include (1) their alleged tortious exfiltration of data from Plaintiff’s iPhone while she was in the U.S. and (2) their acquisition, use, and enhancement of U.S.-created exploits from U.S. companies to create the Karma hacking tool used to accomplish their tortious conduct. Plaintiff’s CFAA claims arise out of these U.S. contacts.” 

For the judge’s opinion:  https://www.eff.org/document/alhathloul-v-darkmatter-opinion-and-order-motion-dismiss

For more about the case: https://www.eff.org/cases/alhathloul-v-darkmatter-group 

Contact: David Greene, Civil Liberties Director, davidg@eff.org
Josh Richman

Podcast Episode: Separating AI Hope from AI Hype

2 weeks 3 days ago

If you believe the hype, artificial intelligence will soon take all our jobs, or solve all our problems, or destroy all boundaries between reality and lies, or help us live forever, or take over the world and exterminate humanity. That’s a pretty wide spectrum, and leaves a lot of people very confused about what exactly AI can and can’t do. In this episode, we’ll help you sort that out: For example, we’ll talk about why even superintelligent AI cannot simply replace humans for most of what we do, nor can it perfect or ruin our world unless we let it.

[Embedded audio player served from simplecast.com: https://player.simplecast.com/49181a0e-f8b4-4b2a-ae07-f087ecea2ddd] Privacy info. This embed will serve content from simplecast.com


(You can also find this episode on the Internet Archive and on YouTube.) 

 Arvind Narayanan studies the societal impact of digital technologies with a focus on how AI does and doesn’t work, and what it can and can’t do. He believes that if we set aside all the hype, and set the right guardrails around AI’s training and use, it has the potential to be a profoundly empowering and liberating technology. Narayanan joins EFF’s Cindy Cohn and Jason Kelley to discuss how we get to a world in which AI can improve aspects of our lives from education to transportation—if we make some system improvements first—and how AI will likely work in ways that we barely notice but that help us grow and thrive. 

In this episode you’ll learn about:

  • What it means to be a “techno-optimist” (and NOT the venture capitalist kind)
  • Why we can’t rely on predictive algorithms to make decisions in criminal justice, hiring, lending, and other crucial aspects of people’s lives
  • How large-scale, long-term, controlled studies are needed to determine whether a specific AI application actually lives up to its accuracy promises
  • Why “cheapfakes” tend to be more (or just as) effective than deepfakes in shoring up political support
  • How AI is and isn’t akin to the Industrial Revolution, the advent of electricity, and the development of the assembly line 

Arvind Narayanan is professor of computer science and director of the Center for Information Technology Policy at Princeton University. Along with Sayash Kapoor, he publishes the AI Snake Oil newsletter, followed by tens of thousands of researchers, policy makers, journalists, and AI enthusiasts; they also authored “AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference” (2024, Princeton University Press). He has studied algorithmic amplification on social media as a visiting senior researcher at Columbia University's Knight First Amendment Institute; co-authored an online textbook on fairness and machine learning; and led Princeton's Web Transparency and Accountability Project, uncovering how companies collect and use our personal information. 

Resources:

What do you think of “How to Fix the Internet?” Share your feedback here.

Transcript

ARVIND NARAYANAN: The people who believe that superintelligence is coming very quickly tend to think of most tasks that we wanna do in the real world as being analogous to chess, where it was the case that initially chessbots were not very good. At some point, they reached human parity. And then very quickly after that, simply by improving the hardware and then later on by improving the algorithms, including by using machine learning, they're vastly, vastly superhuman.
We don't think most tasks are like that. This is true when you talk about tasks that are integrated into the real world, you know, require common sense, require a kind of understanding of a fuzzy task description. It's not even clear when you've done well and when you've not done well.
We think that human performance is not limited by our biology. It's limited by our state of knowledge of the world, for instance. So the reason we're not better doctors is not because we're not computing fast enough, it's just that medical research has only given us so much knowledge about how the human body works and you know, how drugs work and so forth.
And the other is you've just hit the ceiling of performance. The reason people are not necessarily better writers is that it's not even clear what it means to be a better writer. It's not as if there's gonna be a magic piece of text, you know, that's gonna, like persuade you of something that you never wanted to believe, for instance, right?
We don't think that sort of thing is even possible. And so those are two reasons why in the vast majority of tasks, we think AI is not going to become better or at least much better than human professionals.

CINDY COHN: That's Arvind Narayanan explaining why AIs cannot simply replace humans for most of what we do. I'm Cindy Cohn, the executive director of the Electronic Frontier Foundation.

JASON KELLEY: And I'm Jason Kelley, EFF’s Activism Director. This is our podcast series, How to Fix the Internet.

CINDY COHN: On this show, we try to get away from the dystopian tech doomsayers – and offer space to envision a more hopeful and positive digital future that we can all work towards.

JASON KELLEY: And our guest is one of the most level-headed and reassuring voices in tech.

CINDY COHN: Arvind Narayanan is a professor of computer science at Princeton and the director of the Center for Information Technology Policy. He’s also the co-author of a terrific newsletter called AI Snake Oil – which has also become a book – where he and his colleague Sayash Kapoor debunk the hype around AI and offer a clear-eyed view of both its risks and its benefits.
He is also a self-described “techno-optimist”, but he means that in a very particular way – so we started off with what that term means to him.

ARVIND NARAYANAN: I think there are multiple kinds of techno-optimism. There's the Mark Andreessen kind where, you know, let the tech companies do what they wanna do and everything will work out. I'm not that kind of techno-optimist. My kind of techno-optimism is all about the belief that we actually need folks to think about what could go wrong and get ahead of that so that we can then realize what our positive future is.
So for me, you know, AI can be a profoundly empowering and liberating technology. In fact, going back to my own childhood, this is a story that I tell sometimes, I was growing up in India and, frankly, the education system kind of sucked. My geography teacher thought India was in the Southern Hemisphere. That's a true story.

CINDY COHN: Oh my God. Whoops.

ARVIND NARAYANAN: And, you know, there weren't any great libraries nearby. And so a lot of what I knew, and I not only had to teach myself, but it was hard to access reliable, good sources of information. We had had a lot of books of course, but I remember when my parents saved up for a whole year and bought me a computer that had a CD-Rom encyclopedia on it.
That was a completely life-changing moment for me. Right. So that was the first time I could get close to this idea of having all information at our fingertips. That was even before I kind of had internet access even. So that was a very powerful moment. And I saw that as a lesson in information technology having the ability to level the playing field across different countries. And that was part of why I decided to get into computer science.
Of course I later realized that my worldview was a little bit oversimplified. Tech is not automatically a force for good. It takes a lot of effort and agency to ensure that it will be that way. And so that led to my research interest in the societal aspects of technology as opposed to more of the tech itself.
Anyway, all of that is a long-winded way of saying I see a lot of that same potential in AI that existed in the way that internet access, if done right, has the potential and, and has been bringing, a kind of liberatory potential to so many in the world who might not have the same kinds of access that we do here in the western world with our institutions and so forth.

CINDY COHN: So let's drill down a second on this because I really love this image. You know, I was a little girl growing up in Iowa and seeing the internet made me feel the same way. Like I could have access to all the same information that people who were in the big cities and had the fancy schools could have access to.
So, you know, from I think all around the world, there's this experience and depending on how old you are, it may be that you discovered Wikipedia as opposed to a CD Rom of an encyclopedia, but it's that same moment and, I think that that is the promise that we have to hang on to.
So what would an educational world look like? You know, if you're a student or a teacher, if we are getting AI right?

ARVIND NARAYANAN: Yeah, for sure. So let me start with my own experience. I kind of actually use AI a lot in the way that I learn new topics. This is something I was surprised to find myself doing given the well-known limitations of these chatbots around accuracy, but it turned out that there are relatively easy ways to work around those limitations.
Uh, one example of a user adaptation is to always be in a critical mode, where you know that out of 10 things that AI is telling you, one is probably going to be wrong. And being in that skeptical frame of mind, actually, in my view, enhances learning. And that's the right frame of mind to be in anytime you're learning anything, I think. So that's one kind of adaptation.
But there are also technology adaptations, right? Just the simplest example: if you ask AI to be in Socratic mode, for instance, in a conversation, uh, a chatbot will take on a much more appropriate role for helping the user learn, as opposed to one where students might ask for answers to homework questions and, you know, end up taking shortcuts that limit their critical thinking and their ability to learn and grow, right? So that's one simple example to make the point that a lot of this is not about AI itself, but how we use AI.
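The Socratic-mode idea is essentially a standing instruction pinned ahead of the student's messages. A minimal sketch in Python, where the prompt wording and the message structure are illustrative assumptions following the common chat-completion convention, not any particular vendor's actual API:

```python
# Hypothetical sketch of "Socratic mode". The prompt text and message
# format below are illustrative assumptions, not a specific vendor's API.
SOCRATIC_SYSTEM_PROMPT = (
    "You are a tutor. Never state the answer outright. Respond with one "
    "short guiding question that helps the student take the next step."
)

def socratic_messages(student_question: str) -> list[dict]:
    """Build a chat-completion style message list with the Socratic
    instruction pinned as the system role."""
    return [
        {"role": "system", "content": SOCRATIC_SYSTEM_PROMPT},
        {"role": "user", "content": student_question},
    ]
```

The same chatbot behaves very differently depending on whether this instruction is present, which is the point: the adaptation lives in how the tool is framed, not in the model itself.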
More broadly, in terms of a vision for what integrating this into the education system could look like, I do think there is a lot of promise in personalization. Again, this has been a target of a lot of overselling, that AI can be a personalized tutor to every individual. And I think there was a science fiction story that was intended as a warning, but a lot of people in the AI industry have taken it as a manual, or a vision for what this should look like.
But even in my experiences with my own kids, right, they're five and three, even little things like, you know, I was, uh, talking to my daughter about fractions the other day, and I wanted to help her visualize fractions. And I asked Claude to make a little game that would help do that. And within, you know, it was 30 seconds or a minute or whatever, it made a little game where it would generate a random fraction, like three over five, and then ask the child to move a slider. And then it will divide the line segment into five parts, highlight three, show how close the child did to the correct answer, and, you know, give feedback and that sort of thing, and you can kind of instantly create that, right?
So this convinces me that there is in fact a lot of potential in AI and personalization if a particular child is struggling with a particular thing, a teacher can create an app on the spot and have the child play with it for 10 minutes and then throw it away, never have to use it again. But that can actually be meaningfully helpful.
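The little exercise described here, generate a random fraction, take a slider guess, report how close, can be sketched in a few lines of Python. This is a hypothetical text-based reconstruction, not the actual Claude-generated game:

```python
import random

def make_fraction(rng=random):
    """Pick a random proper fraction, e.g. 3/5."""
    den = rng.randint(2, 6)
    num = rng.randint(1, den - 1)
    return num, den

def score_guess(num, den, guess, tolerance=0.05):
    """Score a 0-to-1 'slider' guess at num/den: return (error, close enough?)."""
    error = abs(guess - num / den)
    return error, error <= tolerance

def play_round():
    """One round of the game, with input() standing in for the slider."""
    num, den = make_fraction()
    guess = float(input(f"Where is {num}/{den} on a line from 0 to 1? "))
    error, ok = score_guess(num, den, guess)
    print("Great!" if ok else f"Off by {error:.2f}: {num}/{den} = {num / den:.2f}")
```

The disposable nature of the tool is the interesting part: a teacher could generate something like this for one child's ten-minute session and then throw it away.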

JASON KELLEY: This kind of AI and education conversation is really close to my heart because I have a good friend who runs a school, and as soon as AI sort of burst onto the scene he was so excited for exactly the reasons you're talking about. But at the same time, a lot of schools immediately put in place sort of like, you know, Chat GPT bans and things like that.
And we've talked a little bit on EFF’s Deep Links blog about how, you know, that's probably an overstep, in that people need to know how to use this, whether they're students or not. They need to understand what the capabilities are so they can find the sorts of uses of it that adapt to them, rather than just sort of, like, immediately trying to use it to do their homework.
So do you think schools, you know, given the way you see it, are well positioned to get to the point you're describing? I mean, how, like, that seems like a pretty far future where a lot of teachers know how AI works or school systems understand it. Like how do we actually do the thing you're describing because most teachers are overwhelmed as it is.

ARVIND NARAYANAN: Exactly. That's the root of the problem. I think there needs to be, you know, structural changes. There needs to be more funding. And I think there also needs to be more of an awareness so that there's less of this kind of adversarial approach. Uh, I think about, you know, the levers for change where I can play a little part. I can't change the school funding situation, but just as one simple example, I think the way that researchers are looking at this right now, today, is not the most helpful, and it can be reframed in a way that is much more actionable for teachers and others. So there's a lot of studies that look at what is the impact of AI in the classroom that, to me, are the equivalent of, is eating food good for you? It’s addressing the question at the wrong level of abstraction.

JASON KELLEY: Yeah.

ARVIND NARAYANAN: You can't answer the question at that high level because you haven't specified any of the details that actually matter. Whether food is good for you entirely depends on what food it is, and if the way you studied that was to go into the grocery store and sample the first 15 items that you saw, you're measuring properties of your arbitrary sample instead of the underlying phenomena that you wanna study.
And so I think researchers have to drill down much deeper into what does AI for education actually look like, right? If you ask the question at the level of are chatbots helping or hurting students, you're gonna end up with nonsensical answers. So I think the research can change and then other structural changes need to happen.

CINDY COHN: I heard you make a similar point on a podcast, which is, you know, what if we were deciding whether vehicles were good or bad, right? Everyone could understand that that's way too broad a characterization of a general purpose kind of device to come to any reasonable conclusion. You have to look at the difference between, you know, a truck, a car, a taxi, or various other kinds of vehicles in order to do that. And I think you do a good job of that in your book, at least in kind of starting to give us some categories, and the one that we're most focused on at EFF is the difference between predictive technologies and other kinds of AI. Because I think, like you, we have identified these predictive technologies as being kind of the most dangerous ones we see right now in actual use. Am I right about that?

ARVIND NARAYANAN: That's our view in the book, yes, in terms of the kinds of AI that has the biggest consequences in people's lives, and also where the consequences are very often quite harmful. So this is AI in the criminal justice system, for instance, used to predict who might fail to show up to court or who might commit a crime and then kind of prejudge them on that basis, right? And deny them their freedom on the basis of something they're predicted to do in the future, which in turn is based on the behavior of other similar defendants in the past, right? So there are two questions here, a technical question and a moral one.
The technical question is, how accurate can you get? And it turns out when we review the evidence, not very accurate. There's a long section in our book at the end of which we conclude that one legitimate way to look at it is that all that these systems are predicting is the more prior arrests you have, the more likely you are to be arrested in the future.
So that's the technical aspect, and that's because, you know, it's just not known who is going to commit a crime. Yes, some crimes are premeditated, but a lot of the others are spur of the moment or depend on things, random things that might happen in the future.
It's something we all recognize intuitively, but when the words AI or machine learning are used, some of these decision makers seem to somehow suspend common sense and believe that the future is actually accurately predictable.

CINDY COHN: The other piece that I've seen you and others talk about is that the only data you have is what the cops actually do, and that doesn't tell you about crime, it tells you about what the cops do. So my friends at the Human Rights Data Analysis Group called it predicting the police rather than predicting crime.
And we know there's a big difference between the crime that the cops respond to and the general crime. So it's gonna look like the people who commit crimes are the people who always commit crimes when it's just the subset that the police are able to focus on, and we know there's a lot of bias baked into that as well.
So it's not just inside the data, it's outside the data that you have to think about in terms of these prediction algorithms and what they're capturing and what they're not. Is that fair?

ARVIND NARAYANAN: That's totally, yeah, that's exactly right. And more broadly, you know, beyond the criminal justice system, these predictive algorithms are also used in hiring, for instance, and, you know, it's not the same morally problematic kind of use where you're denying someone their freedom, but a lot of the same pitfalls apply.
I think one way in which we try to capture this in the book is that AI snake oil, or broken AI, as we sometimes call it, is appealing to broken institutions. So the reason that AI is so appealing to hiring managers is that yes, it is true that something is broken with the way we hire today. Companies are getting hundreds of applications, maybe a thousand for each open position. They're not able to manually go through all of them. So they want to try to automate the process. But that's not actually addressing what is broken about the system, and when they're doing that, the applicants are also using AI to increase the number of positions they can apply to. And so it's only escalating the arms race, right?
I think the reason this is broken is that we fundamentally don't have good ways of knowing who's going to be a good fit for which position, and so by pretending that we can predict it with AI, we're just elevating this elaborate random number generator into this moral arbiter. And there can be moral consequences of this as well.
Like, obviously, you know, someone who deserved a job might be denied that job, but it actually gets amplified when you think about some of these AI recruitment vendors providing their algorithm to 10 different companies. And so every company that someone applies to is judging someone in the same way.
So in our view, the only way to get away from this is to make the necessary organizational reforms to these broken processes. Just as one example, in software, for instance, many companies will offer people, students especially, internships, and use that to have a more in-depth assessment of a candidate. I'm not saying that necessarily works for every industry or every level of seniority, but we have to actually go deeper and emphasize the human element instead of trying to be more superficial and automated with AI.

JASON KELLEY: One of the themes that you bring up in the newsletter and the book is AI evaluation. Let's say you have one of these companies with the hiring tool: why is it so hard to evaluate the sort of like, effectiveness of these AI models or the data behind them? I know that it can be, you know, difficult if you don't have access to it, but even if you do, how do we figure out the shortcomings that these tools actually have?

ARVIND NARAYANAN: There are a few big limitations here. Let's say we put aside the data access question, the company itself wants to figure out how accurate these decisions are.

JASON KELLEY: Hopefully!

ARVIND NARAYANAN: Yeah. Um, yeah, exactly. They often don't wanna know, but even if you do wanna know that in terms of the technical aspect of evaluating this, it's really the same problem as the medical system has in figuring out whether a drug works or not.
And we know how hard that is. That actually requires a randomized controlled trial. It actually requires experimenting on people, which in turn introduces its own ethical quandaries. So you need oversight for the ethics of it, but then you have to recruit hundreds, sometimes thousands of people, follow them for a period of several years, and figure out whether the treatment group, for which you either, you know, gave the drug or, in the hiring case, implemented your algorithm, has a different outcome on average from the control group, for whom you either gave a placebo or, in the hiring case, used the traditional hiring procedure.
Right. So that's actually what it takes. And, you know, there's just no incentive in most companies to do this, because obviously they don't value knowledge for its own sake, and the ROI is just not worth it. The effort that they're gonna put into this kind of evaluation is not going to, uh, allow them to capture the value out of it.
It brings knowledge to the public, to society at large. So what do we do here? Right? So usually in cases like this, the government is supposed to step in and use public funding to do this kind of research. But I think we're pretty far from having a cultural understanding that this is the sort of thing that's necessary.
And just like the medical community has gotten used to doing this, we need to do this whenever we care about the outcomes, right? Whether it's in criminal justice, hiring, wherever it is. So I think that'll take a while, and our book tries to be a very small first step towards changing public perception that this is not something you can somehow automate using AI. These are actually experiments on people. They're gonna be very hard to do.
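The kind of trial described above reduces to comparing average outcomes across the two arms and asking whether the gap could be chance. A minimal sketch, where the outcome data and the permutation test are illustrative assumptions, not anything from the book:

```python
import random

def difference_in_means(treatment, control):
    """Average outcome gap between the two arms of the trial."""
    return sum(treatment) / len(treatment) - sum(control) / len(control)

def permutation_p_value(treatment, control, trials=10_000, rng=random):
    """How often does a random relabeling of participants produce a gap
    at least as large as the observed one? (Two-sided p-value estimate.)"""
    observed = abs(difference_in_means(treatment, control))
    pooled = list(treatment) + list(control)
    n = len(treatment)
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        if abs(difference_in_means(pooled[:n], pooled[n:])) >= observed:
            hits += 1
    return hits / trials

# Hypothetical one-year outcomes (1 = good outcome) for the two arms:
algorithm_screened = [1, 1, 0, 1, 1, 0, 1, 1]
traditional_hiring = [1, 0, 0, 1, 1, 0, 1, 0]
gap = difference_in_means(algorithm_screened, traditional_hiring)
```

Even this toy version makes the cost visible: the hard part is not the arithmetic but recruiting the participants, randomizing them, and waiting years for the outcomes.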

JASON KELLEY: Let's take a quick moment to thank our sponsor. “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.
We also want to thank EFF members and donors. You are the reason we exist. EFF has been fighting for digital rights for 35 years, and that fight is bigger than ever, so please, if you like what we do, go to eff.org/pod to donate. Also, we’d love for you to join us at this year’s EFF awards, where we celebrate the people working towards the better digital future that we all care so much about. Those are coming up on September 12th in San Francisco. You can find more information about that at eff.org/awards.
We also wanted to share that our friend Cory Doctorow has a new podcast – have a listen to this.
[WHO BROKE THE INTERNET TRAILER]
And now back to our conversation with Arvind Narayanan.

CINDY COHN: So let's go to the other end of AI world. The people who, you know, are, I think they call it AI safety, where they're really focused on the, you know, robots-are-gonna-kill-us-all kind of concerns. 'Cause that's a, that's a piece of this story as well. And I'd love to hear your take on, you know, kind of the doom loop version of AI.

ARVIND NARAYANAN: Sure. Yeah. So there's uh, a whole chapter in the book where we talk about concerns around catastrophic risk from future, more powerful AI systems, and we have also elaborated a lot of those in a new paper we released called AI as Normal Technology, if folks are interested in looking that up. And look, I mean, I'm glad that folks are studying AI safety and the unusual, let's say, kinds of risks that might arise in the future that are not necessarily direct extrapolations of the risks that we have currently.
But where we object to these arguments is the claim that we have enough knowledge and evidence of those risks being so urgent and serious that we have to put serious policy measures in place now, uh, you know, such as, uh, curbing open weights AI, for instance, because you never know who's gonna download these systems and what they're gonna do with them.
So we have a few reasons why we think those kinds of really strong arguments are going too far. One reason is that the kinds of interventions that we will need, if we want to control this at the level of the technology, as opposed to the use and deployment of the technology, those kind of non-proliferation measures as we call them, are, in our view, almost guaranteed not to work.
And to even try to enforce that you're kind of inexorably led to the idea of building a world authoritarian government that can monitor all, you know, AI development everywhere and make sure that the companies, the few companies that are gonna be licensed to do this, are doing it in a way that builds in all of the safety measures, the alignment measures, as this community calls them, that we want out of these AI models.
Because models that took, you know, hundreds of millions of dollars to build just a few years ago can now be built using a cluster of enthusiasts’ machines in a basement, right? And if we imagine that these safety risks are tied to the capability level of these models, which is an assumption that a lot of people have in order to call for these strong policy measures, then the predictions that came out of that line of thinking, in my view, have already repeatedly been falsified.
So when GPT-2 was built, right, this was back in 2019, OpenAI claimed that it was so dangerous in terms of misinformation being out there, that it was going to have potentially deleterious impacts on democracy, that they couldn't release it on an open weights basis.
That's a model that my students now build, you know, in an afternoon, just to learn the process of building models, right? So that's how cheap that has gotten six years later, and vastly more powerful models than GPT-2 have now been made available openly. And when you look at the impact on AI-generated misinformation, we did a study. We looked at the Wired database of the use of AI in election-related activities worldwide. And those fears associated with AI-generated misinformation have simply not come true, because it turns out that the purpose of election misinformation is not to convince someone of the other tribe, if you will, who is skeptical, but just to give fodder for your own tribe so that they will, you know, continue to support whatever it is you're pushing for.
And for that purpose, it doesn't have to be that convincing or that deceptive, it just has to be cheap fakes, as it's called. It's the kind of thing that anyone can do, you know, in 10 minutes with Photoshop. Even with the availability of sophisticated AI image generators, a lot of the AI misinformation we're seeing is these kinds of cheap fakes that don't even require that kind of sophistication to produce, right?
So a lot of these supposed harms really have the wrong theory in mind of how powerful technology will lead to potentially harmful societal impacts. Another great one is in cybersecurity, which, you know, as you know, I worked in for many years before I started working in AI.
And if the concern is that AI is gonna find software vulnerabilities and exploit them and exploit critical infrastructure, whatever, better than humans can, I mean, we crossed that threshold a decade or two ago. Automated methods like fuzzing have long been used to find new cyber vulnerabilities, but it turns out that this has actually helped defenders over attackers. Because software companies can and do, and this is, you know, really almost the first line of defense, use these automated vulnerability discovery methods to find and fix vulnerabilities in their own software before even putting it out there where attackers get a chance to, uh, to find those vulnerabilities.
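Fuzzing, as mentioned, just means hammering a target with randomly mutated inputs and recording what crashes it. A toy sketch of the idea; real fuzzers such as AFL or libFuzzer are coverage-guided and far more sophisticated:

```python
import random

def mutate(seed: bytes, rng=random) -> bytes:
    """Flip, insert, or delete a few random bytes of a seed input."""
    data = bytearray(seed)
    for _ in range(rng.randint(1, 4)):
        op = rng.choice(["flip", "insert", "delete"])
        if op == "flip" and data:
            data[rng.randrange(len(data))] ^= 1 << rng.randrange(8)
        elif op == "insert":
            data.insert(rng.randrange(len(data) + 1), rng.randrange(256))
        elif op == "delete" and data:
            del data[rng.randrange(len(data))]
    return bytes(data)

def fuzz(target, seeds, iterations=1000):
    """Feed mutated inputs to `target`; collect the inputs that crash it."""
    crashers = []
    for _ in range(iterations):
        candidate = mutate(random.choice(seeds))
        try:
            target(candidate)
        except Exception:
            crashers.append(candidate)
    return crashers
```

A defender runs this against their own parser before release, which is exactly the "first line of defense" use described above: the same automation finds the bug before an attacker gets the chance.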
So to summarize all of that, a lot of the fears are based on a kind of incorrect theory of the interaction between technology and society. Uh, we have other ways to defend, and in fact, in a lot of ways, AI itself is the defense against some of these AI-enabled threats we're talking about. And thirdly, the defenses that involve trying to control AI are not going to work, and they are, in our view, pretty dangerous for democracy.

CINDY COHN: Can you talk a little bit about the AI as normal technology? Because I think this is a world that we're headed into that you've been thinking about a little more. 'cause we're, you know, we're not going back.
Anybody who hangs out with people who write computer code, knows that using these systems to write computer code is like normal now. Um, and it would be hard to go back even if you wanted to go back. Um, so tell me a little bit about, you know, this, this version of, of AI as normal technology. 'cause I think it, it feels like the future now, but actually I think depending, you know, what do they say, the future is here, it's just not evenly distributed. Like it is not evenly distributed yet. So what, what does it look like?

ARVIND NARAYANAN: Yeah, so a big part of the paper takes seriously the prospect of cognitive automation using AI, that AI will at some point be able to do, you know, with some level of accuracy and reliability, most of the cognitive tasks that are valuable in today's economy at least, and asks, how quickly will this happen? What are the effects going to be?
So a lot of people who think this will happen think that it's gonna happen this decade, and a lot of this, you know, uh, brings a lot of fear to people and a lot of very short-term thinking. But our paper looks at it in a very different way. So first of all, we think that even if this kind of cognitive automation is achieved, to use an analogy to the industrial revolution, where a lot of physical tasks became automated, it didn't mean that human labor was superfluous, because we don't take powerful physical machines like cranes or whatever and allow them to operate unsupervised, right?
So with those physical tasks that became automated, the meaning of what labor is, is now all about the supervision of those physical machines that are vastly more physically powerful than humans. So we think, and this is just an analogy, but we have a lot of reasoning in the paper for why we think this will be the case. What jobs might mean in a future with cognitive automation is primarily around the supervision of AI systems.
And so for us, that's a, that's a very positive view. We think that for the most part, that will still be fulfilling jobs in certain sectors. There might be catastrophic impacts, but it's not that across the board you're gonna have drop-in replacements for human workers that are gonna make human jobs obsolete. We don't really see that happening, and we also don't see this happening in the space of a few years.
We talk a lot about what are the various sources of inertia that are built into the adoption of any new technology, especially general purpose technology like electricity. We talk about, again, another historic analogy where factories took several decades to figure out how to replace their steam boilers in a useful way with electricity, not because it was technically hard, but because it required organizational innovations, like changing the whole layout of factories around the concept of the assembly line. So we think through what some of those changes might have to be when it comes to the use of AI. And we, you know, we say that we have a, a few decades to, to make this transition and that, even when we do make the transition, it's not going to be as scary as a lot of people seem to think.

CINDY COHN: So let's say we're living in the future, the Arvind future where we've gotten all these AI questions, right. What does it look like for, you know, the average person or somebody doing a job?

ARVIND NARAYANAN: Sure. A few big things. I wanna use the internet as an analogy here. Uh, 20, 30 years ago, we used to kind of log onto the internet, do a task, and then log off. But now, the internet is simply the medium through which all knowledge work happens, right? So we think that if we get this right, in the future AI is gonna be the medium through which knowledge work happens. It's kind of there in the background and automatically doing stuff that we need done, without us necessarily having to go to an AI application and ask it something and then bring the result back to something else.
There is this famous definition of AI that AI is whatever hasn't been done yet. So what that means is that when a technology is new and it's not working that well and its effects are double-edged, that's when we're more likely to call it AI.
But eventually it starts working reliably and it kind of fades into the background and we take it for granted as part of our digital or physical environment. And we think that that's gonna happen with generative AI to a large degree. It's just gonna be invisibly making all knowledge work a lot better, and human work will be primarily about exercising judgment over the AI work that's happening pervasively, as opposed to humans being the ones doing, you know, the nuts and bolts of the thinking in any particular occupation.
I think another one is, uh, I hope that we will have gotten better at recognizing the things that are intrinsically human and putting more human effort into them, that we will have freed up more human time and effort for those things that matter. So some folks, for instance, are saying, oh, let's automate government and replace it with a chatbot. Uh, you know, we point out that that's missing the point of democracy. If a chatbot is making decisions, it might be more efficient in some sense, but it's not in any way reflecting the will of the people. So whatever people's concerns are with government being inefficient, automation is not going to be the answer. We can think about structural reforms, and we certainly should, and, you know, maybe it will, uh, free up more human time to do the things that are intrinsically human and really matter, such as how we govern ourselves and so forth.
Um, and maybe if I can have one last thought around what this positive vision of the future looks like, uh, I would go back to the very thing we started from, which is AI and education. I do think there's orders of magnitude more human potential to open up, and AI is not a magic bullet here.
You know, technology on the whole is only one small part of it, but I think as we more generally become wealthier and we have, you know, lots of different reforms, uh, hopefully one of those reforms is going to be schools and education systems being much better funded, being able to operate much more effectively, and, you know, every child one day being able to perform, uh, as well as the highest-achieving children today.
And there's, there's just an enormous range. And so being able to improve human potential, to me is the most exciting thing.

CINDY COHN: Thank you so much, Arvind.

ARVIND NARAYANAN: Thank you Jason and Cindy. This has been really, really fun.

CINDY COHN: I really appreciate Arvind's hopeful and correct idea that actually what most of us do all day isn't really reducible to something a machine can replace. That, you know, real life just isn't like a game of chess, or, you know, the test you have to pass to be a lawyer, or things like that. And that there's a huge gap between, you know, the actual job and the thing that the AI can replicate.

JASON KELLEY: Yeah, and he's really thinking a lot about how the debates around AI in general are framed at this really high level, which seems incorrect, right? I mean, it's sort of like asking, is food good for you, are vehicles good for you? But he's much more nuanced, you know? AI is good in some cases, not good in others. And his big takeaway for me was that, you know, people need to be skeptical about how they use it. They need to be skeptical about the information it gives them, and they need to sort of learn what methods they can use to make AI work with them and for them, for the application they're using it for.
It's not something you can just apply, you know, wholesale across anything which, which makes perfect sense, right? I mean, no one I think thinks that, but I think industries are plugging AI into everything or calling it AI anyway. And he's very critical of that, which I think is, is good and, and most people are too, but it's happening anyway. So it's good to hear someone who's really thinking about it this way point out why that's incorrect.

CINDY COHN:  I think that's right. I like the idea of normalizing AI and thinking about it as a general purpose tool that might be good for some things and, and it's bad for others, honestly, the same way computers are, computers are good for some things and bad for others. So, you know, we talk about vehicles and food in the conversation, but actually think you could talk about it for, you know, computing more broadly.
I also liked his response to the doomers, you know, pointing out that a lot of the harms that people are claiming will end the world kind of have the wrong theory in mind about how a powerful technology will lead to bad societal impact. You know, he's not saying that it won't, but he's pointing out that in cybersecurity, for example, some of the automated methods have been around for a while, he talked about fuzzing, but there are others, and those techniques, while attackers can use them too, actually have spurred greater protections in cybersecurity. And the lesson is one we learn all the time in security especially: the cat and mouse game is just gonna continue.
And anybody who thinks they've checkmated, either on the good side or the bad side, is probably wrong. And that I think is an important insight so that, you know, we don't get too excited about the possibilities of AI, but we also don't go all the way to the, the doomers side.

JASON KELLEY: Yeah. You know, the normal technology thing was really helpful for me, right? It's something that, like you said with computers, is a tool that has applications in some cases and not others. And, you know, I don't know if anyone thought when the internet was developed that it was going to end the world or save it. I guess some people might have thought either one, but, you know, neither is true, right? And, you know, it's been many years now and we're still learning how to make the internet useful, and I think it'll be a long time before we've necessarily figured out how AI can be useful. But there's a lot of lessons we can take away from the growth of the internet about how to apply AI.
You know, my dishwasher, I don't think, needs to have wifi. I don't think it needs to have AI either. I'll probably end up buying one that has those things because that's the way the market goes. But it seems like the way we've sort of, uh, figured out where the applications are for these different general purpose technologies in the past is something we can continue to do for AI.

CINDY COHN:  Yeah, and honestly it points to competition and user control, right? I mean, the reason I think a lot of people are feeling stuck with AI is because we don't have an open market for systems where you can decide, I don't want AI in my dishwasher, or I don't want surveillance in my television.
And that's a market problem. And one of the things that he said a lot is that, you know, “just add AI” doesn't solve problems with broken institutions. And I think it circles back to the fact that we don't have a functional market, we don't have real consumer choice right now. And so that's why some of the fears about AI, and it's not just consumer choice, I mean worker choice and other things as well, really come back to the problems in those systems, in the way power works in those systems.
If you just center this on the tech, you're kind of missing the bigger picture and also the things that we might need to do to address it. I wanted to circle back to what you said about the internet because of course it reminds me of Barlow's declaration on the independence of cyberspace, which you know, has been interpreted by a lot of people, as saying that the internet would magically make everything better and, you know, Barlow told me directly, like, you know, what he said was that by projecting a positive version of the online world and speaking as if it was inevitable, he was trying to bring it about, right?
And I think this might be another area where we do need to bring about a better future, um, and we need to posit a better future, but we also have to be clear-eyed about the, the risks and, you know, whether we're headed in the right direction or not, despite what we, what we hope for.

JASON KELLEY: And that's our episode for today. Thanks so much for joining us. If you have feedback or suggestions, we'd love to hear from you. Visit eff.org/podcast and click on listener feedback. And while you're there, you can become a member and donate, maybe even pick up some of the merch and just see what's happening in digital rights this week and every week.
Our theme music is by Nat Keefe of BeatMower with Reed Mathis, and How to Fix the Internet is supported by the Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology. We'll see you next time. I'm Jason Kelley.

CINDY COHN: And I'm Cindy Cohn.

MUSIC CREDITS: This podcast is licensed Creative Commons Attribution 4.0 International, and includes the following music licensed Creative Commons Attribution 3.0 Unported by its creators: Drops of H2O (The Filtered Water Treatment) by J.Lang. Additional music, theme remixes, and sound design by Gaëtan Harris.

 

Josh Richman

Fake Clinics Quietly Edit Their Websites After Being Called Out on HIPAA Claims

2 weeks 4 days ago

In a promising sign that public pressure works, several crisis pregnancy centers (CPCs, also known as “fake clinics”) have quietly scrubbed misleading language about privacy protections from their websites. 

Earlier this year, EFF sent complaints to attorneys general in eight states (FL, TX, AR, MO, TN, OK, NE, and NC), asking them to investigate these centers for misleading the public with false claims about their privacy practices—specifically, falsely stating or implying that they are bound by the Health Insurance Portability and Accountability Act (HIPAA). These claims are especially deceptive because many of these centers are not licensed medical clinics or do not have any medical providers on staff, and thus are not subject to HIPAA’s protections.

Now, after an internal follow-up investigation, we’ve found that our efforts are already bearing fruit: Of the 21 CPCs we cited as exhibits in our complaints, six have completely removed HIPAA references from their websites, and one has made partial changes (removed one of two misleading claims). Notably, every center we flagged in our letters to Texas AG Ken Paxton and Arkansas AG Tim Griffin has updated its website—a clear sign that clinics in these states are responding to scrutiny.

While 14 remain unchanged, this is a promising development. These centers are clearly paying attention—and changing their messaging. We haven’t yet received substantive responses from the state attorneys general beyond formal acknowledgements of our complaints, but these early results confirm what we’ve long believed: transparency and public pressure work.

These changes (often quiet edits to privacy policies on their websites or deleting blog posts) signal that the CPC network is trying to clean up their public-facing language in the wake of scrutiny. But removing HIPAA references from a website doesn’t mean the underlying privacy issues have been fixed. Most CPCs are still not subject to HIPAA, because they are not licensed healthcare providers. They continue to collect sensitive information without clearly disclosing how it’s stored, used, or shared. And in the absence of strong federal privacy laws, there is little recourse for people whose data is misused. 

These clinics have misled patients who are often navigating complex and emotional decisions about their health, misrepresented themselves as bound by federal privacy law, and falsely referred people to the U.S. Department of Health and Human Services for redress—implying legal oversight and accountability. They made patients believe their sensitive data was protected, when in many cases, it was shared with affiliated networks, or even put on the internet for anyone to see—including churches or political organizations.

That’s why we continue to monitor these centers—and call on state attorneys general to do the same. 

Rindala Alajaji

Americans, Be Warned: Lessons From Reddit’s Chaotic UK Age Verification Rollout

3 weeks 1 day ago

Age verification has officially arrived in the UK thanks to the Online Safety Act (OSA), a UK law requiring online platforms to check that all UK-based users are at least eighteen years old before allowing them to access broad categories of “harmful” content that go far beyond graphic sexual content. EFF has extensively criticized the OSA for eroding privacy, chilling speech, and undermining the safety of the children it aims to protect. Now that it’s gone into effect, these countless problems have begun to reveal themselves, and the absurd, disastrous outcome illustrates why we must work to avoid this age-verified future at all costs.

Perhaps you’ve seen the memes as large platforms like Spotify and YouTube attempt to comply with the OSA, while smaller sites—like forums focused on parenting, green living, and gaming on Linux—either shut down or cease some operations rather than face massive fines for not following the law’s vague, expensive, and complicated rules and risk assessments. 

But even Reddit, a site that prizes anonymity and has regularly demonstrated its commitment to digital rights, was doomed to fail in its attempt to comply with the OSA. Though Reddit is not alone in bowing to the UK mandates, it provides a perfect case study and a particularly instructive glimpse of what the age-verified future would look like if we don’t take steps to stop it.

It’s Not Just Porn—LGBTQ+, Public Health, and Politics Forums All Behind Age Gates

On July 25, users in the UK were shocked and rightfully revolted to discover that their favorite Reddit communities were now locked behind age verification walls. Under the new policies, UK Redditors were asked to submit a photo of their government ID and/or a live selfie to Persona, the for-profit vendor that Reddit contracts with to provide age verification services. 

For many, this was the first time they realized what the OSA would actually mean in practice—and the outrage was immediate. As soon as the policy took effect, reports emerged from users that subreddits dedicated to LGBTQ+ identity and support, global journalism and conflict reporting, and even public health-related forums like r/periods, r/stopsmoking, and r/sexualassault were walled off to unverified users. A few more absurd examples of the communities that were blocked off, according to users, include: r/poker, r/vexillology (the study of flags), r/worldwar2, r/earwax, r/popping (the home of grossly satisfying pimple-popping content), and r/rickroll (yup). This is, again, exactly what digital rights advocates warned about. 

Every user in the country is now faced with a choice: submit their most sensitive data for privacy-invasive analysis, or stay off of Reddit entirely. Which would you choose? 

The OSA defines "harmful" in multiple ways that go far beyond pornography, so the obstacles UK users are experiencing are exactly what the law intended. Like other online age restrictions, the OSA obstructs way more than kids’ access to clearly adult sites. When fines are at stake, platforms will always default to overcensoring. So every user in the country is now faced with a choice: submit their most sensitive data for privacy-invasive analysis, or stay off of Reddit entirely. Which would you choose? 

Again, the fact that the OSA has forced Reddit, the “heart of the internet,” to overcensor user-generated content is noteworthy. Reddit has historically succeeded where many others have failed in safeguarding digital rights—particularly the free speech and privacy of its users. It may not be perfect, but Reddit has worked harder than many large platforms to defend Section 230, a key law in the US protecting free speech online. It was one of the first platforms to endorse the Santa Clara Principles, and it was the only platform to receive every star in EFF’s 2019 “Who Has Your Back” (Censorship Edition) report due to its unique approach to moderation, its commitment to notice and appeals of moderation decisions, and its transparency regarding government takedown requests. Reddit’s users are particularly active in the digital rights world: in 2012, they helped EFF and other advocates defeat SOPA/PIPA, a dangerous censorship law. Redditors were key in forcing members of Congress to take a stand against the bill, and were the first to declare a “blackout day,” a historic moment of online advocacy in which over a hundred thousand websites went dark to protest the bill. And Reddit is the only major social media platform where EFF doesn’t regularly share our work—because its users generally do so on their own. 

If a platform with a history of fighting for digital rights is forced to overcensor, how will the rest of the internet look if age verification spreads? Reddit’s attempts to comply with the OSA show the urgency of fighting these mandates on every front. 

We cannot accept these widespread censorship regimes as our new norm. 

Rollout Chaos: The Tech Doesn’t Even Work! 

In the days after the OSA became effective, backlash to the new age verification measures spread across the internet like wildfire as UK users made their hatred of these new policies clear. VPN usage in the UK soared, over 500,000 people signed a petition to repeal the OSA, and some shrewd users even discovered that video game face filters and meme images could fool Persona’s verification software. But these loopholes aren’t likely to last long, as we can expect the age-checking technology to continuously adapt to new evasion tactics. As good as they may be, VPNs cannot save us from the harms of age verification. 

In effect, the OSA and other age verification mandates like it will increase the risk of harm, not reduce it. 

Even when the workarounds inevitably cease to function and the age-checking procedures calcify, age verification measures still will not achieve their singular goal of protecting kids from so-called “harmful” online content. Teenagers will, uh, find a way to access the content they want. Instead of going to a vetted site like Pornhub for explicit material, curious young people (and anyone else who does not or cannot submit to age checks) will be pushed to the sketchier corners of the internet—where there is less moderation, more safety risk, and no regulation to prevent things like CSAM or non-consensual sexual content. In effect, the OSA and other age verification mandates like it will increase the risk of harm, not reduce it. 

If that weren’t enough, the slew of practical issues that have accompanied Reddit’s rollout also reveals the inadequacy of age verification technology to meet our current moment. For example, users reported various bugs in the age-checking process, like being locked out or asked repeatedly for ID despite complying. UK-based subreddit moderators also reported facing difficulties either viewing NSFW post submissions or vetting users’ post history, even when the particular submission or subreddit in question was entirely SFW. 

Taking all of this together, it is abundantly clear that age-gating the internet is not the solution to kids’ online safety. Whether due to issues with the discriminatory and error-prone technology, or simply because they lack either a government ID or personal device of their own, millions of UK internet users will be completely locked out of important social, political, and creative communities. If we allow age verification, we welcome new levels of censorship and surveillance with it—while further lining the pockets of big tech and the slew of for-profit age verification vendors that have popped up to fill this market void.

Americans, Take Heed: It Will Happen Here Too

The UK age verification rollout, chaotic as it is, is a proving ground for platforms that are looking ahead to implementing these measures on a global scale. In the US, there’s never been a better time to get educated and get loud about the dangers of this legislation. EFF has sounded this alarm before, but Reddit’s attempts to comply with the OSA show its urgency: age verification mandates are censorship regimes, and in the US, porn is just the tip of the iceberg.

US legislators have been disarmingly explicit about their intentions to use restrictions on sexually explicit content as a Trojan horse that will eventually help them censor all sorts of other perfectly legal (and largely uncontroversial) content. We’ve already seen them move the goalposts from porn to transgender and other LGBTQ+ content. What’s next? Sexual education materials, reproductive rights information, DEI or “critical race theory” resources—the list goes on. Under KOSA, which last session passed the Senate with an enormous majority but did not make it to the House, we would likely see results here similar to those we see in the UK under the OSA.

Nearly half of U.S. states have some sort of online age restrictions in place already, and the Supreme Court recently paved the way for even more age blocks on online sexual content. But Americans—including those under 18—still have a First Amendment right to view content that is not sexually explicit, and EFF will continue to push back against any legislation that expands the age mandates beyond porn, in statehouses, in courts, and in the streets. 

What can you do?

Call or email your representatives to oppose KOSA and any other federal age-checking mandate. Tell your state lawmakers, wherever you are, to oppose age verification laws. Make your voice heard online, and talk to your friends and family. Tell them about what’s happening to the internet in the UK, and make sure they understand what we all stand to lose—online privacy, security, anonymity, and expression—if the age-gated internet becomes a global reality. EFF is building a coalition to stop this enormous violation of digital rights. Join us today.

Molly Buckley

EFF to Court: Chatbot Output Can Reflect Human Expression

3 weeks 4 days ago

When a technology can have a conversation with you, it’s natural to anthropomorphize that technology—to see it as a person. It’s tempting to see a chatbot as a thinking, speaking robot, but this gives the technology too much credit. This can also lead people—including judges in cases about AI chatbots—to overlook the human expressive choices connected to the words that chatbots produce. If chatbot outputs had no First Amendment protections, the government could potentially ban chatbots that criticize the administration or reflect viewpoints the administration disagrees with.

In fact, chatbot output not only can reflect the expressive choices of its creators and users, but also implicates users’ right to receive information. That’s why EFF and the Center for Democracy and Technology (CDT) have filed an amicus brief in Garcia v. Character Technologies explaining how large language models work and the various kinds of protected speech at stake.

Among the questions in this case is the extent to which free speech protections extend to the creation, dissemination, and receipt of chatbot outputs. Our brief explains how the expressive choices of a chatbot developer can shape its output, such as during reinforcement learning, when humans are instructed to give positive feedback to responses that align with the scientific consensus around climate change and negative feedback for denying it (or vice versa). This chain of human expressive decisions extends from early stages of selecting training data to crafting a system prompt. A user’s instructions are also reflected in chatbot output. Far from being the speech of a robot, chatbot output often reflects human expression that is entitled to First Amendment protection.
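To make the reinforcement-learning point concrete, here is a toy sketch in Python. The names and the lookup table are entirely hypothetical—real systems train a reward model over a neural network—but the chain of human choices it illustrates is the same one the brief describes: people decide which outputs get rewarded, and that steers what the chatbot favors.

```python
# Toy sketch of how human feedback during reinforcement learning can
# steer which responses a chatbot comes to favor. Illustration only:
# a score table stands in for a trained reward model, and all names
# here are hypothetical.

def apply_feedback(scores, response, rating):
    """Raise or lower a response's score based on a human rating (+1 or -1)."""
    scores[response] = scores.get(response, 0.0) + rating
    return scores

def preferred(scores):
    """Return the response the tuned system would now favor."""
    return max(scores, key=scores.get)

# Two candidate answers to the same question; raters reward the one
# aligned with the guidance the developer chose to encode.
scores = {"answer_a": 0.0, "answer_b": 0.0}
apply_feedback(scores, "answer_a", +1)  # rater approves
apply_feedback(scores, "answer_b", -1)  # rater disapproves
print(preferred(scores))  # -> answer_a
```

Flip which answer the raters reward and the preference flips with it—the output tracks the humans' expressive decisions, not any judgment of the machine's own.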

In addition, the right to receive speech in itself is protected—even when the speaker would have no independent right to say it. Users have a right to access the information chatbots provide.

None of this is to suggest that chatbots cannot be regulated or that the harms they cause cannot be addressed. The First Amendment simply requires that those regulations be appropriately tailored to the harm to avoid unduly burdening the right to express oneself through the medium of a chatbot, or to receive the information it provides.

We hope that our brief will be helpful to the court as the case progresses, as the judge decided not to send the question up on appeal at this time.

Read our brief below.

Katharine Trendacosta

No Walled Gardens. No Gilded Cages.

3 weeks 4 days ago

Sometimes technology feels like a gilded cage, and you’re not the one holding the key. Most people can’t live off the grid, so how do we stop data brokers who track and exploit you for money? Tech companies that distort what you see and hear? Governments that restrict, censor, and intimidate? No one can do it alone, but EFF was built to protect your rights. With your support, we can take back control.

Join EFF

With 35 years of deep expertise and the support of our members, EFF is delivering bold action to solve the biggest problems facing tech users: suing the government for overstepping its bounds; empowering people and lawmakers to hold the line; and creating free, public interest software tools, guides, and explainers to make the web better.

EFF members enable thousands of hours of our legal work, activism, investigation, and software development for the public good. Join us today.

No Walled Gardens. No Gilded Cages.

Think about it: in the face of rising authoritarianism and invasive surveillance, where would we be without an encrypted web? Your security online depends on researchers, hackers, and creators who are willing to take privacy and free speech rights seriously. That's why EFF will eagerly protect the beating heart of that movement at this week's summer security conferences in Las Vegas. This renowned summit of computer hacking events—BSidesLV, Black Hat USA, and DEF CON—illustrates the key role a community can play in helping you break free of the trappings of technology and retake the reins.

For summer security week, EFF’s DEF CON 33 t-shirt design Beyond the Walled Garden by Hannah Diaz is your gift at the Gold Level membership. Look closer to discover this year’s puzzle challenge! Many thanks to our volunteer puzzlemasters jabberw0nky and Elegin for all their work.

A Token of Appreciation

Become a recurring monthly or annual Sustaining Donor this week and you'll get a numbered EFF35 Challenge Coin. Challenge coins follow a long tradition of offering a symbol of kinship and respect for great achievements—and EFF owes its strength to technology creators and users like you.

Our team is on a relentless mission to protect your civil liberties and human rights wherever they meet tech, but it’s only possible with your help.

Donate Today

Break free of tech’s walled gardens.

Aaron Jue

Blocking Access to Harmful Content Will Not Protect Children Online, No Matter How Many Times UK Politicians Say So

3 weeks 4 days ago

The UK is having a moment. In late July, new rules took effect that require all online services available in the UK to assess whether they host content considered harmful to children, and if so, these services must introduce age checks to prevent children from accessing such content. Online services are also required to change their algorithms and moderation systems to ensure that content defined as harmful, like violent imagery, is not shown to young people.

During the four years that the legislation behind these changes—the Online Safety Act (OSA)—was debated in Parliament, and in the two years since while the UK’s independent online regulator, Ofcom, devised the implementing regulations, experts from across civil society repeatedly flagged concerns about the impact of this law on both adults’ and children’s rights. Yet politicians in the UK pushed ahead and enacted one of the most contentious age verification mandates that we’ve seen.

The case of safety online is not solved through technology alone.

No one—no matter their age—should have to hand over their passport or driver’s license just to access legal information and speak freely. As we’ve been saying for many years now, the approach that UK politicians have taken with the Online Safety Act is reckless, short-sighted, and will introduce more harm to the children that it is trying to protect. Here are five reasons why:

Age Verification Systems Lead to Less Privacy 

Mandatory age verification tools are surveillance systems that threaten everyone’s rights to speech and privacy. To keep children out of a website or away from certain content, online services need to confirm the ages of all their visitors, not just children—for example by asking for government-issued documentation or by using biometric data, such as face scans, that are shared with third-party services like Yoti or Persona to estimate whether the user is over 18. This means that adults and children must all share their most sensitive and personal information with online services to access a website. 

Once this information is shared to verify a user's age, there’s no way for people to know how it's going to be retained or used by that company, including whether it will be sold or shared with even more third parties like data brokers or law enforcement. The more information a website collects, the more chances there are for that information to get into the hands of a marketing company, a bad actor, a state actor, or someone who has filed a legal request for it. If a website, or one of the intermediaries it uses, misuses or mishandles the data, the visitor might never find out. There is also a risk that this data, once collected, can be linked to other unrelated web activity, creating an aggregated profile of the user that grows more valuable as each new data point is added. 

As we argued extensively during the passage of the Online Safety Act, any attempt to protect children online should not include measures that require platforms to collect data or remove privacy protections around users’ identities. But with the Online Safety Act, users are being forced to trust that platforms (and whatever third-party verification services they choose to partner with) are guardrailing users’ most sensitive information—not selling it through the opaque supply chains that allow corporations and data brokers to make millions. The solution is not to come up with a more sophisticated technology, but to simply not collect the data in the first place.

This Isn’t Just About Safety—It’s Censorship

Young people should be able to access information, speak to each other and to the world, play games, and express themselves online without the government making decisions about what speech is permissible. But under the Online Safety Act, the UK government—with Ofcom—are deciding what speech young people have access to, and are forcing platforms to remove any content considered harmful. As part of this, platforms are required to build “safer algorithms” to ensure that children do not encounter harmful content, and introduce effective content moderation systems to remove harmful content when platforms become aware of it. 

Because the OSA threatens large fines or even jail time for any non-compliance, platforms are forced to over-censor content to ensure that they do not face any such liability. Reports are already showing the censorship of content that falls outside the parameters of the OSA, such as footage of police attacking pro-Palestinian protestors being blocked on X, the subreddit r/cider—yes, the beverage—asking users for photo ID, and smaller websites closing down entirely. UK-based organisation Open Rights Group are tracking this censorship with their tool, Blocked.

We know that the scope for so-called “harmful content” is subjective and arbitrary, but it also often sweeps up content like pro-LGBTQ+ speech. Policies like the OSA, that claim to “protect children” or keep sites “family-friendly,” often label LGBTQ+ content as “adult” or “harmful,” while similar content that doesn't involve the LGBTQ+ community is left untouched. Sometimes, this impact—the censorship of LGBTQ+ content—is implicit, and only becomes clear when the policies are actually implemented. Other times, this intended impact is explicitly spelled out in the text of the policies. But in all scenarios, legal content is being removed at the discretion of government agencies and online platforms, all under the guise of protecting children. 

Children deserve a more intentional and holistic approach to protecting their safety and privacy online.

People Do Not Want This 

Users in the UK have been clear in showing that they do not want this. Just days after age checks came into effect, VPN apps became the most downloaded on Apple's App Store in the UK. The BBC reported that one app, Proton VPN, reported an 1,800% spike in UK daily sign-ups after the age check rules took effect. A similar spike in searches for VPNs was evident in January when Florida joined the ever-growing list of U.S. states in implementing an age verification mandate on sites that host adult content, including pornography websites like Pornhub. 

Whilst VPNs may be able to disguise the source of your internet activity, they are not foolproof or a solution to age verification laws. Ofcom has already started discouraging their use, and with time, it will become increasingly difficult for VPNs to effectively circumvent age verification requirements as enforcement of the OSA adapts and deepens. VPN providers will struggle to keep up with these constantly changing laws to ensure that users can bypass the restrictions, especially as more sophisticated detection systems are introduced to identify and block VPN traffic. 

Some politicians in the Labour Party argued that a ban on VPNs will be essential to prevent users circumventing age verification checks. But banning VPNs, just like introducing age verification measures, will not achieve this goal. It will, however, function as an authoritarian control on accessing information in the UK. If you are navigating protecting your privacy or want to learn more about VPNs, EFF provides a comprehensive guide on using VPNs and protecting digital privacy—a valuable resource for anyone looking to use these tools.

 Alongside increased VPN usage, a petition calling for the repeal of the Online Safety Act recently hit more than 400,000 signatures. In its official response to the petition, the UK government said that it “has no plans to repeal the Online Safety Act, and is working closely with Ofcom to implement the Act as quickly and effectively as possible to enable UK users to benefit from its protections.” This is not good enough: the government must immediately treat the reasonable concerns of people in the UK with respect, not disdain, and revisit the OSA.

Users Will Be Exposed to Amplified Discrimination 

To check users' ages, three types of systems are typically deployed: age verification, which requires a person to prove their age and identity; age assurance, whereby users are required to prove that they are of a certain age or age range, such as over 18; or age estimation, which typically describes the process or technology of estimating ages to a certain range. The OSA requires platforms to check ages through age assurance to prove that those accessing platforms are over 18, but leaves the specific tool for measuring this at the platforms’ discretion. This may therefore involve uploading a government-issued ID, or submitting a face scan to an app that will then use a third-party platform to “estimate” your age.

From what we know about systems that use face scanning in other contexts, such as face recognition technology used by law enforcement, even the best technology is susceptible to mistakes and misidentification. Just last year, a legal challenge was launched against the Met Police after a community worker was wrongly identified and detained following a misidentification by the Met’s live facial recognition system. 

For age assurance purposes, we know that the technology at best has an error range of over a year, which means that users may risk being incorrectly blocked or locked out of content by erroneous estimations of their age—whether unintentionally or due to discriminatory algorithmic patterns that incorrectly determine people’s identities. These algorithms are not always reliable, and even if the technology somehow had 100% accuracy, it would still be an unacceptable tool of invasive surveillance that people should not have to be subject to just to access content that the government could consider harmful.
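As a rough illustration of why that error range matters, the toy simulation below assumes a uniform ±1.5-year estimation error around a user's true age—our own simplifying assumption, not any vendor's published error model—and measures how often an actual adult just over the 18 threshold would fail the check:

```python
# Toy simulation of an age-estimation check with an assumed +/-1.5-year
# error band. Purely illustrative: the error model and threshold are
# our assumptions, not any vendor's published figures.
import random

def estimated_age(true_age, rng, error=1.5):
    """Return a noisy age estimate within +/-error years of the true age."""
    return true_age + rng.uniform(-error, error)

def wrongly_blocked_rate(true_age, trials=10_000, threshold=18):
    """Fraction of checks that estimate an adult's age below the threshold."""
    rng = random.Random(0)  # fixed seed for reproducibility
    blocked = sum(
        estimated_age(true_age, rng) < threshold for _ in range(trials)
    )
    return blocked / trials

# An adult of 19 can still land under the threshold on some checks.
print(f"19-year-olds wrongly blocked: {wrongly_blocked_rate(19):.0%}")
```

Under this assumed error band, roughly one in six checks on a 19-year-old comes back under 18, while only users whose true age sits comfortably above the band pass reliably—exactly the kind of erroneous lockout described above.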

Not Everyone Has Access to an ID or Personal Device 

Many advocates of the ‘digital transition’ introduce document-based verification requirements or device-based age verification systems on the assumption that every individual has access to a form of identification or their own smartphone. But this is not true. In the UK, millions of people don’t hold a form of identification or own a personal mobile device, instead sharing with family members or using public devices like those at a library or internet cafe. Yet because age checks under the OSA involve checking a user’s age through government-issued ID documents or face scans on a mobile device, millions of people will be left excluded from online speech and will lose access to much of the internet. 

These are primarily lower-income or older people who are often already marginalized, and for whom the internet may be a critical part of life. We need to push back against age verification mandates like the Online Safety Act, not just because they make children less safe online, but because they risk undermining crucial access to digital services, eroding privacy and data protection, and limiting freedom of expression. 

The Way Forward 

The case of safety online is not solved through technology alone, and children deserve a more intentional and holistic approach to protecting their safety and privacy online—not this lazy strategy that causes more harm than it solves. Rather than weakening rights for already vulnerable communities online, politicians must acknowledge these shortcomings and explore less invasive approaches to protect all people from online harms. We encourage politicians in the UK to look into what is best, and not what is easy.

Paige Collings

EFF at the Las Vegas Security Conferences

3 weeks 4 days ago

It’s time for EFF’s annual journey to Las Vegas for the summer security conferences: BSidesLV, Black Hat USA, and DEF CON. Our lawyers, activists, and technologists are always excited to support this community of security researchers and tinkerers—the folks who push computer security forward (and somehow survive the Vegas heat in their signature black hoodies).  

As in past years, EFF attorneys will be on-site to assist speakers and attendees. If you have legal concerns about an upcoming talk or sensitive infosec research—during the Las Vegas conferences or anytime—don’t hesitate to reach out at info@eff.org. Share a brief summary of the issue, and we’ll do our best to connect you with the right resources. You can also learn more about our work supporting technologists on our Coders’ Rights Project page. 

Be sure to swing by the expo areas at all three conferences to say hello to your friendly neighborhood EFF staffers! You’ll probably spot us in the halls, but we’d love for you to stop by our booths to catch up on our latest work, get on our action alerts list, or become an EFF member! For the whole week, we’ll have our limited-edition DEF CON 33 t-shirt on hand—I can’t wait to see them take over each conference! 


EFF Staff Presentations

Ask EFF at BSides Las Vegas
At this interactive session, our panelists will share updates on critical digital rights issues and EFF's ongoing efforts to safeguard privacy, combat surveillance, and advocate for freedom of expression.
WHEN: Tuesday, August 5, 15:00
WHERE: Skytalks at the Tuscany Suites Hotel & Casino

Recording PCAPs from Stingrays With a $20 Hotspot
What if you could use Wireshark on the connection between your cellphone and the tower it's connected to? In this talk we present Rayhunter, a cell site simulator detector built on top of a cheap cellular hotspot. 
WHEN: Friday, August 8, 13:30
WHERE: DEF CON, LVCC - L1 - EHW3 - Track 1

Rayhunter Build Clinic
Come out and build EFF's Rayhunter! ($10 materials fee as an EFF donation)
WHEN: Friday, August 8 at 14:30
WHERE: DEF CON, Hackers.Town Community Space

Protect Your Privacy Online and on the Streets with EFF Tools
The Electronic Frontier Foundation (EFF) has been protecting your rights to privacy, free expression, and security online for 35 years! One important way we push for these freedoms is through our free, open source tools. We’ll provide an overview of how these tools work, including Privacy Badger, Rayhunter, Certbot, and Surveillance Self-Defense, and how they can help keep you safe online and on the streets.
WHEN: Friday, August 8 at 17:00
WHERE: DEF CON, Community Stage

Rayhunter Internals
Rayhunter is an open source project from EFF to detect IMSI catchers. In this follow-up to our main stage talk about the project, we will take a deep dive into the internals of Rayhunter. We will talk about the architecture of the project, what we have gained by using Rust, porting to other devices, how to jailbreak new devices, the design of our detection heuristics, open source shenanigans, and how we analyze files sent to us.
WHEN: Saturday, August 9, at 12:00
WHERE: DEF CON, Hackers.Town Community Space

Ask EFF at DEF CON 33
We're excited to answer your burning questions on pressing digital rights issues! Our expert panelists will offer brief updates on EFF's work defending your digital rights, before opening the floor for attendees to ask their questions. This dynamic conversation centers challenges DEF CON attendees actually face, and is an opportunity to connect on common causes.
WHEN: Saturday, August 9, at 14:30
WHERE: DEF CON, LVCC - L1 - EHW3 - Track 4

EFF Benefit Poker Tournament at DEF CON 33

The EFF Benefit Poker Tournament is back for DEF CON 33! Your buy-in is paired with a donation to support EFF’s mission to protect online privacy and free expression for all. Join us at the Planet Hollywood Poker Room as a player or spectator. Play for glory. Play for money. Play for the future of the web. 
WHEN: Friday, August 8, 2025 - 12:00-15:00
WHERE: Planet Hollywood Poker Room, 3667 Las Vegas Blvd South, Las Vegas, NV 89109

Beard and Mustache Contest at DEF CON 33

Yes, it's exactly what it sounds like. Join EFF at the intersection of facial hair and hacker culture. Spectate, heckle, or compete in any of four categories: Full Beard, Partial Beard, Moustache Only, or Freestyle (anything goes, so create your own facial apparatus!). Prizes! Donations to EFF! Beard oil! Get the latest updates.
WHEN: Saturday, August 9, 10:00-12:00
WHERE: DEF CON, Contest Stage (Look for the Moustache Flag)

Tech Trivia Contest at DEF CON 33

Join us for some tech trivia on Saturday, August 9 at 7:00 PM! EFF's team of technology experts has crafted challenging trivia about the fascinating, obscure, and trivial aspects of digital security, online rights, and internet culture. Competing teams will plumb the unfathomable depths of their knowledge, but only the champion hive mind will claim the First Place Tech Trivia Trophy and EFF swag pack. The second and third place teams will also win great EFF gear.
WHEN: Saturday, August 9, 19:00-22:00
WHERE: DEF CON, Contest Stage

Join the Cause!

Come find our table at BSidesLV (Middle Ground), Black Hat USA (back of the Business Hall), and DEF CON (Vendor Hall) to learn more about the latest in online rights, get on our action alert list, or donate to become an EFF member. We'll also have our limited-edition DEF CON 33 shirts available starting Monday at BSidesLV! These shirts have a puzzle incorporated into the design. Snag one online for yourself starting on Tuesday, August 5 if you're not in Vegas!

Join EFF

Support Security & Digital Innovation

Christian Romero

Digital Rights Are Everyone’s Business, and Yours Can Join the Fight!

3 weeks 4 days ago

Companies large and small are doubling down on digital rights, and we’re excited to see more and more of them join EFF. We’re first and always an organization that fights for users, so you might be asking: Why does EFF work with corporate donors, and why do they want to work with us?

SHOW YOUR COMPANY SUPPORTS A BETTER DIGITAL FUTURE

JOIN EFF TODAY

Businesses want to work with EFF for two reasons:

  1. They, their employees, and their customers believe in EFF’s values.
  2. They know that when EFF wins, we all win.

Customers and employees alike care about working with organizations they know share their values. And issues like data privacy, sketchy uses of surveillance, and free expression are pretty top of mind for people these days. Research shows that today’s working adults take philanthropy seriously, whether they’re giving organizations their money or their time. For younger generations (like the Millennial EFFer writing this blog post!) especially, feeling like a meaningful part of the fight for good adds to a sense of purpose and fulfillment. Given the choice to spend hard-earned cash with techno-authoritarians versus someone willing to take a stand for digital freedom: We’ll take option two, thanks.

When EFF wins, users win. Standing up for the ability to access, use, and build on technology means that a handful of powerful interests won’t have unfair advantages over everyone else. Whether it’s the fight for net neutrality, beating back patent trolls in court, protecting the right to repair and tinker, or pushing for decentralization and interoperability, EFF’s work can build a society that supports creativity and innovation; where established players aren’t allowed to silence the next generation of creators. Simply put: Digital rights are good for business!

The trust of EFF’s membership is based on 35 years of speaking truth to power, whether it’s on Capitol Hill or in Silicon Valley (and let’s be honest, if EFF was Big Tech astroturf, we’d drive nicer cars). EFF will always lead the work and invite supporters to join us, not the other way around. EFF will gratefully thank the companies who join us and offer employees and customers ways to get involved, too. EFF won’t take money from Google, Apple, Meta, Microsoft, Amazon, or Tesla, and we won’t endorse or sponsor a company, service, or product. Most importantly: EFF won’t alter the mission or the message to meet a donor’s wishes, no matter how much they’ve donated.

A few of the ways your team can support EFF:

  1.  Cash donations
  2. Sponsoring an EFF event
  3. Providing an in-kind product or service
  4. Matching your employees’ gifts
  5. Boosting our messaging

Ready to join us in the fight for a better future? Visit eff.org/thanks.

Tierney Hamilton

Data Brokers Are Ignoring Privacy Law. We Deserve Better.

3 weeks 5 days ago

Of the many principles EFF fights for in consumer data privacy legislation, one of the most basic is a right to access the data companies have about you. It’s only fair. So many companies collect information about us without our knowledge or consent. We at least should have a way to find out what they purport to know about our lives.

Yet a recent paper from researchers at the University of California, Irvine found that, of 543 data brokers in California’s data broker registry at the time of publication, 43 percent failed to even respond to requests to access data.

43 percent of registered data brokers in California failed to even respond to requests to access data, one study shows.

Let’s stop there for a second. That’s more than four in ten companies from an industry that makes its money from collecting and selling our personal information, ignoring one of our most basic rights under the California Consumer Privacy Act: the right to know what information companies have about us.

Such failures violate the law. If this happens to you, you should file a complaint with the California Privacy Protection Agency (CPPA) and the California Attorney General's Office.

This is particularly galling because it’s not easy to file a request in the first place. As these researchers pointed out, there is no streamlined process for these time-consuming requests. People often won’t have the time or energy to see them through. Yet when someone does make the effort to file a request, some companies still feel just fine ignoring the law and their customers completely.

Four in ten data brokers are leaving requesters on read, in violation of the law and our privacy rights. That’s not a passing grade in anyone’s book.

Without consequences to back up our rights, as this research illustrates, many companies will bank on not getting caught, or factor weak slaps on the wrist into the cost of doing business.

This is why EFF fights for bills that have teeth. For example, we demand that people have the right to sue for privacy violations themselves—what’s known as a private right of action. Companies hate this form of enforcement, because it can cost them real money when they flout the law.

When the CCPA started out as a ballot initiative, it had a private right of action, including to enforce access requests. But when the legislature enacted the CCPA (in exchange for the initiative’s proponents removing it from the ballot), corporate interests killed the private right of action in negotiations.

We encourage the California Privacy Protection Agency and the California Attorney General’s Office, which both have the authority to bring these companies to task under the CCPA, to look into these findings. Moving forward, we all have to continue to fight for better laws, to strengthen existing laws, and to call on states to enforce the laws on their books to respect everyone’s privacy. Data brokers must face real consequences for brazenly flouting our privacy rights.

Hayley Tsukayama
EFF's Deeplinks Blog: Noteworthy news from around the internet