Sen. Wyden Exposes Data Brokers Selling Location Data to Anti-Abortion Groups That Target Abortion Seekers

16 hours 26 minutes ago

This post was written by Jack Beck, an EFF legal intern

In a recent letter to the FTC and SEC, Sen. Ron Wyden (OR) details new information on data broker Near, which sold the location data of people seeking reproductive healthcare to anti-abortion groups. Near enabled these groups to send targeted ads promoting anti-abortion content to people who had visited Planned Parenthood and similar clinics.

In May 2023, the Wall Street Journal reported that Near was selling location data to anti-abortion groups. Specifically, the Journal found that the Veritas Society, a non-profit established by Wisconsin Right to Life, had hired ad agency Recrue Media. That agency purchased location data from Near and used it to target anti-abortion messaging at people who had sought reproductive healthcare.

The Veritas Society detailed the operation on its website (on a page that has since been taken down but was saved by the Internet Archive) and stated that it delivered over 14 million ads to people who visited reproductive healthcare clinics. These ads appeared on Facebook, Instagram, Snapchat, and other social media platforms.

When contacted by Sen. Wyden’s investigative team, Recrue staff admitted that the agency used Near’s website to literally “draw a line” around areas their client wanted to target. They drew these lines around reproductive healthcare facilities across the country, using location data purchased from Near to target visitors to 600 different Planned Parenthood locations. Sen. Wyden’s team also confirmed with Near that, until the summer of 2022, no safeguards were in place to protect the data privacy of people visiting sensitive places.

Moreover, as Sen. Wyden explains in his letter, Near was selling data to the government, though it claimed on its website to be doing no such thing. As of October 18, 2023, Sen. Wyden’s investigation found Near was still selling location data harvested from Americans without their informed consent.

Near’s invasion of our privacy shows why Congress and the states must enact privacy-first legislation that limits how corporations collect and monetize our data. We also need privacy statutes that prevent the government from sidestepping the Fourth Amendment by purchasing location information—as Sen. Wyden has proposed. Even the government admits this is a problem. Furthermore, as Near’s misconduct illustrates, safeguards must be in place to protect people in sensitive locations from being tracked.

This isn’t the first time we’ve seen data brokers sell information that can reveal visits to abortion clinics. We need laws now to strengthen privacy protections for consumers. We thank Sen. Wyden for conducting this investigation. We also commend the FTC’s recent bar on a data broker selling sensitive location data. We hope this represents the start of a longstanding trend.

Adam Schwartz

EFF to D.C. Circuit: The U.S. Government’s Forced Disclosure of Visa Applicants’ Social Media Identifiers Harms Free Speech and Privacy

20 hours ago

Special thanks to legal intern Alissa Johnson, who was the lead author of this post.

EFF recently filed an amicus brief in the U.S. Court of Appeals for the D.C. Circuit urging the court to reverse a lower court decision upholding a State Department rule that forces visa applicants to the United States to disclose their social media identifiers as part of the application process. If upheld, the district court ruling would have severe implications for free speech and privacy not just for visa applicants, but also for the people in their social media networks—millions, if not billions of people, given that the “Disclosure Requirement” applies to 14.7 million visa applicants annually.

Since 2019, visa applicants to the United States have been required to disclose social media identifiers they have used in the last five years to the U.S. government. Two U.S.-based organizations that regularly collaborate with documentary filmmakers around the world sued, challenging the policy on First Amendment and other grounds. A federal judge dismissed the case in August 2023, and plaintiffs filed an appeal, asserting that the district court erred in applying an overly deferential standard of review to plaintiffs’ First Amendment claims, among other arguments.

Our amicus brief lays out the privacy interests that visa applicants have in their public-facing social media profiles, the Disclosure Requirement’s chilling effect on the speech of both applicants and their social media connections, and the features of social media platforms like Facebook, Instagram, and X that reinforce these privacy interests and chilling effects.

Social media paints an alarmingly detailed picture of users’ personal lives, covering far more information than can be gleaned from a visa application. Although the Disclosure Requirement implicates only “public-facing” social media profiles, registering these profiles still exposes substantial personal information to the U.S. government because of the number of people impacted and the vast amounts of information shared on social media, both intentionally and unintentionally. Moreover, collecting data across social media platforms gives the U.S. government access to a wealth of information that may reveal more in combination than any individual question or post would alone. This risk is even further heightened if government agencies use automated tools to conduct their review—which the State Department has not ruled out and the Department of Homeland Security’s component Customs and Border Protection has already begun doing in its own social media monitoring program. Visa applicants may also unintentionally reveal personal information on their public-facing profiles, either due to difficulties in navigating default privacy settings within or across platforms, or through personal information posted by social media connections rather than the applicants themselves.

The Disclosure Requirement’s infringements on applicants’ privacy are further heightened because visa applicants are subject to social media monitoring not just during the visa vetting process, but even after they arrive in the United States. The policy also allows for public social media information to be stored in government databases for upwards of 100 years and shared with domestic and foreign government entities.  

Because of the Disclosure Requirement’s potential to expose vast amounts of applicants’ personal information, the policy chills First Amendment-protected speech of both the applicant themselves and their social media connections. The Disclosure Requirement allows the government to link pseudonymous accounts to real-world identities, impeding applicants’ ability to exist anonymously in online spaces. In response, a visa applicant might limit their speech, shut down pseudonymous accounts, or disengage from social media altogether. They might disassociate from others for fear that those connections could be offensive to the U.S. government. And their social media connections—including U.S. persons—might limit or sever online connections with friends, family, or colleagues who may be applying for a U.S. visa for fear of being under the government’s watchful eye.  

The Disclosure Requirement hamstrings the ability of visa applicants and their social media connections to freely engage in speech and association online. We hope that the D.C. Circuit reverses the district court’s ruling and remands the case for further proceedings.

Saira Hussain

Podcast Episode: Open Source Beats Authoritarianism

1 day 9 hours ago

What if we thought about democracy as a kind of open-source social technology, in which everyone can see the how and why of policy making, and everyone’s concerns and preferences are elicited in a way that respects each person’s community, dignity, and importance?

[Embedded audio player: https://player.simplecast.com/3269fca8-4236-4af6-b482-73e13b643b93] Privacy info: this embed will serve content from simplecast.com.

(You can also find this episode on the Internet Archive and on YouTube.)

This is what Audrey Tang has worked toward as Taiwan’s first Digital Minister, a position the free software programmer has held since 2016. She has taken the best of open source and open culture, and successfully used them to help reform her country’s government. Tang speaks with EFF’s Cindy Cohn and Jason Kelley about how Taiwan has shown that openness not only works but can outshine more authoritarian competitors, whose governments often lock up data.

In this episode, you’ll learn about:

  • Using technology including artificial intelligence to help surface our areas of agreement, rather than to identify and exacerbate our differences 
  • The “radical transparency” of recording and making public every meeting in which a government official takes part, to shed light on the policy-making process 
  • How Taiwan worked with civil society to ensure that no privacy and human rights were traded away for public health and safety during the COVID-19 pandemic 
  • Why maintaining credible neutrality from partisan politics and developing strong public and civic digital infrastructure are key to advancing democracy. 

Audrey Tang has served as Taiwan's first Digital Minister since 2016, by which time she already was known for revitalizing the computer languages Perl and Haskell, as well as for building the online spreadsheet system EtherCalc in collaboration with Dan Bricklin. In the public sector, she served on the Taiwan National Development Council’s open data committee and basic education curriculum committee and led the country’s first e-Rulemaking project. In the private sector, she worked as a consultant with Apple on computational linguistics, with Oxford University Press on crowd lexicography, and with Socialtext on social interaction design. In the social sector, she actively contributes to g0v (“gov zero”), a vibrant community focusing on creating tools for the civil society, with the call to “fork the government.”

Resources: 

What do you think of “How to Fix the Internet?” Share your feedback here.

Transcript

AUDREY TANG
In 2016, October, when I first became Taiwan's digital minister, I had no examples to follow because I was the first digital minister. And then it turns out that in traditional Mandarin, as spoken in Taiwan, digital, shu wei, means the same as “plural” - so more than one. So I'm also a plural minister, I'm minister of plurality. And so to kind of explain this word play, I wrote my job description as a prayer, as a poem. It's very short, so I might as well just quickly recite it. It goes like this:
When we see an internet of things, let's make it an internet of beings.
When we see virtual reality, let's make it a shared reality.
When we see machine learning, let's make it collaborative learning.
When we see user experience, let's make it about human experience.
And whenever we hear that a singularity is near, let us always remember the plurality is here.

CINDY COHN
That's Audrey Tang, the Minister of Digital Affairs for Taiwan. She has taken the best of open source and open culture, and successfully used them to help reform government in her country of Taiwan. When many other cultures and governments have been closing down and locking up data and decision making, Audrey has shown that openness not only works, but it can win against its more authoritarian competition.
I'm Cindy Cohn, the executive director of the Electronic Frontier Foundation.

JASON KELLEY
And I'm Jason Kelley, EFF's Activism Director. This is our podcast series, How to Fix the Internet.

CINDY COHN
The idea behind this show is we're trying to make our digital lives better. We spend so much time imagining worst-case scenarios, and jumping into the action when things inevitably do go wrong online but this is a space for optimism and hope.

JASON KELLEY
And our guest this week is one of the most hopeful and optimistic people we've had the pleasure of speaking with on this program. As you heard in the intro, Audrey Tang has an incredibly refreshing approach to technology and policy making.

CINDY COHN
We approach a lot of our conversations on the podcast using Lawrence Lessig’s framework of laws, norms, architecture and markets – and Audrey’s work as the Minister of Digital Affairs for Taiwan combines almost all of those pillars. A lot of the initiatives she worked on have touched on so many of the things that we hold dear here at EFF and we were just thrilled to get a chance to speak with her.
As you'll soon hear, this is a wide-ranging conversation but we wanted to start with the context of Audrey's day-to-day life as Taiwan's Minister of Digital Affairs.

AUDREY TANG
In a nutshell I make sure that every day I checkpoint my work so that everyone in the world knows not just the what of the policies made, but the how and why of policy making.
So for easily more than seven years, everything that I did in the process, not the result, of policymaking is visible to the general public. And that allows for requests, essentially - people can make suggestions on how to steer it in a different direction, instead of waiting until the end of the policymaking cycle, where they have to say, you know, we protest, please scratch this and start anew, and so on.
No, instead of protesting, we welcome demonstrators that demonstrate better ways to make policies, as evidenced during the pandemic, where we relied on civil society-led contact tracing and counter-pandemic methods, and for three years we never had a single day of lockdown.

JASON KELLEY
Something just popped into my head about the pandemic since you mentioned the pandemic. I'm wondering if your role shifted during that time, or if it sort of remained the same except to focus on a slightly different element of the job in some way.

AUDREY TANG
That's a great question. So entering the pandemic, I was the minister with a portfolio in charge of open government, social innovation and youth engagement. And during the pandemic, I assumed a new role, which is the cabinet Chief Information Officer. And so the cabinet CIO usually focuses on, for example, making tax paying easier, or use the same SMS number for all official communications or things like that.
But during the pandemic, I played a role like a Lagrange point, right? Between the gravity centers of privacy protection and social movement on one side, and protecting the economy, keeping TSMC running, on the other side. Whereas many countries - I would say everyone other than, say, Taiwan, New Zealand and a handful of other countries - assumed it would be a trade-off.
Like there's a dial you'll have to, uh, sacrifice some of the human rights, or you have to sacrifice some lives, right? A very difficult choice. We refuse to make such trade-offs.
So as the minister in charge of social innovation, I worked with the civil society leaders, who themselves are the privacy advocates, to design contact tracing systems instead of relying on Google or Apple or other companies to design those. And as cabinet CIO, whenever there was this very good idea, we made sure we turned it into production, making it national-level by the next Thursday. So there's this weekly iteration that takes the best ideas from the civil society and makes them work on a national level. And therefore, it is not just counter-pandemic, but also counter-infodemic. We've never had a single administrative takedown of speech during the pandemic. Yet we don't have an anti-vax political faction, for example.

JASON KELLEY
That's amazing. I'm hearing already a lot of, uh, things that we might want to look towards in the U.S.

CINDY COHN
Yeah, absolutely. I guess what I'd love to do is, you know, I think you're making manifest a lot of really wonderful ideas in Taiwan. So I'd like you to step back and you know, what does the world look like, you know, if we really embrace openness, we embrace these things, what does the bigger world look like if we go in this direction?

AUDREY TANG
Yeah, I think the main contribution that we made is that the authoritarian regimes for quite a while kept saying that they're more efficient, that for emerging threats, including pandemic, infodemic, AI, climate, whatever, top-down, takedown, lockdown, shutdowns are more effective. And when the world truly embraces democracy, we will be able to pre-bunk – not debunk, pre-bunk – this idea that democracy only leads to chaos and only authoritarianism can be effective. If we do more democracy more openly, then everybody can say, oh, we don't have to make those trade-offs anymore.
So, I think when the whole world embraces this idea of plurality, we'll have much more collaboration and much more diversity. We won't refuse diversity simply because it's difficult to coordinate.

JASON KELLEY
Since you mentioned democracy, I had heard that you have this idea of democracy as a social technology. And I find that really interesting, partly because all the way back in season one, we talked to the chief innovation officer for the state of New Jersey, Beth Noveck, who talked a lot about civic technology and how to facilitate public conversations using technology. So all of that is a lead-in to me asking this very basic question. What does it mean when you say democracy is a social technology?

AUDREY TANG
Yeah. So if you look at democracy as it's currently practiced, you'll see voting, for example, if every four years someone votes for among, say, four presidential candidates, that's just two bits of information uploaded from each individual and the latency is very, very long, right? Four years, two years, one year.
Again, when emerging threats happen - pandemic, infodemic, climate, and so on - uh, they don't work on a four-year schedule. They just come now, and you have to make something by next Thursday in order to counter it at its origin, right? So democracy, as currently practiced, suffers from a lack of bandwidth, so the preferences of citizens are not fully understood, and latency, which means that the iteration cycle is too long.
And so to think of democracy as a social technology is to think about ways that make the bandwidth wider. To make sure that people's preferences can be elicited in a way that respects each community's dignities, choices, context, instead of compressing everything into this one dimensional poll results.
We can free up the polls so that they become wiki surveys. Everybody can write those poll questions together. It can become co-creation. People can co-create a constitutional document for the next generation of AI that aligns itself to that document, and so on and so forth. And when we do this, like, literally every day, then also the latency shortens, and people can, like a radar, sense societal risks and come up with societal solutions in the here and now.
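
Tang's "bandwidth" point above can be made concrete with a little information theory. This editorial sketch just restates the arithmetic she gives (a vote among four candidates carries two bits, once every four years); the function name is illustrative:

```python
import math

def ballot_bits(num_candidates):
    """Bits of information conveyed by one vote among n candidates."""
    return math.log2(num_candidates)

bits = ballot_bits(4)   # a four-way presidential race: 2.0 bits per ballot
per_year = bits / 4     # spread over a four-year cycle: 0.5 bits per citizen per year
```

By contrast, a daily wiki survey elicits orders of magnitude more information per citizen, which is the gap Tang is pointing at.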

CINDY COHN
That's amazing. And I know that you've helped develop some of the actual tools. Or at least help implement them, that do this. And I'm interested in, you know, we've got a lot of technical people in our audience, like how do you build this and what are the values that you put in them? I'm thinking about things like Polis, but I suspect there are others too.

AUDREY TANG
Yes, indeed. Polis is quite well known in that it's a kind of social media that instead of polarizing people to drive so called engagement or addiction or attention, it automatically drives bridge making narratives and statements. So only the ideas that speak to both sides or to multiple sides will gain prominence in Polis.
And then the algorithm surfaces them to the top so that people understand: oh, despite our seeming differences that were magnified by mainstream and other antisocial media, there are common grounds. Like 10 years ago, when UberX first came to Taiwan, the Uber drivers, taxi drivers, and passengers all actually agreed on insurance, registration, and not undercutting existing meters. These are important things.
So instead of arguing about abstract ideas, like whether it's a sharing economy or an extractive gig economy, uh, we focus, again, on the here and now and settle the ideas in a way that's called rough consensus. Meaning that everybody - maybe not perfectly happy with it - can live with it.
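
Polis's actual pipeline (dimensionality reduction plus opinion clustering) is more involved, but the core idea Tang describes - a statement rises to the top only if every opinion group tends to agree with it - can be sketched roughly like this. The scoring rule below is an illustrative simplification, not Polis's real algorithm:

```python
def group_consensus(votes_by_group):
    """Toy 'group-informed consensus' score for one statement.

    votes_by_group: one list of votes per opinion group, each vote
    +1 (agree), -1 (disagree), or 0 (pass). Multiplying the per-group
    agreement rates means a single dissenting group drags the score
    down, so only bridging statements score highly.
    """
    score = 1.0
    for votes in votes_by_group:
        agrees = sum(1 for v in votes if v == 1)
        # Laplace-smoothed agreement rate within this group
        score *= (agrees + 1) / (len(votes) + 2)
    return score

# Two opinion groups: a statement both groups mostly support outranks
# a polarizing one, even though the polarizing one has more total agrees.
bridging = group_consensus([[1, 1, 1, -1], [1, 1, -1, 1]])
divisive = group_consensus([[1, 1, 1, 1], [-1, -1, -1, 1]])
assert bridging > divisive
```

Flipping the objective this way - rewarding cross-group agreement instead of engagement - is the design choice Cindy picks up on in the next exchange.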

CINDY COHN
I just think they're wonderful and I love the flipping of this idea of algorithmic decision making such that the algorithm is surfacing places of agreement, and I think it also does some mapping as well about places of agreement instead of kind of surfacing the disagreement, right?
And that, that is really, algorithms can be programmed in either direction. And the thinking about how do you build something that brings stuff together to me is just, it's fascinating and doubly interesting because you've actually used it in the Uber example, and I think you've used some version of that also back in the early work with the Sunflower movement as well.

AUDREY TANG
Yeah, the Uber case was 2015, and the Sunflower Movement was, uh, 2014, and in 2014, the Ma Ying-jeou administration at the time, um, had an approval rate among citizens of less than 10%, which means that anything the administration said, the citizens ultimately didn't believe, right? And so instead of relying on traditional partisan politics, which totally broke down circa 2014, Ma Ying-jeou worked with people that came from the tech communities and named, uh, Simon Chang from Google, first as vice premier and then as premier. And then in 2016, when the Tsai Ing-wen administration began, again, the premier Lin Chuan was also independent. So we are, after 2014-15, at a new phase of our democracy where it becomes normal for me to say, oh, I don't belong to any parties but I work with all the parties. That credible neutrality, this kind of bridge making across parties, becomes something people expect the administration to do. And again, we don't see that much of this kind of bridge-making action in other advanced democracies.

CINDY COHN
You know, I had this question and, and I know that one of our supporters did as well, which is, what's your view on, you know, kind of hackers? And, and by saying hackers here, I mean people with deep technical understanding. Do you think that they can have more impact by going into government than staying in private industry? Or how do you think about that? Because obviously you made some decisions around that as well.

AUDREY TANG
So my job description basically implies that I'm not working for the government. I'm just working with the government. And not for the people, but with the people. And this is very much in line with the internet governance technical community, right? The technical community within the internet governance communities kind of places ourselves as a hub between the public sector, the private sector, even the civil society, right?
So, the dot net suffix is something else. It is something that includes dot org, dot com, dot edu, dot gov, and even dot military, together into a shared fabric so that people can find rough consensus and running code, regardless of which sector they come from. And I think this is the main gift that the hacker community gives to modern democracy, is that we can work on the process, but the process or the mechanism naturally fosters collaboration.

CINDY COHN
Obviously whenever you can toss rough consensus and running code into a conversation, you've got our attention at EFF because I think you're right. And, and I think that the thing that we've struggled with is how to do this at scale.
And I think the thing that's so exciting about the work that you're doing is that you really are doing a version of transparency, rough consensus, running code, and finding commonalities at a scale that I would say many people weren't sure was possible. And that's what's so exciting about what you've been able to build.

JASON KELLEY
I know that before you joined with the government, you were a civic hacker involved in something called gov zero. And I'm wondering, maybe you can talk a little bit about that and also help people who are listening to this podcast think about ways that they can sort of follow your path. Not necessarily everyone can join the government to do these sorts of things, but I think people would love to implement some of these ideas and know more about how they could get to the position to do so.

AUDREY TANG
Collaborative diversity works not just in the dot gov, but if you're working in a large enough dot org or dot com, it all works the same, right? When I first discovered the World Wide Web, I learned about image tags, and the first image tag that I put up was the Blue Ribbon campaign. And it was actually about unifying the concerns of not just librarians, but also the hosting companies and really everybody, right, regardless of their suffix. We saw their webpages turning black with this prominent blue ribbon at the center. So by making the movement fashionable across sectors, you don't have to work in the government in order to make a change. Just open source your code, and somebody in the administration who's also a civic hacker will notice and just adapt, fork, or merge your code back.
And that's exactly how Gov Zero works. In 2012 a bunch of civic hackers decided that they've had enough with PDF files that are just image scans of budget descriptions, or things like that, which makes it almost impossible for average citizens to understand what's going on with the Ma Ying-jeou administration. And so, they set up forked websites.
So for each website, something dot gov dot tw, the civic hackers register something dot g0v dot tw, which looks almost the same. So, you visit a regular government website, you change your O to a zero, and this domain hack ensures that you're looking at a shadow government versions of the same website, except it's on GitHub, except it’s powered by open data, except there's real interactions going on and you can actually have a conversation about any budget item around this visualization with your fellow civic hackers.
And many of those projects in Gov Zero became so popular that the administration, the ministries finally merged back their code so that if you go to the official government website, it looks exactly the same as the civic hacker version.
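
The O-to-zero domain hack Tang describes amounts to a one-character hostname rewrite. A toy sketch, with illustrative hostnames (only the `gov`-to-`g0v` pattern itself comes from the conversation):

```python
def g0v_mirror(hostname):
    """Map an official *.gov.tw hostname to its g0v.tw shadow version,
    replacing the letter 'o' in 'gov' with a zero."""
    suffix = ".gov.tw"
    if hostname.endswith(suffix):
        return hostname[: -len(suffix)] + ".g0v.tw"
    return hostname  # not a gov.tw site: leave it unchanged

# e.g. a hypothetical budget site and its civic-hacker counterpart
assert g0v_mirror("budget.gov.tw") == "budget.g0v.tw"
```

The cleverness is that a citizen can discover the shadow site just by editing the URL bar, with no directory or search step in between.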

CINDY COHN
Wow. That is just fabulous. And for those who might be a little younger, the Blue Ribbon Campaign was an early EFF campaign where websites across the internet would put a blue ribbon up to demonstrate their commitment to free speech. And so I adore that that was one of the inspirations for the kind of work that you're doing now. And I love hearing these recent examples as well, that this is something that really you can do over and over again.

JASON KELLEY
Let’s pause for just a moment to say thank you to our sponsor. “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.

TIME magazine recently featured Audrey Tang as one of the 100 most influential people in AI and one of the projects they mentioned is Alignment Assemblies, a collaboration with the Collective Intelligence Project policy organization that employs a chatbot to help enable citizens to weigh in on their concerns around AI and the role it should play.

AUDREY TANG
So it started as just a Polis survey of the leaders at the Summit for Democracy and AI labs and so on, on how exactly their concerns are bridge-worthy when it comes to the three main values identified by the Collective Intelligence Project, which are participation, progress and safety. Because at the time, because of GPT-4 and its effect on everybody's mind, we heard a lot of strong trade-off arguments: like, to maximize safety, we have to, I don't know, restrict GPU purchasing across the world to put a cap on progress; or that to make open source possible, we must give up the idea of the AIs aligning themselves, and instead have uncensored models be like personal assistants, so that everybody has one and people become inoculated against deepfakes, because everybody can very easily deepfake, and so on.
And we also heard that maybe internet communication will be taken over by deepfakes, and so we will have to reintroduce some sort of real-name internet, because otherwise everybody will be a bot on the internet, and so on. So all these ideas really pushed the Overton window, right? Because before generative AI, these ideas were considered fringe.
And suddenly, at the end of March this year, those ideas again gained prominent ground. So using Polis and using TalkToTheCity and other tools, we quickly mapped an actually overlapping consensus. So regardless of which value you come from, people generally understand that if we don't tackle the short term risks - the interactive deepfakes, the persuasion and addiction risks, and so on - then we won't even coordinate enough to live together to see the coordination around the extinction risks a decade or so down the line, right?
So we have to focus on the immediate risks first, and that led to the safe dot ai joint statement, which I signed, and also the Mozilla open and safety joint statement which I signed and so on.
So the bridge-making AI actually enabled a sort of deep canvassing where I can take all the sides and then make the narratives that bridge the three very different concerns. So it's not a trilemma, but rather they reinforce each other mutually. And so in Taiwan, a surprising consensus that we got from the Polis conversations and the two face-to-face day-long workshops was that people in Taiwan want the Taiwanese government to pioneer this use of trustworthy AI.
So instead of the private sector producing the first experiences, they want the public servants to exercise their caution, of course, but also to use gen AI in the public service. But with one caveat: this must be public code. That is to say, it should be free software, open source; the way it integrates into decision making should be an assistive role; and everything needs to be meticulously documented so the civil society can replicate it on their own personal computers, and so on. And I think that's quite insightful. And therefore, we're actually doubling down on the societal evaluation and certification. And we're setting up a center for that at the end of this year.

CINDY COHN
So what are some of the lessons and things that you've learned in doing this in Taiwan that you think, you know, countries around the world or people around the world ought to take back and, and think about how they might implement it?
Are there pitfalls that you might want to avoid? Are there things that you think really worked well that people ought to double down on?

AUDREY TANG
I think it boils down to two main observations. The first one is that credible neutrality and alignment with the career public service is very, very important. The political parties come and go, but a career public service is very aligned with the civic hackers' kind of thinking because they maintain the mechanism.
They want the infrastructure to work and they want to serve people who belong to different political parties. It doesn't matter, because that's what a public service does. It serves the public. And so for the first few years of the Gov Zero movement, the projects found not just natural allies in the career public service, but also the credibly neutral institutions in our society.
For example, our National Academy, which doesn't report to the ministers but rather directly to the president, is widely seen as credibly neutral. And so civil society organizations can play such a role equally effectively if they work directly with the people, not just for the policy think tanks and so on.
So one good example may be, like, Consumer Reports in the U.S., or National Public Radio, and so on. So, basically, these are the mediators that are very similar to us, the civic hackers, and we need to find allies in them. So this is the first observation. And the second observation is that you can turn any crisis that urgently needs clarity into an opportunity to build future mechanisms that work better.
That works if you have civil society's trust in it, and the best way to win trust is to give trust. So by simply giving everyone, including the opposition party, the real-time API of the open data, if you make a critique of our policy, well, you have the same data as we do. So patches welcome, send us pull requests, and so on. This takes what used to be a zero-sum or negative-sum dynamic in politics, thanks to an emergency like a pandemic or infodemic, and turns it into a co-creation opportunity, and the resulting infrastructure becomes so legitimate that no political party will dismantle it. So it becomes another part of the political institution.
So having this idea of digital public infrastructure, and asking the parliament to give it infrastructure money and investment, just like building parks and roads and highways: this is also super important.
So when you have a competent society, when we focus not just on the literacy but on the competence of everyday citizens, they can contribute to public infrastructures through civic infrastructures. So credible neutrality on one hand, and public and civic infrastructure on the other, I think these two are the most fundamental, but also the easiest to practice, ways to introduce this plurality idea to other polities.

CINDY COHN
Oh, I think these are great ideas. And it reminds me a little of what we learned when we started doing electronic voting work at EFF. We learned that we needed to really partner with the people who run elections.
We were aligned that all of us really wanted to make sure that the person with the most votes was actually the person who won the election. But we started out a little adversarial, and we really had to learn to flip that around. Now that's something that our friends at Verified Voting have really figured out, and they have built some strong partnerships. But I suspect in your case it could have been a little annoying to officials that you were creating these shadow websites. I wonder, did it take a little bit of a conversation to flip them around to the situation in which they embraced it?

AUDREY TANG
I think the main intervention that I personally made, back in the days when I ran MoeDict, the Ministry of Education Dictionary project, in the Gov Zero movement, was that we very prominently said that although we reuse all the so-called copyright-reserved data from the Ministry of Education, we relinquish all our copyright under the then very new Creative Commons Zero, so that they cannot say that we're stealing any of the work, because obviously we're giving everything back to the public.
So by serving the public in an even more prominent way than the public service, we make ourselves not just natural allies, but kind of reverse mentors of the young people who work with cabinet ministers. And because we serve the public better in some ways, they can just take the entire website design, the entire Unicode interoperability, standards conformance, accessibility and so on, and simply tell their vendors, you know, you can merge it. You don't have to pay these folks a dime. And naturally the service improves, and they get praise from the press and so on. And that fuels this virtuous cycle of collaboration.

JASON KELLEY
One thing that you mentioned at the beginning of our conversation that I would love to hear more about is the idea of radical transparency. Can you talk about how that shows up in your workflow in practice every day? Like, do you wake up and have a cabinet meeting and record it and transcribe it and upload it? How do you find time to do all that? What is the actual process?

AUDREY TANG
Oh, I have staff, of course. And also, nowadays, language models. So the proofreading language models are very helpful. And I actually train my own language models, because the pre-training of all the leading large language models already read from the seven years or so of public transcripts that I published.
So they actually know a lot about me. In fact, when facilitating the chatbot conversations, one of the more powerful prompts we discovered was simply: facilitate this conversation in the manner of Audrey Tang. And the language model actually knows what to do, because it has seen so many facilitative transcripts.

CINDY COHN
Nice! I may start doing that!

AUDREY TANG
It's a very useful elicitation prompt. And so I train my local language model. My emails, especially the English ones, are all drafted by the local model. And it has no privacy concern, because it runs in airplane mode. The entire fine-tuning and inference, everything, is done locally, and so while it does learn from my emails and so on, I always read fully before hitting send.
But this integration of language models into personal computing already saves, I would say, 90 percent of my time during daily chores, like proofreading, checking transcripts, replying to emails and things like that. And so one of the main arguments we make in the cabinet is that this kind of use of what we call local AI, edge AI, or community open AI is actually better for discovering the vulnerabilities and flaws and so on, because the public service has a duty to ensure accuracy, and what better way to ensure the accuracy of language model systems than integrating them into the flow of work in a way that doesn't compromise privacy and personal data protection? And so, yeah, AI is a great time saver, and we're also aligning AI as we go.
So for the other ministries that want to learn from this radical transparency mechanism and so on, we almost always sell it as a more secure and time-saving device. And then once they adopt it, they see the usefulness of getting more public input and having a language model digest the collective inputs and respond to the people in the here and now.
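The workflow Audrey describes, an on-device model drafting replies with a human reading every draft in full before it is sent, can be sketched roughly as below. This is a hypothetical illustration of the shape of the loop (local-only inference, human approval gate), not her actual setup: `local_generate` is a stand-in for whatever offline inference runtime is used, stubbed here so the sketch is self-contained.

```python
# Sketch of a "local AI" email-drafting loop: an offline model drafts, a
# human approves. `local_generate` is a placeholder for a real on-device
# runtime; in a real setup it would call a locally hosted model with no
# network access, so the email text never leaves the machine.

def local_generate(prompt: str) -> str:
    # Stubbed inference for illustration only.
    return "Thank you for your note. I will follow up this week."

def draft_reply(incoming_email: str, style_notes: str) -> str:
    """Assemble a prompt from the incoming mail plus personal style notes
    (fine-tuned or retrieved locally), then draft with the local model."""
    prompt = (
        f"Style notes: {style_notes}\n"
        f"Incoming email:\n{incoming_email}\n"
        "Draft a courteous reply:"
    )
    return local_generate(prompt)

def send_if_approved(draft: str, approve) -> bool:
    # The human stays in the loop: nothing is sent without a full read-through.
    if approve(draft):
        print("sending:", draft)
        return True
    print("held for revision")
    return False

draft = draft_reply("Can we meet about the open data API?", "concise, friendly")
send_if_approved(draft, approve=lambda d: len(d) > 0)
```

The design point is the approval gate: because inference is local and the human reads the whole draft, the downside of a wrong draft lands on the person best placed to catch and correct it, which is the "assistive intelligence" framing discussed below.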

CINDY COHN
Oh, that is just wonderful because I do know that when you start talking with public servants about more public participation, often what you get is, Oh, you're making my job harder. Right? You're making more work for me. And, and what you've done is you've kind of been able to use technology in a way that actually makes their job easier. And I think the other thing I just want to lift up in what you said, is how important it is that these AI systems that you're using are serving you. And it's one of the things we talk about a lot about the dangers of AI systems, which is, who bears the downside if the AI is wrong?
And when you're using a service that is air-gapped from the rest of the internet, and it is largely being used to serve you in what you're doing, then the downside of it being wrong doesn't fall on, you know, the person who doesn't get bail. It's on you, and you're in the best position to correct it, actually recognize that there's a problem, and make it better.

AUDREY TANG
Exactly. Yeah. So I call these AI systems assistive intelligence, after assistive technology, because it empowers my dignity, right? I have this assistive tech, which is a pair of eyeglasses. It's very transparent, and if I see things wrong after putting on those eyeglasses, nobody blames the eyeglasses.
It's always the person that is empowered by the eyeglasses. But if instead I wear not eyeglasses but those VR devices that consume all the photons, upload them to the cloud for some very large corporation to calculate and then project back to my eyes, maybe with some advertisements in it and so on, then it's very hard to tell whether the decision making falls on me or on those intermediaries that basically block my eyesight and just present me an alternate reality. So I always prefer things that are like eyeglasses, or bicycles for that matter, that someone can repair themselves without violating an NDA or paying $3 million in license fees.

CINDY COHN
That's great. And open source for the win again there. Yeah.

AUDREY TANG
Definitely.

CINDY COHN
Yeah, well thank you so much, Audrey. I tell you, this has been kind of like a breath of fresh air, I think, and I really appreciate you giving us a glimpse into a world in which, you know, the values that I think we all agree on are actually being implemented, and implemented, as you said, in a way that scales and makes things better for ordinary people.

AUDREY TANG
Yes, definitely. I really enjoy the questions as well. Thank you so much. Live long and prosper.

JASON KELLEY
Wow. A lot of the time we talk to folks and it's hard to get to a vision of the future that we feel positive about. And this was the exact opposite. I have rarely felt more positively about the options for the future and how we can use technology to improve things and this was just - what an amazing conversation. What did you think, Cindy?

CINDY COHN
Oh I agree. And the thing that I love about it is, she’s not just positing about the future. You know, she’s telling us stories that are 10 years old about how they fixed things in Taiwan. You know, the Uber story and some of the other stories of the Sunflower movement. She didn't just, like, show up and say the future's going to be great. She's not just dreaming. They're doing.

JASON KELLEY
Yeah. And that really stood out to me in some of the things that I expected to get more theoretical answers to. Like, what do you mean when you say democracy is a technology? And the answer is quite literally that democracy suffers from a lack of bandwidth and from high latency, and that the speed at which individuals communicate with the government can be increased in the same way that we can increase bandwidth. It was just such a concrete way of thinking about it.
And another concrete example was, you know, how do you get involved in something like this? And she said, well, we just basically forked the website of the government with a slightly different domain and put up better information until the government was like, okay, fine, we'll just incorporate it. These are such concrete things that people can sort of understand about this. It's really amazing.

CINDY COHN
Yeah, the other thing I really liked was pointing out how, you know, making government better and work for people is really one of the ways that we counter authoritarianism. She said one of the arguments in favor of authoritarianism is that it's more efficient, and it can get things done faster than a messy, chaotic, democratic process.
And she said, well, you know, we just fixed that, so that we created systems in which democracy was more efficient than authoritarianism. And she talked a lot about the experience they had during COVID, and the result of that being that they didn't have a huge misinformation problem or a huge anti-vax community in Taiwan, because the government worked.

JASON KELLEY
Yeah, that's absolutely right, and it's so refreshing to see that there are models that we can look toward, right? I mean, it feels like we're constantly sort of getting things wrong, and this was just such a great way to say, oh, here's something we can actually do that will make things better in this country or in other countries.
Another point that was really concrete was the technology that is a way of twisting algorithms around instead of surfacing disagreements, surfacing agreements. The Polis idea and ways that we can make technology work for us. There was a phrase that she used which is thinking of algorithms and other technologies as assistive. And I thought that was really brilliant. What did you think about that?

CINDY COHN
I really agree. I think that, you know, building systems that can surface agreement as opposed to doubling down on disagreement seems so obvious in retrospect, and this open source technology, Polis, has been doing it for a while. But I think that we really do need to think about how we build systems that help us build toward agreement and a shared view of how our society should be, as opposed to feeding polarization. I think this is a problem on everyone's mind.
And, when we go back to Larry Lessig's four pillars, here's actually a technological way to surface agreement. Now, I think Audrey's using all of the pillars. She's using law for sure. She's using norms for sure, because they're creating a shared norm around higher bandwidth democracy.
But really you know in her heart, you can tell she's a hacker, right? She's using technologies to try to build this, this shared world and, and it just warms my heart. It's really cool to see this approach and of course, radical openness as part of it all being applied in a governmental context in a way that really is working far better than I think a lot of people believe could be possible.

JASON KELLEY
Thanks for joining us for this episode of How to Fix the Internet.
If you have feedback or suggestions, we'd love to hear from you. Visit eff.org/podcast and click on listener feedback. While you're there, you can become a member, donate, maybe pick up some merch, and just see what's happening in digital rights this week and every week.
We’ve got a newsletter, EFFector, as well as social media accounts on many, many, many platforms you can follow.
This podcast is licensed Creative Commons Attribution 4.0 International, and includes music licensed Creative Commons Attribution 3.0 Unported by their creators. In this episode you heard reCreation by airtone, Kalte Ohren by Alex featuring starfrosch and Jerry Spoon, and Warm Vacuum Tube by Admiral Bob featuring starfrosch.
You can find links to their music in our episode notes, or on our website at eff.org/podcast.
Our theme music is by Nat Keefe of BeatMower with Reed Mathis.
How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology.
I hope you’ll join us again soon. I’m Jason Kelley.

CINDY COHN
And I’m Cindy Cohn.

Josh Richman

EFF Statement on Nevada's Attack on End-to-End Encryption

1 day 21 hours ago

EFF learned last week that the state of Nevada is seeking an emergency order prohibiting Meta from rolling out end-to-end encryption in Facebook Messenger for all users in the state under the age of 18. The motion for a temporary restraining order is part of a lawsuit by the state Attorney General alleging that Meta’s products are deceptively designed to keep users addicted to the platform. While we regularly fight legal attempts to limit social media access, which are primarily based on murky evidence of its effects on different groups, blocking minors’ use of end-to-end encryption would be entirely counterproductive and just plain wrong.

Encryption is the most vital means we have to protect privacy, which is especially important for young people online. Yet in the name of protecting children, Nevada seems to be arguing that merely offering encryption on a social media platform that Meta knows has been used by criminals is itself illegal. This cannot be the law; in practice it would let the state prohibit all platforms from offering encryption, and such a ruling would raise serious constitutional concerns. Lawsuits like this also demonstrate the risks posed by bills like EARN IT and Stop CSAM that are now pending before Congress: state governments already are trying to eliminate encryption for all of us, and these dangerous bills would give them even more tools to do so.

EFF plans to speak up for users in the Nevada proceeding and fight this misguided effort to prohibit encryption.  Stay tuned.

Andrew Crocker

EFF Urges Ninth Circuit to Reinstate X’s Legal Challenge to Unconstitutional California Content Moderation Law

4 days 20 hours ago

The Electronic Frontier Foundation (EFF) urged a federal appeals court to reinstate X’s lawsuit challenging a California law that forces social media companies to file reports to the state about their content moderation decisions, and with respect to five controversial issues in particular—an unconstitutional intrusion into platforms’ right to curate hosted speech free of government interference.

While we are enthusiastic proponents of transparency and have worked, through the Santa Clara Principles and otherwise, to encourage online platforms to provide information to their users, we see a clear threat in such state mandates. Indeed, the Santa Clara Principles themselves warn against governments' use of these voluntary standards as mandates. California’s law is especially concerning, since it appears aimed at coercing social media platforms to more actively moderate user posts.

In a brief filed with the U.S. Court of Appeals for the Ninth Circuit, we asserted—as we have repeatedly in the face of state mandates around the country about what speech social media companies can and cannot host—that allowing California to interject itself into platforms’ editorial processes, in any form, raises serious First Amendment concerns.

At issue is California A.B. 587, a 2022 law requiring large social media companies to semiannually report to the state attorney general detailed information about the content moderation decisions they make and, in particular, with respect to hot button issues like hate speech or racism, extremism or radicalization, disinformation or misinformation, harassment, and foreign political interference.

A.B. 587 requires companies to report “detailed descriptions” of their content moderation practices generally and for each of these categories, and also to report detailed information about all posts flagged as belonging to any of those categories, including how content in these categories is defined, how it was flagged, how it was moderated, and whether the action was appealed. Companies can be fined up to $15,000 a day for failing to comply.

X, the social media company formerly known as Twitter, sued to overturn the law, arguing, correctly, that it violates the company’s First Amendment right against being compelled to speak. A federal judge declined to put the law on temporary hold and dismissed the lawsuit.

We agree with X and urge the Ninth Circuit to reverse the lower court. The law was intended to be, and is operating as, an informal censorship scheme to pressure online intermediaries to moderate user speech, which the First Amendment does not allow.

It’s akin to requiring a state attorney general or law enforcement to be able to listen in on editorial board meetings at the local newspaper or TV station, a clear interference with editorial freedom. The Supreme Court has consistently upheld this general principle of editorial freedom in a variety of speech contexts. There shouldn’t be a different rule for social media.

From a legal perspective, the issue before the court is what degree of First Amendment scrutiny should be used to analyze the law. The district court found that the law need only be justified and not unduly burdensome to comply with, a low level of review known as Zauderer scrutiny that is reserved for compelled factual and noncontroversial commercial speech. In our brief, we argue that, as a law that both intrudes upon editorial freedom and disfavors certain categories of speech, A.B. 587 must survive the far more rigorous strict First Amendment scrutiny. Our brief sets out several reasons why strict scrutiny should apply.

Our brief also distinguishes A.B. 587’s speech compulsions from ones that do not touch the editorial process such as requirements that companies disclose how they handle user data. Such laws are typically subject to an intermediate level of scrutiny, and EFF strongly supports such laws that can pass this test.

A.B. 587 says X and other social media companies must report to the California Attorney General whether and how they curate disfavored and controversial speech, and then adhere to those statements or face fines. As a practical matter, this requirement is unworkable: content moderation policies are highly subjective, constantly evolving, and subject to numerous influences.

And as a matter of law, A.B. 587 interferes with platforms’ constitutional right to decide whether, how, when, and in what way to moderate controversial speech. The law is a thinly veiled attempt to coerce sites to remove content the government doesn’t like.

We hope the Ninth Circuit agrees that’s not allowed under the First Amendment.

David Greene

EFF Opposes California Initiative That Would Cause Mass Censorship

4 days 23 hours ago

In recent years, lots of proposed laws purport to reduce “harmful” content on the internet, especially for kids. Some have good intentions. But the fact is, we can’t censor our way to a healthier internet.

When it comes to online (or offline) content, people simply don’t agree about what’s harmful. And people make mistakes, even in content moderation systems that have extensive human review and appropriate appeals. The systems get worse when automated filters are brought into the mix–as increasingly occurs, when moderating content at the vast scale of the internet.

Recently, EFF weighed in against an especially vague and poorly written proposal: California Ballot Initiative 23-0035, written by Common Sense Media. It would allow plaintiffs to sue an online information provider for damages of up to $1 million if it violates “its responsibility of ordinary care and skill to a child.”

We sent a public comment to California Attorney General Rob Bonta regarding the dangers of this wrongheaded proposal. While the AG’s office does not typically take action for or against ballot initiatives at this stage of the process, we wanted to register our opposition to the initiative as early as we could.

Initiative 23-0035 would result in broad censorship via a flood of lawsuits claiming that all manner of content online is harmful to a single child. While it is possible for children (and adults) to be harmed online, Initiative 23-0035’s vague standard, combined with extraordinarily large statutory damages, will severely limit access to important online discussions for both minors and adults. Many online platforms will censor user content in order to avoid this legal risk.

The following are just a few of the many areas of culture, politics, and life where people have different views of what is “harmful,” and where this ballot initiative thus could cause removal of online content:

  • Discussions about LGBTQ life, culture, and health care.
  • Discussions about dangerous sports like tackle football, e-bikes, or sport shooting.
  • Discussions about substance abuse, depression, or anxiety, including conversations among people seeking treatment and recovery.

In addition, the proposed initiative would lead to mandatory age verification. It’s wrong to force someone to show ID before they go online to search for information. It eliminates the right to speak or to find information anonymously, for both minors and adults.

This initiative, with its vague language, is arguably worse than the misnamed Kids Online Safety Act, a federal censorship bill that we are opposing. We hope the sponsors of this initiative choose not to move forward with this wrongheaded and unconstitutional proposal. If they do, we are prepared to oppose it.

You can read EFF’s full letter to A.G. Bonta here.

Joe Mullin

As India Prepares for Elections, Government Silences Critics on X with Executive Order

5 days 5 hours ago

It is troubling to see that the Indian government has issued new demands to X (formerly Twitter) to remove accounts and posts critical of the government and its recent actions. This especially bears watching as India prepares for general elections this spring and concerns grow over the government’s manipulation of social media that is critical of it.

On Wednesday, X’s Global Government Affairs account (@GlobalAffairs) tweeted:

The Indian government has issued executive orders requiring X to act on specific accounts and posts, subject to potential penalties including significant fines and imprisonment. 

In compliance with the orders, we will withhold these accounts and posts in India alone; however, we disagree with these actions and maintain that freedom of expression should extend to these posts.

Consistent with our position, a writ appeal challenging the Indian government's blocking orders remains pending. We have also provided the impacted users with notice of these actions in accordance with our policies.

Due to legal restrictions, we are unable to publish the executive orders, but we believe that making them public is essential for transparency. This lack of disclosure can lead to a lack of accountability and arbitrary decision-making.

India’s general elections are set to take place in April or May and will elect 543 members of the Lok Sabha, the lower house of the country’s parliament. Since February, farm unions in the country have been striking for floor pricing (also known as a minimum support price) for their crops. While protesters have attempted to march to Delhi from neighboring states, authorities have reportedly barricaded city borders, and two neighboring states ruled by the governing Bharatiya Janata Party (BJP) have deployed troops in order to stop the farmers from reaching the capital.

According to reports, the accounts locally withheld by X in response to the Indian government’s orders are critical of the BJP, while some accounts that were supporting or merely covering the farmers’ protests have also been withheld. Several account holders have identified themselves as being among those notified by X, while other users have identified many other affected accounts.

This isn’t the first time that the Indian government has gone after X users. In 2021, when the company—then called Twitter—was under different leadership, it suspended 500 accounts, then reversed its decision, citing freedom of speech, and later re-suspended the accounts, citing compliance with India’s Information Technology Act. And in 2023, the company withheld 120 accounts critical of the BJP and Prime Minister Narendra Modi.

This is exactly the type of censorship we feared when EFF previously criticized the ITA’s rules, enacted in 2021, that force online intermediaries to comply with strict removal time frames under government orders. The rules require online intermediaries like X to remove restricted posts within 36 hours of receiving notice. X can challenge the order—as they have indicated they intend to—but the posts will remain down until that challenge is fully adjudicated.

EFF is also currently fighting back against efforts related to an Indian court order that required Reuters news service to de-publish one of its articles while a legal challenge to it is considered by the courts. This type of interim censorship is unauthorized in most legal systems. Those involved in the case have falsely represented to others who wrote about the Reuters story that the order applied to them as well.

Jillian C. York

Is the Justice Department Even Following Its Own Policy in Cybercrime Prosecution of a Journalist?

5 days 16 hours ago

Following an FBI raid of his home last year, the freelance journalist Tim Burke has been arrested and indicted in connection with an investigation into leaks of unaired footage from Fox News. The raid raised questions about whether Burke was being investigated for First Amendment-protected journalistic activities, and EFF joined a letter calling on the Justice Department to explain whether and how it believed Burke had actually engaged in wrongdoing. Although the government has now charged Burke, these questions remain, including whether the prosecution is consistent with the DOJ’s much-vaunted policy for charging criminal violations of the Computer Fraud and Abuse Act (CFAA).

The indictment centers on actions by Burke and an alleged co-conspirator to access two servers belonging to a sports network and a television livestreaming service respectively. In both cases, Burke is alleged to have used login credentials that he was not authorized to use, making the access “without authorization” under the CFAA. In the case of the livestream server, he is also alleged to have downloaded a list of unique, but publicly available URLs corresponding to individual news networks’ camera feeds and copied content from the streams, in further violation of the CFAA and the Wiretap Act. However, in a filing last year seeking the return of devices seized by the FBI, Burke’s lawyers argued that the credentials he used to access the livestream server were part of a “demo” publicly posted by the owner of the service, and therefore his use was not “unauthorized.”

Unfortunately, concepts of authorization and unauthorized access in the CFAA are exceedingly murky. EFF has fought for years—with some success—to bring the CFAA in line with common sense notions of what an anti-hacking law should prohibit: actually breaking into private computers. But the law remains vague, too often allowing prosecutors and private parties to claim that individuals knew or should have known what they were doing was unauthorized, even when no technical barrier prevented them from accessing a server or website.

The law’s vagueness is so apparent that in the wake of Van Buren v. United States, a landmark Supreme Court ruling overturning a CFAA prosecution, even the Justice Department committed to limiting its discretion in prosecuting computer crimes. EFF felt that these guidelines could have gone further, but we held out hope that they would do some work in protecting people from overbroad use of the CFAA.

Mr. Burke’s prosecution shows the DOJ needs to do more to show that its charging policy prevents CFAA misuse. Under the guidelines, the department has committed to bringing CFAA charges only in specific instances that meet all of the following criteria:

  • the defendant’s access was not authorized “under any circumstances”
  • the defendant knew of the facts that made the access unauthorized
  • the prosecution serves “goals for CFAA enforcement”

If Mr. Burke merely used publicly available demo credentials to access a list of public livestreams that were themselves accessible without a username or password, the DOJ would be hard-pressed to show that the access was unauthorized under any circumstances and that he actually knew it was.

This is only one of the concerning aspects of the Burke indictment. In recent years, there have been several high-profile incidents involving journalists accused of committing computer crimes in the course of their reporting on publicly available material. As EFF argued in an amicus brief in one of these cases, vague and overbroad applications of computer crime laws threaten to chill a wide range of First Amendment protected activities, including reporting on matters of public interest. We’d like to see these laws—state and federal—be narrowed to better reflect how people use the Internet and to remove the ability of prosecutors to bring charges where the underlying conduct is nothing more than reporting on publicly available material.

Related Cases: Van Buren v. United States
Andrew Crocker

NSA Spying Shirts Are Back Just In Time to Tell Congress to Reform Section 702

5 days 22 hours ago

We’ve been challenging the National Security Agency's mass surveillance of ordinary people since we first became aware of it nearly twenty years ago. Since then, tens of thousands of supporters have joined the call to fight what became Section 702 of the FISA Amendments Act, a law that was supposed to enable overseas surveillance of specific targets but has become a backdoor way of mass spying on the communications of people in the U.S. Now, Section 702 is up for its first major renewal since 2018, and we need to pull out all the stops to make sure it is not renewed without massive reforms and increased transparency and oversight.

Section 702 is up for renewal, so we decided our shirts should reflect the ongoing fight. For the first time in a decade, our popular NSA Spying shirts are back, with an updated EFF logo and design. The image of the NSA's glowering, red-eyed eagle using his talons to tap into your data depicts the collaboration of telecommunication companies with the NSA - a reference to our Hepting v. AT&T and Jewel v. NSA warrantless wiretapping cases. Every purchase helps EFF’s lawyers and activists stop the spying and unplug big brother.

Get your shirt in our shop today

Wear this t-shirt to proudly let everyone know that it’s time to rein in mass surveillance. And if you haven’t yet, let your representatives know today to Stop the Spying.

EFF is a member-supported nonprofit and we value your contributions deeply. Financial support from people like you has allowed EFF to educate the public, reach out to lawmakers, organize grassroots action, and challenge threats to digital freedom at every turn.  Join the cause now to fight government secrecy and end illegal surveillance!

EFF is a U.S. 501(c)(3) organization and donations are tax deductible to the full extent provided by law.

Jason Kelley

Unregulated, Exploitative, and on the Rise: Vera Institute's Report on Electronic Monitoring

6 days 20 hours ago

Incarceration rates in the United States have long been among the highest in the world, and in response to the systemic flaws and biases unveiled by the renewed scrutiny of the criminal legal system, many advocates have championed new policies aimed at reducing sentences and improving conditions in prisons. Some have touted the use of electronic monitoring (EM) as an alternative fix to ensure that people whose cases have yet to be adjudicated are not physically detained. Unsurprisingly, those most often making these claims are the for-profit firms offering EM technology and the governmental agencies they contract with, and there is little data to back them up. In a new report, the Vera Institute of Justice provides the most detailed data yet showing that these claims don’t match reality, and outlines a number of issues with how EM is administered across the country.

Another Private Sector Wild West

According to interviews and an analysis of policies across hundreds of jurisdictions, the Vera Institute found that the use of EM was an unregulated patchwork across counties, states, and the federal government. As private firms market new products, the level of testing and quality assurance has failed to keep up with the drive to get contracts with local and state law enforcement agencies. Relying on technology produced by such a disordered industry can lead to reincarceration due to faulty equipment, significantly increased surveillance of those being monitored and their households, and more onerous requirements for people under EM than they would face when dealing with probation or parole officers.

The lack of correlation between EM and decarceration and the advancement in EM technology suggests that EM, rather than serving as an alternative to detention, is merely another tool in the government's arsenal of carceral control. 

Even the question of jurisdictional authority is a mess. The Vera Institute explains that agencies frequently rely on private firms that further subcontract out the hardware or software, and individuals in rural areas can create profitable businesses for themselves that serve only as middlemen between the criminal justice system and the hardware and software vendors. The Vera Institute suggests that this can lead to corruption, including extortion of people held on EM by these small subcontractors, often with no oversight or public sector transparency. That presents a problem for the data collection, public records requests, and other investigative work that policymakers, advocates, and journalists rely on to find the truth and inform policy.

Further, the costs of EM are frequently passed on to the people forced to use it, sometimes regardless of whether they have the means to pay, whether the EM is an obstacle to their employment, or whether they are under monitoring pre-trial (where the presumption of innocence should apply) or post-sentencing (after a guilty verdict). And these costs don’t necessarily buy them greater “liberty,” as many forms of hardware or app-based software increase around-the-clock surveillance at the hands of private firms, once again with little to no oversight or ability to access data through public records requests.

ICE doubles down on electronic monitoring

According to the Vera Institute’s estimates, from 2017 onwards the single largest user of EM in the United States has been Immigration and Customs Enforcement (ICE) as part of its Alternative To Detention (ATD) programs. And in the last few years, that usage has skyrocketed: Vera’s report states that between 2021 and 2022, the number of adults under ICE's EM program more than tripled, from 103,900 to 360,000.

For those currently under ICE’s EM surveillance, their experience is primarily dictated by a single company: BI Incorporated, from which ICE has purchased all its EM infrastructure since 2004. While BI’s offerings have recently shifted away from the GPS-enabled ankle monitors known to shock and cut their users, towards smartphone apps and smartwatches, a 2022 investigation from The Guardian revealed that monitored people experience a lack of technical support from BI, frequent bugs that can prevent them from complying with mandatory check-ins, and few protocols for how their issues are handled.

On top of all of these issues, a 2022 joint investigation led by Just Futures Law claims that ICE and BI’s policies for collecting and retaining people’s sensitive data are overbroad and self-contradictory. The uncovered documents showed vast amounts of extremely private information (including biometrics, location data, data about people’s contacts and communities, and more) were collected and potentially retained by ICE for up to 75 years. One document (p. 123) revealed that data collected by ATD programs can be used for mass arrests, as in the case of a Manassas, Virginia office sharing geolocation data with ICE to arrest 40 people.

[...] despite ICE’s use of EM being dubbed an “alternative to detention” (ATD), the rise of ATD program budgets has not coincided with a decrease in detention. Meanwhile, the programs have historically been used on “individuals who have been released from detention or who were never detained in the first place,” meaning they affect those who would otherwise be free from physical detention.

Given that the average individual will spend 558.5 days in an ATD program, this gives ICE access to a dizzying amount of highly sensitive data for decades to come; data which can be (and has been) used to arrest and deport people.

No trend of correlation between electronic monitoring and decrease in physical detention

The Vera Institute found no general trend across jurisdictions that usage of EM led to a decrease in the physically incarcerated population. While the Vera Institute noted a tenfold increase in the number of individuals subjected to EM between 2005 and 2022, the physically incarcerated population decreased by only about 15%. Moreover, the decline in the incarcerated population is in large part due to COVID-19 directives, and it’s unclear whether the downward trend will continue absent those restrictions.

Similarly, despite ICE’s use of EM being dubbed an “alternative to detention” (ATD), the rise of ATD program budgets has not coincided with a decrease in detention. Meanwhile, the programs have historically been used on “individuals who have been released from detention or who were never detained in the first place,” meaning they affect those who would otherwise be free from physical detention.

Electronic monitoring is an all-encompassing form of surveillance for the person being monitored. It tracks every movement they make, records some of the most private data from their daily life, and effectively serves as a “form of incarceration that happens outside of prison walls.”

Notably, EM technology has become more invasive and extensive. Traditional EM technology consisted of wearable devices equipped with Global Positioning System (GPS), radio frequency (RF), or Secure Continuous Remote Alcohol Monitoring (SCRAM) capabilities. However, newer technologies used by ICE and the criminal justice system may additionally employ facial recognition technology, voice recognition technology, and the gathering of real-time location tracking and various other biometrics via independent devices or mobile phone applications.

The lack of correlation between EM and decarceration and the advancement in EM technology suggests that EM, rather than serving as an alternative to detention, is merely another tool in the government's arsenal of carceral control. 

Decreasing carceral control

And yet, it is possible to decrease the population subject to physical incarceration as well as the population on EM. In response to the social distancing requirements at the beginning of the COVID-19 pandemic, Salt Lake City released hundreds of people, decreasing the number of people in the Salt Lake County jail by 45%. Because the Sheriff’s Prison Labor Detail program, which administers EM for those in jail on low-level and nonviolent offenses, draws its participants from those still in Salt Lake City jails, the drop in jail population similarly reduced EM eligibility.

This simultaneous reduction in both the physically incarcerated population and those subject to EM contrasted with other jurisdictions, such as the Federal Bureau of Prisons, which saw a sharp spike in the number of individuals subjected to EM in the wake of COVID-19.

Portland, Oregon was another location in which the jail population and EM population fell concurrently. In the wake of the killings of George Floyd and Breonna Taylor, the Multnomah County Department of Community Justice found that EM had a disproportionate impact on communities of color. This led Portland officials to express a desire to pause the resumption of pre-pandemic levels of EM, which they recognized perpetuates the same obstacles to freedom and injustice as our carceral system and “generally has few rehabilitative benefits.”

A worrying trend gets worse

Electronic monitoring is an all-encompassing form of surveillance for the person being monitored. It tracks every movement they make, records some of the most private data from their daily life, and effectively serves as a “form of incarceration that happens outside of prison walls.” And like other types of prison tech in the United States, it’s largely unregulated, disproportionately targeted at Black and Brown people and immigrant communities, and exploitative of the people it claims to serve. It also fails to address many of the problems its advocates and marketers claim it solves. Despite being touted as an alternative to incarceration, EM frequently targets people who would otherwise not be detained. Despite being sold as a cost-saving measure, its price is often paid by those forced to use it.

Electronic monitoring generally requires some forms of data collection, and usually this involves some of the most sensitive data we produce: biometric, location, and personally identifying information. Some EM apps go beyond collecting what’s absolutely necessary from a user’s phone, and many include language in their privacy policies that allows for sharing data for marketing purposes, as well as with law enforcement without a warrant. This amount of data collection and sharing is appalling even when a user can fully consent to an app’s terms, much less when someone is coerced by the state to comply with them. ICE’s data collection and retention policies are particularly odious, and the 75-year retention policy for EM data should be revised.

The recent explosion in the popularity of EM, especially within ICE’s ATD programs, continues a disturbing trend. The Vera Institute’s report helps to shine a light on this pervasive and unregulated industry, but it shouldn’t be this hard to determine how prevalent EM’s use is. People have the right to know how their criminal justice system functions, and that right extends to the private companies who profiteer from it. The report concludes by suggesting a number of policy recommendations, including national reporting requirements for EM's use, prohibition of private vendors running EM programs, and an elimination of user fees. We think these represent the minimum of what must be done: lawmakers must do much more to protect people from privacy violations and ensure that EM doesn't extend the harms of incarceration to those who would otherwise be free from physical detention.

Hannah Zhao

Defending Access to the Decentralized Web

1 week ago

Decentralized web technologies have the potential to make the internet more robust and efficient, supporting a new wave of innovation. However, the fundamental technologies and services that make it work are already being hit with overreaching legal threats.

Exhibit A: the Interplanetary File System (IPFS). IPFS operates via a “distributed hash table,” essentially a way to look up the number (or “hash”) corresponding to a given file and see which network locations have chosen to offer the file. Using the hash, a machine learns where to request the file, and then retrieves it in pieces from those locations. An IPFS gateway performs these functions on behalf of a user, who tells the gateway which hash to retrieve the file for. It’s a conduit, like a traditional proxy server, virtual private network, or ISP.
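For readers unfamiliar with how a distributed hash table works, the publish-and-retrieve flow can be sketched in a few lines of Python. This is a toy illustration, not actual IPFS code: the in-memory tables, peer names, and chunk size are all invented for the example.

```python
import hashlib

# Toy "distributed hash table": maps a content hash to the
# network locations (peers) that have chosen to offer the file.
dht = {}
peers = {}  # peer_name -> {content_hash: list_of_chunks}

def publish(peer, data, chunk_size=4):
    """A peer announces that it can serve `data`, stored in pieces."""
    digest = hashlib.sha256(data).hexdigest()
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    peers.setdefault(peer, {})[digest] = chunks
    dht.setdefault(digest, []).append(peer)
    return digest

def retrieve(digest):
    """Look up which peers offer the hash, fetch the pieces, reassemble."""
    providers = dht.get(digest, [])
    if not providers:
        raise KeyError("no provider for this hash")
    chunks = peers[providers[0]][digest]
    data = b"".join(chunks)
    # The hash doubles as an integrity check on the reassembled content.
    assert hashlib.sha256(data).hexdigest() == digest
    return data

h = publish("peer-a", b"hello, decentralized web")
print(retrieve(h))  # b'hello, decentralized web'
```

A gateway, in this picture, is simply a machine that runs the equivalent of `retrieve()` on whatever hash a user supplies. It stores no content of its own and has no knowledge of what any given hash corresponds to, which is why it functions as a pure conduit.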

Our client, computer scientist Mike Damm, offers a free IPFS gateway. He doesn’t control how people use it or what files they access. But a company called JetBrains insists that Mr. Damm could be liable under Section 1201 of the Digital Millennium Copyright Act because JetBrains’ lawyers are allegedly able to use his gateway to request and retrieve software keys for JetBrains software from the IPFS network.

We were glad to have the opportunity to set them straight.

Section 1201 is a terrible law, but it doesn’t impose liability on a general-purpose conduit for information. First, a conduit does not fall into any of the three categories of trafficking under Section 1201: its primary purpose is not circumvention, it has extensive other uses, and it is not marketed for circumvention. Second, Congress has expressly recognized the need to protect conduits from legal risk given their crucial role in supporting the basic functioning of the internet. In Section 512(a) of the DMCA, Congress singled out conduits to receive the highest level of safe harbor protection, recognizing that the ability to dispose of copyright claims at an early stage of litigation was crucial to the operation of these services. It would be absurd to suggest that Congress granted conduits special immunity for copyright claims based on third party activity but then, in the same statute, made them liable for pseudo-copyright Section 1201 claims.

The DMCA has serious flaws, but one thing Congress got right was protecting basic infrastructure providers from being liable for the way that third parties choose to use them. This is in line with longstanding legal principles whereby courts require plaintiffs to target their complaints towards the individuals choosing to misuse general-purpose services, rather than assigning blame to service providers.

Deviating from this rule could have extinguished the internet in its infancy and threatens to do the same with new information technologies. As always, EFF stands ready to defend the open web.

Kit Walsh

Don’t Fall for the Latest Changes to the Dangerous Kids Online Safety Act 

1 week 5 days ago

The authors of the dangerous Kids Online Safety Act (KOSA) unveiled an amended version this week, but it’s still an unconstitutional censorship bill that continues to empower state officials to target services and online content they do not like. We are asking everyone reading this to oppose this latest version, and to demand that their representatives oppose it—even if you have already done so. 

TAKE ACTION

TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

KOSA remains a dangerous bill that would allow the government to decide what types of information can be shared and read online by everyone. It would still require an enormous number of websites, apps, and online platforms to filter and block legal, and important, speech. It would almost certainly still result in age verification requirements. Some of its provisions have changed over time, and its latest changes are detailed below. But those improvements do not cure KOSA’s core First Amendment problems. Moreover, a close review shows that state attorneys general still have a great deal of power to target online services and speech they do not like, which we think will harm children seeking access to basic health information and a variety of other content that officials deem harmful to minors.  

We’ll dive into the details of KOSA’s latest changes, but first we want to remind everyone of the stakes. KOSA is still a censorship bill and it will still harm a large number of minors who have First Amendment rights to access lawful speech online. It will endanger young people and impede the rights of everyone who uses the platforms, services, and websites affected by the bill. Based on our previous analyses, statements by its authors and various interest groups, as well as the overall politicization of youth education and online activity, we believe the following groups—to name just a few—will be endangered:  

  • LGBTQ+ Youth will be at risk of having content, educational material, and their own online identities erased.  
  • Young people searching for sexual health and reproductive rights information will find their search results stymied. 
  • Teens and children in historically oppressed and marginalized groups will be unable to locate information about their history and shared experiences. 
  • Activist youth on either side of the aisle, such as those fighting for changes to climate laws, gun laws, or religious rights, will be siloed, and unable to advocate and connect on platforms.  
  • Young people seeking mental health help and information will be blocked from finding it, because even discussions of suicide, depression, anxiety, and eating disorders will be hidden from them. 
  • Teens hoping to combat the problem of addiction—either their own, or that of their friends, families, and neighbors, will not have the resources they need to do so.  
  • Any young person seeking truthful news or information that could be considered depressing will find it harder to educate themselves and engage in current events and honest discussion. 
  • Adults in any of these groups who are unwilling to share their identities will find themselves shunted onto a second-class internet alongside the young people who have been denied access to this information. 
What’s Changed in the Latest (2024) Version of KOSA 

In its impact, the latest version of KOSA is not meaningfully different from previous versions. The “duty of care” censorship section remains in the bill, though modified as we explain below. The latest version removes the authority of state attorneys general to sue or prosecute people for not complying with the “duty of care.” But KOSA still permits these state officials to enforce other parts of the bill based on their political whims, and we expect those officials to use this new law to the same censorious ends as they would have under previous versions. And the legal requirements of KOSA are still only possible for sites to safely follow if they restrict access to content based on age, effectively mandating age verification.

KOSA is still a censorship bill and it will still harm a large number of minors

Duty of Care is Still a Duty of Censorship 

Previously, KOSA outlined a wide collection of harms to minors that platforms had a duty to prevent and mitigate through “the design and operation” of their product. These harms included self-harm, suicide, eating disorders, substance abuse, and bullying, among others. This seemingly anodyne requirement—that apps and websites must take measures to prevent some truly awful things from happening—would have led to overbroad censorship of otherwise legal, important topics for everyone, as we’ve explained before.

The updated duty of care says that a platform shall “exercise reasonable care in the creation and implementation of any design feature” to prevent and mitigate those harms. The difference is subtle, and ultimately, unimportant. There is no case law defining what is “reasonable care” in this context. This language still means increased liability merely for hosting and distributing otherwise legal content that the government—in this case the FTC—claims is harmful.  

Design Feature Liability 

The bigger textual change is that the bill now includes a definition of a “design feature,” which the bill requires platforms to limit for minors. The “design feature” of products that could lead to liability is defined as: 

any feature or component of a covered platform that will encourage or increase the frequency, time spent, or activity of minors on the covered platform. 

Design features include but are not limited to 

(A) infinite scrolling or auto play; 

(B) rewards for time spent on the platform; 

(C) notifications; 

(D) personalized recommendation systems; 

(E) in-game purchases; or 

(F) appearance altering filters. 

These design features are a mix of basic elements and those that may be used to keep visitors on a site or platform. There are several problems with this provision. First, it’s not clear when offering basic features that many users rely on, such as notifications, by itself creates a harm. But that points to the fundamental problem of this provision. KOSA is essentially trying to use features of a service as a proxy to create liability for speech online that the bill’s authors do not like. But the list of harmful designs shows that the legislators backing KOSA want to regulate online content, not just design.   

For example, if an online service presented an endless scroll of math problems for children to complete, or rewarded children with virtual stickers and other prizes for reading digital children’s books, would lawmakers consider those design features harmful? Of course not. Infinite scroll and autoplay are generally not a concern for legislators. The concern is that these lawmakers do not like some lawful content that is accessible via an online service’s features.

What KOSA tries to do here then is to launder restrictions on content that lawmakers do not like through liability for supposedly harmful “design features.” But the First Amendment still prohibits Congress from indirectly trying to censor lawful speech it disfavors.  

We shouldn’t kid ourselves that the latest version of KOSA will stop state officials from targeting vulnerable communities.

Allowing the government to ban content designs is a dangerous idea. If the FTC decided that direct messages, or encrypted messages, were leading to harm for minors—under this language they could bring an enforcement action against a platform that allowed users to send such messages. 

Regardless of whether we like infinite scroll or auto-play on platforms, these design features are protected by the First Amendment, just like the design features we do like. If the government tried to limit an online newspaper from using an infinite scroll feature or auto-playing videos, that case would be struck down. KOSA’s latest variant is no different.

Attorneys General Can Still Use KOSA to Enact Political Agendas 

As we mentioned above, the enforcement available to attorneys general has been narrowed to no longer include the duty of care. But due to the rule of construction and the fact that attorneys general can still enforce other portions of KOSA, this is cold comfort. 

For example, it is true enough that the amendments to KOSA prohibit a state from targeting an online service based on claims that, in hosting LGBTQ content, it violated KOSA’s duty of care. Yet that same official could use another provision of KOSA—which allows them to file suits based on failures in a platform’s design—to target the same content. The state attorney general could simply claim that they are not targeting the LGBTQ content, but rather the fact that the content was made available to minors via notifications, recommendations, or other features of a service.

We shouldn’t kid ourselves that the latest version of KOSA will stop state officials from targeting vulnerable communities. And KOSA leaves all of the bill’s censorial powers with the FTC, a five-person commission whose members are nominated by the President. This still allows a small group of federal officials to decide what content is dangerous for young people. Placing this enforcement power with the FTC is still a First Amendment problem: no government official, state or federal, has the power to dictate by law what people can read online.

The Long Fight Against KOSA Continues in 2024 

For two years now, EFF has laid out the clear arguments against this bill. KOSA creates liability if an online service fails to perfectly police a variety of content that the bill deems harmful to minors. Services have little room to make any mistakes if some content is later deemed harmful to minors and, as a result, are likely to restrict access to a broad spectrum of lawful speech, including information about health issues like eating disorders, drug addiction, and anxiety.  

The fight against KOSA has amassed an enormous coalition of people of all ages and all walks of life who know that censorship is not the right approach to protecting people online, and that the promise of the internet is one that must apply equally to everyone, regardless of age. Some of the people who have advocated against KOSA from day one have now graduated high school or college. But every time this bill returns, more people learn why we must stop it from becoming law.   

TAKE ACTION

TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

We cannot afford to allow the government to decide what information is available online. Please contact your representatives today to tell them to stop the Kids Online Safety Act from moving forward. 

Jason Kelley

Hip Hip Hooray For Hipster Antitrust

1 week 6 days ago

Don’t believe the hype.

The undeniable fact is that the FTC has racked up a long list of victories over corporate abuses, like busting a nationwide, decades-long fraud that tricked people into paying for “free” tax preparation.

The wheels of justice grind slowly, so many of the actions the FTC has brought are still pending. But these actions are significant. In tandem with the Department of Justice, it is suing over fake apartment listings, blocking noncompete clauses, targeting fake online reviews, and going after gig work platforms for ripping off their workers.

Companies that abuse our privacy and trust are being hit with massive fines: $520 million for Epic’s tricks to get kids to spend money online, $20 million to punish Microsoft for spying on kids who use Xboxes, and a $25 million fine against Amazon for capturing voice recordings of kids and storing kids’ location data.

The FTC is using its authority to investigate many forms of digital deception, from deceptive and fraudulent online ads to the use of cloud computing to lock in business customers to data brokers’ sale of our personal information.

And of course, the FTC is targeting anticompetitive mergers, like Nvidia’s attempted takeover of ARM - which has the immediate effect of preventing an anticompetitive merger and the long-term benefit of deterring future attempts at similar oligopolistic mergers. They’ve also targeted private equity “rollups,” which combine dozens or hundreds of smaller companies into a monopoly with pricing power over its customers and the whip hand over its workers. These kinds of rollups are all too common, and destructive of offline and online services alike.

From Right to Repair to Click to Cancel to fines for deceptive UI (“dark patterns”), the FTC has taken up many of the issues we’ve fought for over the years. So the argument that the FTC is a do-nothing agency wasting our time with grandstanding stunts is just factually wrong. As recently as December 2023, the FTC and DOJ chalked up ten major victories.

But this “win/loss ratio” accounting also misses the point. Even if the outcome isn’t guaranteed, this FTC refuses to turn a blind eye to abuses of the American public.

What’s more, the FTC collaborated with the DOJ on new merger guidelines that spell out what kinds of mergers are likely to be legal. These are the most comprehensive, future-looking guidelines in generations, and they tee up enforcement actions for this FTC and its successors for many years to come.

The FTC is also seeking to revive existing laws that have lain dormant for too long. As John Mark Newman explains, this FTC has cannily filed cases that reassert its right to investigate “competing” companies with interlocking directorates.

Newman also praises the FTC for “supercharging student interest in the field,” with law schools seeing surging interest in antitrust courses and a renaissance in law review articles about antitrust enforcement. 

The FTC is not alone in this. Its colleagues in the DOJ’s antitrust division have their own long list of victories.

But the most important victory for America’s antitrust enforcers is what doesn’t happen. Across the economy and every sector, corporate leaders are backing away from merger-driven growth and predatory pricing, deterred from violating the law by the knowledge that the generations-long period of tolerance for lawless corporate abuse is coming to a close.

Even better, America’s antitrust enforcers don’t stand alone. At long last, it seems that the whole world is reversing decades of tacit support for oligopolies and corporate bullying. 

Cory Doctorow

EFF to Court: Strike Down Age Estimation in California But Not Consumer Privacy

1 week 6 days ago

The Electronic Frontier Foundation (EFF) called on the Ninth Circuit to rule that California’s Age Appropriate Design Code (AADC) violates the First Amendment, while not casting doubt on well-written data privacy laws. EFF filed an amicus brief in the case NetChoice v. Bonta, along with the Center for Democracy & Technology.

A lower court already ruled the law is likely unconstitutional. EFF agrees, but we asked the appeals court to chart a narrower path. EFF argued the AADC’s age estimation scheme and vague terms that describe amorphous “harmful content” render the entire law unconstitutional. But the lower court also incorrectly suggested that many foundational consumer privacy principles cannot pass First Amendment scrutiny. That is a mistake that EFF asked the Ninth Circuit to fix.

In late 2022, California passed the AADC with the goal of protecting children online. It has many data privacy provisions that EFF would like to see in a comprehensive federal privacy bill, like data minimization, strong limits on the processing of geolocation data, regulation of dark patterns, and enforcement of privacy policies.

Government should provide such privacy protections to all people. The protections in the AADC, however, are only guaranteed to children. And to offer those protections to children but not adults, technology companies are strongly incentivized to “estimate the age” of their entire user base—children and adults alike. While the method is not specified, techniques could include submitting a government ID or a biometric scan of your face. In addition, technology companies are required to assess their products to determine if they are designed to expose children to undefined “harmful content” and determine what is in the undefined “best interest of children.”

In its brief, EFF argued that the AADC’s age estimation scheme raises the same problems as other age verification laws that have been almost universally struck down, often with help from EFF. The AADC burdens adults’ and children’s access to protected speech and frustrates all users’ right to speak anonymously online. In addition, EFF argued that the vague terms offer no clear standards, and thus give government officials too much discretion in deciding what conduct is forbidden, while incentivizing platforms to self-censor given uncertainty about what is allowed.

“Many internet users will be reluctant to provide personal information necessary to verify their ages, because of reasonable doubts regarding the security of the services, and the resulting threat of identity theft and fraud,” EFF wrote.

Because age estimation is essential to the AADC, the entire law should be struck down for that reason alone, without assessing the privacy provisions. EFF asked the court to take that narrow path.

If the court instead chooses to address the AADC’s privacy protections, EFF cautioned that many of the principles reflected in those provisions, when stripped of the unconstitutional censorship provisions and vague terms, could survive intermediate scrutiny. As EFF wrote:

“This Court should not follow the approach of the district court below. It narrowly focused on California’s interest in blocking minors from harmful content. But the government often has several substantial interests, as here: not just protection of information privacy, but also protection of free expression, information security, equal opportunity, and reduction of deceptive commercial speech. The privacy principles that inform AADC’s consumer data privacy provisions are narrowly tailored to these interests.”

EFF has a long history of supporting well-written privacy laws against First Amendment attacks. The AADC is not one of them. We have filed briefs supporting laws that protect video viewing history, biometric data, and other internet records. We have advocated for a federal law to protect reproductive health records. And we have written extensively on the need for a strong federal privacy law.

Mario Trujillo

Privacy Isn't Dead. Far From It.

2 weeks ago

Welcome! 

The fact that you’re reading this means that you probably care deeply about the issue of privacy, which warms our hearts. Unfortunately, even though you care about privacy, or perhaps because you care so much about it, you may feel that there's not much you (or anyone) can really do to protect it, no matter how hard you try. Perhaps you think “privacy is dead.” 

We’ve all probably felt a little bit like you do at one time or another. At its worst, this feeling might be described as despair. Maybe it hits you because a new privacy law seems to be too little, too late. Or maybe you felt a kind of vertigo after reading a news story about a data breach or a company that was vacuuming up private data willy-nilly without consent. 

People are angry because they care about privacy, not because privacy is dead.

Even if you don’t have this feeling now, at some point you may have felt—or possibly will feel—that we’re past the point of no return when it comes to protecting our private lives from digital snooping. There are so many dangers out there—invasive governments, doorbell cameras, license plate readers, greedy data brokers, mismanaged companies that haven’t installed any security updates in a decade. The list goes on.

This feeling is sometimes called “privacy nihilism.” Those of us who care the most about privacy are probably more likely to get it, because we know how tough the fight is. 

We could go on about this feeling, because sometimes we at EFF have it, too. But the important thing to get across is that this feeling is valid, but it’s also not accurate. Here’s why.

You Aren’t Fighting for Privacy Alone

For starters, remember that none of us are fighting alone. EFF is one of dozens, if not hundreds, of organizations that work to protect privacy. EFF alone has over thirty thousand dues-paying members who support that fight—not to mention hundreds of thousands of supporters subscribed to our email lists and social media feeds. Millions of people read EFF’s website each year, and tens of millions use the tools we’ve made, like Privacy Badger. Privacy is one of EFF’s biggest concerns, and as an organization we have grown by leaps and bounds over the last two decades because more and more people care.

Some people say that Americans have given up on privacy. But if you look at actual facts—not just EFF membership, but survey results and votes cast on ballot initiatives—Americans overwhelmingly support new privacy protections. In general, the country has grown more concerned about how the government uses our data, and a large majority of people say that we need more data privacy protections.


Some people also say that kids these days don’t care about their privacy, but the ones that we’ve met think about privacy a lot. What’s more, they are fighting as hard as anyone to stop privacy-invasive bills like the Kids Online Safety Act. In our experience, the next generation cares intensely about protecting privacy, and they’re likely to have even more tools to do so. 

Laws are Making Their Way Around the World

Strong privacy laws don’t cover every American—yet. But take a look at just one example to see how things are improving: the California Consumer Privacy Act of 2018 (CCPA). The CCPA isn’t perfect, but it did make a difference. The CCPA granted Californians a few basic rights when it comes to their relationship with businesses, like the right to know what information companies have about you, the right to delete that information, and the right to tell companies not to sell your information. 

This wasn’t a perfect law for a few reasons. Under the CCPA, consumers have to opt out company by company to protect their data; at EFF, we’d like to see privacy protection as the default unless consumers opt in. The CCPA also doesn’t allow individuals to sue if their data is mismanaged—only California’s Attorney General and the California Privacy Protection Agency can enforce it. And of course, the law only covers Californians.


But this imperfect law is slowly getting better. Just this year California’s legislature passed the DELETE Act, which resolves one of those issues. The California Privacy Protection Agency now must create a deletion mechanism for data brokers that allows people to make their requests to every data broker with a single, verifiable consumer request. 

Pick a privacy-related topic, and chances are good that model bills are being introduced, or already exist as laws in some places, even if they don’t exist everywhere. The Illinois Biometric Information Privacy Act, for example, passed back in 2008, protects people from nonconsensual use of their biometrics for face recognition. We may not have a comprehensive privacy law yet in the US, but other parts of the world—like Europe—have more impactful, if imperfect, laws. We can have a nationwide comprehensive consumer data privacy law too, and once it is on the books, it can be improved.

We Know We’re Playing the Long Game

Remember that it takes time to change the system. Today we take many protections for granted, and often assume that things are only getting worse, not better. But many important rights are relatively new. For example, our Constitution didn’t always require police to get a warrant before wiretapping our phones. It took the Supreme Court four decades to get this right. (They were wrong in 1928 in Olmstead, then right in 1967 in Katz.)

Similarly, creating privacy protections in law and in technology is not a sprint. It is a marathon. The fight is long, and we know that. Below, we’ve got examples of the progress that we’ve already made, in law and elsewhere. 

Just because we don’t have some protective laws today doesn’t mean we can’t have them tomorrow. 

Privacy Protections Have Actually Increased Over the Years

The World Wide Web is Now Encrypted

When the World Wide Web was created, most websites were unencrypted. As the now almost entirely encrypted web shows, privacy laws aren’t the only way to create privacy protections: another approach is to engineer strong privacy protections in from the start.

The web has now largely switched from non-secure HTTP to the more secure HTTPS protocol. Before this happened, most web browsing was vulnerable to eavesdropping and content hijacking; HTTPS fixes most of these problems. That’s why EFF, and many like-minded supporters, pushed for websites to adopt HTTPS by default. As of 2021, about 90% of all web page visits use HTTPS, and the switch happened in under a decade. This is a big win for encryption and security for everyone, and EFF’s Certbot and HTTPS Everywhere are tools that made it happen, by offering an easy and free way to switch an existing HTTP site to HTTPS (with a lot of help from Let’s Encrypt, started in 2013 by a group of determined researchers and technologists from EFF and the University of Michigan). Today, implementing HTTPS is the default.
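To make concrete what HTTPS adds, here is a minimal Python sketch (our illustration, not anything from EFF’s tooling) of the two checks a browser applies to every HTTPS connection and that plain HTTP lacks entirely: the server’s certificate must chain to a trusted authority, and the certificate must match the hostname you asked for.

```python
import ssl

# ssl.create_default_context() applies the same core protections a browser
# does for an https:// URL: the server must present a certificate signed by
# a trusted authority (CERT_REQUIRED), and the certificate must match the
# hostname being visited (check_hostname).
ctx = ssl.create_default_context()

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # certificate validation is on
print(ctx.check_hostname)                    # hostname checking is on

# Plain HTTP performs neither check and sends everything in cleartext,
# which is why it was vulnerable to the eavesdropping and content
# hijacking described above.
```

Wrapping a socket with this context (via `ctx.wrap_socket`) is, in miniature, what every HTTPS client does before any page content is exchanged.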

Cell Phone Location Data Now Requires a Warrant

In 2018, the Supreme Court handed down a landmark opinion in Carpenter v. United States, ruling 5-4 that the Fourth Amendment protects cell phone location information. As a result, police must now get a warrant before obtaining this data. 

But where else this ruling applies is still being worked out. Perhaps the most significant part of the ruling is its explicit recognition that individuals can maintain an expectation of privacy in information that they provide to third parties. The Court termed that a “rare” case, but it’s clear that other invasive surveillance technologies, particularly those that can track individuals through physical space, are now ripe for challenge. Expect to see much more litigation on this subject from EFF and our friends.

Americans’ Outrage At Unconstitutional Mass Surveillance Made A Difference

In 2013, government contractor Edward Snowden shared evidence confirming, among other things, that the United States government had been conducting mass surveillance on a global scale, including surveillance of its own citizens’ telephone and internet use. Ten years later, there is definitely more work to be done regarding mass surveillance. But some things are undoubtedly better: some of the National Security Agency’s most egregiously illegal programs and authorities have been shut down or forced to end. The Intelligence Community has started affirmatively releasing at least some important information, although EFF and others have still had to fight some long Freedom of Information Act (FOIA) battles.

Privacy Options Are So Much Better Today

Remember PGP and GPG? If you do, you know that generally, there are much easier ways to send end-to-end encrypted communications today than there used to be. It’s fantastic that people worked so hard to protect their privacy in the past, and it’s fantastic that they don’t have to work as hard now! (If you aren’t familiar with PGP or GPG, just trust us on this one.) 


Advice for protecting online privacy used to require epic how-to guides for complex tools; now, advice is usually just about what relatively simple tools or settings to use. People across the world have Signal and WhatsApp. The web is encrypted, and the Tor Browser lets people visit websites anonymously fairly easily. Password managers protect your passwords and your accounts; third-party cookie blockers like EFF’s Privacy Badger stop third-party tracking. There are even options now to turn off your Ad ID—the key that enables most third-party tracking on mobile devices—right on your phone. These tools and settings all push the needle forward.

We Are Winning The Privacy War, Not Losing It

Sometimes people respond to privacy dangers by comparing them to sci-fi dystopias. But be honest: most science fiction dystopias still scare the heck out of us because they are much, much more invasive of privacy than the world we live in. 

In an essay called “Stop Saying Privacy Is Dead,” Evan Selinger makes a necessary point: “As long as you have some meaningful say over when you are watched and can exert agency over how your data is processed, you will have some modicum of privacy.” 

Of course we want more than a modicum of privacy. But the point here is that many of us generally do get to make decisions about our privacy. Not all of us, of course. But we all recognize that there are different levels of privacy in different places, and that the strength of privacy protections varies depending on where we go. We have places we can go—online and off—that afford us more protections than others. And because of this, most of the people reading this still have deep private lives, and can choose, with varying amounts of effort, not to allow corporate or government surveillance into those lives.


Privacy is a process, not a single thing. We are always negotiating what levels of privacy we have. We might not always have the upper hand, but we are often able to negotiate. This is why we still see some fictional dystopias and think, “Thank God that’s not my life.” As long as we can do this, we are winning. 

“Giving Up” On Privacy May Not Mean Much to You, But It Does to Many

Shrugging at the dangers of surveillance can seem reasonable when that surveillance has little impact on our lives. But for many, fighting for privacy isn’t a choice; it is a means of survival. Privacy inequity is real: increasingly, money buys additional privacy protections. And if privacy is available for some, then it can exist for all. But we should not accept that some people will have privacy and others will not. This is why digital privacy legislation is digital rights legislation, and why EFF is opposed to data dividends and pay-for-privacy schemes.

Privacy increases for all of us when it increases for each of us. It is much easier for a repressive government to ban end-to-end encrypted messengers when only journalists and activists use them. It is easier to know who is an activist or a journalist when they are the only ones using privacy-protecting services or methods. The more people demand privacy, the safer we all are. Sacrificing others because you don’t feel the impact of surveillance is a fool’s bargain.

Time Heals Most Privacy Wounds

You may want to tell yourself: companies already know everything about me, so a privacy law a year from now won’t help. That’s incorrect, because companies are always searching for new data. Some pieces of information will never change, like our biometrics. But chances are you’ve changed in many ways over the years—whether that’s as big as a major life event or as small as a change in your taste in movies—and who you are today is not necessarily who you’ll be tomorrow.

As the source of that data, we should have more control over where it goes, and we’re slowly getting it. And that expiration date on old data means that even if some of our information is already out there, it’s never too late to shut off the faucet. So a privacy law passed next year won’t be useless just because some information about you has already leaked. It will still do real good.

What To Do When You Feel Like It’s Impossible

It can feel overwhelming to care about something that feels like it’s dying a death of a thousand cuts. But worrying about every potential threat, and trying to protect yourself from each of them, all of the time, is a recipe for failure. No one really needs to be vigilant about every threat at all times. That’s why our recommendation is to create a personalized security plan, rather than throwing your hands up or cowering in a corner. 

Once you’ve figured out what threats you should worry about, our advice is to stay involved. We are all occasionally skeptical that we can succeed, but taking action is a great way to get rid of that gnawing feeling that there’s nothing to be done. EFF regularly launches new projects that we hope will help you fight privacy nihilism. We’re in court many times a year fighting privacy violations. We create ways for like-minded, privacy-focused people to work together in their local advocacy groups, through the Electronic Frontier Alliance, our grassroots network of community and campus organizations fighting for digital rights. We even help you teach others to protect their own privacy. And of course every day is a good day for you to join us in telling government officials and companies that privacy matters. 

We know we can win because we’re creating the better future that we want to see every day, and it’s working. But we’re also building the plane while we’re flying it. Just as the death of privacy is not inevitable, neither is our success. It takes real work, and we hope you’ll help us do that work by joining us. Take action. Tell a friend. Download Privacy Badger. Become an EFF member. Gift an EFF membership to someone else.

Don’t give in to privacy nihilism. Instead, share and celebrate the ways we’re winning. 

Jason Kelley

Voting Against the Surveillance State | EFFector 36.2

2 weeks 1 day ago

EFF is here to keep you up-to-date with the latest news about your digital rights! EFFector 36.2 is out now and covers a ton of the latest news, including: a victory, as Amazon's Ring will no longer facilitate warrantless footage requests from police; an analysis on Apple's announcement to support RCS on iPhones; and a call for San Francisco voters to vote no on Proposition E on the March 5, 2024 ballot.

You can read the full newsletter here, or subscribe to get the next issue in your inbox automatically! You can also listen to the audio version of the newsletter on the Internet Archive, or by clicking the button below:

LISTEN ON YouTube


Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero

EFF Helps News Organizations Push Back Against Legal Bullying from Cyber Mercenary Group

2 weeks 5 days ago

Cyber mercenaries present a grave threat to human rights and freedom of expression. They have been implicated in surveillance, torture, and even murder of human rights defenders, political candidates, and journalists. One of the most effective ways that the human rights community pushes back against the threat of targeted surveillance and cyber mercenaries is to investigate and expose these companies and their owners and customers. 

But over the last several months, a campaign of bullying and censorship has emerged, seeking to wipe out stories about the mercenary hacking campaigns of a less well-known company, Appin Technology, in general, and of the company’s cofounder, Rajat Khare, in particular. These efforts follow a familiar pattern: obtain a court order in a friendly international jurisdiction and then misrepresent the force and substance of that order to bully publishers around the world into removing their stories.

We are helping to push back on that effort, which seeks to transform a very limited and preliminary Indian court ruling into a global takedown order. We are representing Techdirt and MuckRock Foundation, two of the news entities asked to remove Appin-related content from their sites. On their behalf, we challenged the assertions that the Indian court either found the Reuters reporting to be inaccurate or that the order requires any entities other than Reuters and Google to do anything. We requested a response – so far, we have received nothing.

Background

If you worked in cybersecurity in the early 2010s, chances are that you remember Appin Technology, an Indian company offering information security education and training, with a sideline (at least according to many technical reports) in hacking-for-hire.

On November 16th, 2023, Reuters published an extensively-researched story titled “How an Indian Startup Hacked the World” about Appin Technology and its cofounder Rajat Khare. The story detailed hacking operations carried out by Appin against private and government targets all over the world while Khare was still involved with the company. The story was well-sourced, based on over 70 original documents and interviews with primary sources from inside Appin. But within just days of publication, the story—and many others covering the issue—disappeared from most of the web.

On December 4th, an Indian court preliminarily ordered Reuters to take down its story about Appin Technology and Khare while a case filed against them remains pending in the court. Reuters complied with the order and took the story offline. Since then, dozens of other journalists have written about the original story and about the takedown that followed.

At the time of this writing, more than 20 of those stories have been taken down by their respective publications, many at the request of an entity called “Association of Appin Training Centers (AOATC).” Khare’s lawyers have also sent letters to news sites in multiple countries demanding they remove his name from investigative reports. Khare’s lawyers also succeeded in getting Swiss courts to issue an injunction against reporting from Swiss public television, forcing them to remove his name from a story about Qatar hiring hackers to spy on FIFA officials in preparation for the World Cup. Original stories, cybersecurity reports naming Appin, stories about the Reuters story, and even stories about the takedown have all been taken down. Even the archived version of the Reuters story was taken down from archive.org in response to letters sent by the Association of Appin Training Centers.

One of the letters sent by AOATC to Ron Deibert, the founder and director of Citizen Lab, reads:

Ron Deibert had the following response:

Not everyone has been as confident as Ron Deibert. Some of the stories that were taken down have been replaced with a note explaining the takedown, while others were redacted into illegibility, such as the story from Lawfare:

It is not clear who is behind the Association of Appin Training Centers, but according to documents surfaced by Reuters, the organization didn’t exist until after the lawsuit was filed against Reuters in Indian court. Khare’s lawyers have denied any connection between Khare and the training center organization. Even if this is true, it is clear that the goals of both parties are aligned in silencing any negative press covering Appin or Rajat Khare.

Regardless of who is behind the Association of Appin Training Centers, the links between Khare and Appin Technology are extensive and clear. Khare continues to claim that he left Appin in 2013, before any hacking-for-hire took place. However, Indian corporate records demonstrate that he stayed involved with Appin long after that time. 

Khare has also been the subject of multiple criminal investigations. Reuters published a sworn 2016 affidavit by Israeli private investigator Aviram Halevi in which he admits hiring Appin to steal emails from a Korean businessman. It also published a 2012 Dominican prosecutor’s filing which described Khare as part of an alleged hacker’s “international criminal network.” A publicly available criminal complaint filed with India’s Central Bureau of Investigation shows that Khare is accused, with others, of embezzling nearly $100 million from an Indian education technology company. A Times of India story from 2013 notes that Appin was investigated by an unnamed Indian intelligence agency over alleged “wrongdoings.”

Response to AOATC

EFF is helping two news organizations stand up to the Association of Appin Training Centers’ bullying: Techdirt and MuckRock Foundation.

Techdirt received a request similar to the one Ron Deibert received after it published an article about the Reuters takedown, but it then also received the following emails:

Dear Sir/Madam,

I am writing to you on behalf of Association of Appin Training Centers in regards to the removal of a defamatory article running on https://www.techdirt.com/ that refers to Reuters story, titled: “How An Indian Startup Hacked The World” published on 16th November 2023.

As you must be aware, Reuters has withdrawn the story, respecting the order of a Delhi court. The article made allegations without providing substantive evidence and was based solely on interviews conducted with several people.

In light of the same, we request you to kindly remove the story as it is damaging to us.

Please find the URL mentioned below.

https://www.techdirt.com/2023/12/07/indian-court-orders-reuters-to-take-down-investigative-report-regarding-a-hack-for-hire-company/

Thanks & Regards

Association of Appin Training Centers

And received the following email twice, roughly two weeks apart:

Hi Sir/Madam

This mail is regarding an article published on your website,

URL : https://www.techdirt.com/2023/12/07/indian-court-orders-reuters-to-take-down-investigative-report-regarding-a-hack-for-hire-company/

dated on 7th Dec. 23 .

As you have stated in your article, the Reuters story was declared defamatory by the Indian Court which was subsequently removed from their website.

However, It is pertinent to mention here that you extracted a portion of your article from the same defamatory article which itself is a violation of an Indian Court Order, thereby making you also liable under Contempt of Courts Act, 1971.

You are advised to remove this article from your website with immediate effect.

 

Thanks & Regards

Association of Appin Training Centers

We responded to AOATC on behalf of Techdirt and MuckRock Foundation to the “requests for assistance” which were sent to them, challenging AOATC’s assertions about the substance and effect of the Indian court interim order. We pointed out that the Indian court order is only interim and not a final judgment that Reuters’ reporting was false, and that it only requires Reuters and Google to do anything. Furthermore, we explained that even if the court order applied to MuckRock and Techdirt, the order is inconsistent with the First Amendment and would be unenforceable in US courts pursuant to the SPEECH Act:

To the Association of Appin Training Centers:

We represent and write on behalf of Techdirt and MuckRock Foundation (which runs the DocumentCloud hosting services), each of which received correspondence from you making certain assertions about the legal significance of an interim court order in the matter of Vinay Pandey v. Raphael Satter & Ors. Please direct any future correspondence about this matter to me.

We are concerned with two issues you raise in your correspondence.

First, you refer to the Reuters article as containing defamatory materials as determined by the court. However, the court’s order by its very terms is an interim order, that indicates that the defendants’ evidence has not yet been considered, and that a final determination of the defamatory character of the article has not been made. The order itself states “this is only a prima-facie opinion and the defendants shall have sufficient opportunity to express their views through reply, contest in the main suit etc. and the final decision shall be taken subsequently.”

Second, you assert that reporting by others of the disputed statements made in the Reuters article “itself is a violation of an Indian Court Order, thereby making you also liable under Contempt of Courts Act, 1971.” But, again by its plain terms, the court’s interim order applies only to Reuters and to Google. The order does not require any other person or entity to depublish their articles or other pertinent materials. And the order does not address its effect on those outside the jurisdiction of Indian courts. The order is in no way the global takedown order your correspondence represents it to be. Moreover, both Techdirt and MuckRock Foundation are U.S. entities. Thus, even if the court’s order could apply beyond the parties named within it, it will be unenforceable in U.S. courts to the extent it and Indian defamation law is inconsistent with the First Amendment to the U.S. Constitution and 47 U.S.C. § 230, pursuant to the SPEECH Act, 28 U.S.C. § 4102. Since the First Amendment would not permit an interim depublication order in a defamation case, the Pandey order is unenforceable.

If you disagree, please provide us with legal authority so we can assess those arguments. Unless we hear from you otherwise, we will assume that you concede that the order binds only Reuters and Google and that you will cease asserting otherwise to our clients or to anyone else.

We have not yet received any response from AOATC. We hope that others who have received takedown requests and demands from AOATC will examine their assertions with a critical eye.  

If a relatively obscure company like AOATC or an oligarch like Rajat Khare can succeed in keeping their name out of the public discourse with strategic lawsuits, it sets a dangerous precedent for other larger, better-resourced, and more well-known companies such as Dark Matter or NSO Group to do the same. This would be a disaster for civil society, a disaster for security research, and a disaster for freedom of expression.

Cooper Quintin

Protect Good Faith Security Research Globally in Proposed UN Cybercrime Treaty

3 weeks ago

Statement submitted to the UN Ad Hoc Committee Secretariat by the Electronic Frontier Foundation, accredited under operative paragraph No. 9 of UN General Assembly Resolution 75/282, on behalf of 124 signatories.

We, the undersigned, representing a broad spectrum of the global security research community, write to express our serious concerns about the UN Cybercrime Treaty drafts released during the sixth session and the most recent one. These drafts pose substantial risks to global cybersecurity and significantly impact the rights and activities of good faith cybersecurity researchers.

Our community, which includes good faith security researchers in academia and cybersecurity companies, as well as those working independently, plays a critical role in safeguarding information technology systems. We identify vulnerabilities that, if left unchecked, can spread malware, cause data breaches, and give criminals access to sensitive information of millions of people. We rely on the freedom to openly discuss, analyze, and test these systems, free of legal threats.

The nature of our work is to research, discover, and report vulnerabilities in networks, operating systems, devices, firmware, and software. However, several provisions in the draft treaty risk hindering our work by categorizing much of it as criminal activity. If adopted in its current form, the proposed treaty would increase the risk that good faith security researchers could face prosecution, even when our goal is to enhance technological safety and educate the public on cybersecurity matters. It is critical that legal frameworks support our efforts to find and disclose technological weaknesses to make everyone more secure, rather than penalize us, and chill the very research and disclosure needed to keep us safe. This support is essential to improving the security and safety of technology for everyone across the world.

Equally important is our ability to differentiate our legitimate security research activities from malicious exploitation of security flaws. Current laws focusing on “unauthorized access” can be misapplied to good faith security researchers, leading to unnecessary legal challenges. In addressing this, we must consider two potential obstacles to our vital work. Broad, undefined rules for prior authorization risk deterring good faith security researchers, as they may not understand when or under what circumstances they need permission. This lack of clarity could ultimately weaken everyone's online safety and security. Moreover, our work often involves uncovering unknown vulnerabilities. These are security weaknesses that no one, including the system's owners, knows about until we discover them. We cannot be certain what vulnerabilities we might find. Therefore, requiring us to obtain prior authorization for each potential discovery is impractical and overlooks the essence of our work.

The unique strength of the security research community lies in its global focus, which prioritizes safeguarding infrastructure and protecting users worldwide, often putting aside geopolitical interests. Our work, particularly the open publication of research, minimizes and prevents harm that could impact people globally, transcending particular jurisdictions. The proposed treaty’s failure to exempt good faith security research from the expansive scope of its cybercrime prohibitions and to make the safeguards and limitations in Articles 6-10 mandatory leaves the door wide open for states to suppress or control the flow of security related information. This would undermine the universal benefit of openly shared cybersecurity knowledge, and ultimately the safety and security of the digital environment.

We urge states to recognize the vital role the security research community plays in defending our digital ecosystem against cybercriminals, and call on delegations to ensure that the treaty supports, rather than hinders, our efforts to enhance global cybersecurity and prevent cybercrime. Specifically:

Article 6 (Illegal Access): This article risks criminalizing essential activities in security research, particularly where researchers access systems without prior authorization, to identify vulnerabilities. A clearer distinction is needed between malicious unauthorized access “without right” and “good faith” security research activities; safeguards for legitimate activities should be mandatory. A malicious intent requirement—including an intent to cause damage, defraud, or harm—is needed to avoid criminal liability for accidental or unintended access to a computer system, as well as for good faith security testing.

Article 6 should not use the ambiguous term “without right” as a basis for establishing criminal liability for unauthorized access. Apart from potentially criminalizing security research, similar provisions have also been misconstrued to attach criminal liability to minor violations committed deliberately or accidentally by authorized users. For example, violation of private terms of service (TOS)–a minor infraction ordinarily considered a civil issue–could be elevated into a criminal offense category via this treaty on a global scale.

Additionally, the treaty currently gives states the option to define unauthorized access in national law as the bypassing of security measures. This should not be optional, but rather a mandatory safeguard, to avoid criminalizing routine behavior such as changing one’s IP address, inspecting website code, and accessing unpublished URLs. Furthermore, it is crucial to specify that the bypassed security measures must be actually “effective.” This distinction is important because it ensures that criminalization is precise and scoped to activities that cause harm. For instance, bypassing basic measures like geoblocking–which can be done innocently simply by changing location–should not be treated the same as overcoming robust security barriers with the intention to cause harm.

By adopting this safeguard and ensuring that security measures are indeed effective, the proposed treaty would shield researchers from arbitrary criminal sanctions for good faith security research.

These changes would clarify unauthorized access, more clearly differentiating malicious hacking from legitimate cybersecurity practices like security research and vulnerability testing. Adopting these amendments would enhance protection for cybersecurity efforts and more effectively address concerns about harmful or fraudulent unauthorized intrusions.

Article 7 (Illegal Interception): Analysis of network traffic is also a common practice in cybersecurity; this article currently risks criminalizing such analysis and should similarly be narrowed to require criminal intent (mens rea) to harm or defraud.

Article 8 (Interference with Data) and Article 9 (Interference with Computer Systems): These articles may inadvertently criminalize acts of security research, which often involve testing the robustness of systems by simulating attacks through interferences. As with prior articles, criminal intent to cause harm or defraud is not mandated, and a requirement that the activity cause serious harm is absent from Article 9 and optional in Article 8. These safeguards should be mandatory.

Article 10 (Misuse of Devices): The broad scope of this article could criminalize the legitimate use of tools employed in cybersecurity research, thereby affecting the development and use of these tools. Under the current draft, Article 10(2) specifically addresses the misuse of cybersecurity tools. It criminalizes obtaining, producing, or distributing these tools only if they are intended for committing cybercrimes as defined in Articles 6 to 9 (which cover illegal access, interception, data interference, and system interference). However, this also raises a concern. If Articles 6 to 9 do not explicitly protect activities like security testing, Article 10(2) may inadvertently criminalize security researchers. These researchers often use similar tools for legitimate purposes, like testing and enhancing systems security. Without narrow scope and clear safeguards in Articles 6-9, these well-intentioned activities could fall under legal scrutiny, despite not being aligned with the criminal malicious intent (mens rea) targeted by Article 10(2).

Article 22 (Jurisdiction): In combination with other provisions about measures that may be inappropriately used to punish or deter good-faith security researchers, the overly broad jurisdictional scope outlined in Article 22 also raises significant concerns. Under the article's provisions, security researchers discovering or disclosing vulnerabilities to keep the digital ecosystem secure could be subject to criminal prosecution simultaneously across multiple jurisdictions. This would have a chilling effect on essential security research globally and hinder researchers' ability to contribute to global cybersecurity. To mitigate this, we suggest revising Article 22(5) to prioritize “determining the most appropriate jurisdiction for prosecution” rather than “coordinating actions.” This shift could prevent the redundant prosecution of security researchers. Additionally, deleting Article 17 and limiting the scope of procedural and international cooperation measures to crimes defined in Articles 6 to 16 would further clarify and protect against overreach.

Article 28(4): This article is gravely concerning from a cybersecurity perspective. It empowers authorities to compel “any individual” with knowledge of computer systems to provide any “necessary information” for conducting searches and seizures of computer systems. This provision can be abused to force security experts, software engineers and/or tech employees to expose sensitive or proprietary information. It could also encourage authorities to bypass normal channels within companies and coerce individual employees, under the threat of criminal prosecution, to provide assistance in subverting technical access controls such as credentials, encryption, and just-in-time approvals without their employers’ knowledge. This dangerous paragraph must be removed in favor of the general duty for custodians of information to comply with lawful orders to the extent of their ability.

Security researchers—whether within organizations or independent—discover, report, and assist in fixing tens of thousands of critical Common Vulnerabilities and Exposures (CVE) entries reported over the lifetime of the National Vulnerability Database. Our work is a crucial part of the security landscape, yet it often faces serious legal risk from overbroad cybercrime legislation.

While the proposed UN Cybercrime Treaty’s core cybercrime provisions closely mirror the Council of Europe’s Budapest Convention, the impact of cybercrime regimes on security research has evolved considerably in the two decades since that treaty was adopted in 2001. In that time, good faith cybersecurity researchers have faced significant repercussions for responsibly identifying security flaws. Concurrently, a number of countries have enacted legislative or other measures to protect the critical line of defense this type of research provides. The UN Treaty should learn from these past experiences by explicitly exempting good faith cybersecurity research from the scope of the treaty. It should also make existing safeguards and limitations mandatory. This change is essential to protect the crucial work of good faith security researchers and ensure the treaty remains effective against current and future cybersecurity challenges.

Since these negotiations began, we had hoped that governments would adopt a treaty that strengthens global computer security and enhances our ability to combat cybercrime. Unfortunately, the draft text, as written, would have the opposite effect. The current text would weaken cybersecurity and make it easier for malicious actors to create or exploit weaknesses in the digital ecosystem by subjecting us to criminal prosecution for good faith work that keeps us all safer. Such an outcome would undermine the very purpose of the treaty: to protect individuals and our institutions from cybercrime.

To be submitted by the Electronic Frontier Foundation, accredited under operative paragraph No. 9 of UN General Assembly Resolution 75/282 on behalf of 124 signatories.

Individual Signatories
Jobert Abma, Co-Founder, HackerOne (United States)
Martin Albrecht, Chair of Cryptography, King's College London (Global)
Nicholas Allegra (United States)
Ross Anderson, Universities of Edinburgh and Cambridge (United Kingdom)
Diego F. Aranha, Associate Professor, Aarhus University (Denmark)
Kevin Beaumont, Security researcher (Global)
Steven Becker (Global)
Janik Besendorf, Security Researcher (Global)
Wietse Boonstra (Global)
Juan Brodersen, Cybersecurity Reporter, Clarin (Argentina)
Sven Bugiel, Faculty, CISPA Helmholtz Center for Information Security (Germany)
Jon Callas, Founder and Distinguished Engineer, Zatik Security (Global)
Lorenzo Cavallaro, Professor of Computer Science, University College London (Global)
Joel Cardella, Cybersecurity Researcher (Global)
Inti De Ceukelaire (Belgium)
Enrique Chaparro, Information Security Researcher (Global)
David Choffnes, Associate Professor and Executive Director of the Cybersecurity and Privacy Institute at Northeastern University (United States/Global)
Gabriella Coleman, Full Professor Harvard University (United States/Europe)
Cas Cremers, Professor and Faculty, CISPA Helmholtz Center for Information Security (Global)
Daniel Cuthbert (Europe, Middle East, Africa)
Ron Deibert, Professor and Director, the Citizen Lab at the University of Toronto's Munk School (Canada)
Domingo, Security Incident Handler, Access Now (Global)
Stephane Duguin, CEO, CyberPeace Institute (Global)
Zakir Durumeric, Assistant Professor of Computer Science, Stanford University; Chief Scientist, Censys (United States)
James Eaton-Lee, CISO, NetHope (Global)
Serge Egelman, University of California, Berkeley; Co-Founder and Chief Scientist, AppCensus (United States/Global)
Jen Ellis, Founder, NextJenSecurity (United Kingdom/Global)
Chris Evans, Chief Hacking Officer @ HackerOne; Founder @ Google Project Zero (United States)
Dra. Johanna Caterina Faliero, PhD; Professor, Faculty of Law, University of Buenos Aires; Professor, University of National Defence (Argentina/Global)
Dr. Ali Farooq, University of Strathclyde, United Kingdom (Global)
Victor Gevers, co-founder of the Dutch Institute for Vulnerability Disclosure (Netherlands)
Abir Ghattas (Global)
Ian Goldberg, Professor and Canada Research Chair in Privacy Enhancing Technologies, University of Waterloo (Canada)
Matthew D. Green, Associate Professor, Johns Hopkins University (United States)
Harry Grobbelaar, Chief Customer Officer, Intigriti (Global)
Juan Andrés Guerrero-Saade, Associate Vice President of Research, SentinelOne (United States/Global)
Mudit Gupta, Chief Information Security Officer, Polygon (Global)
Hamed Haddadi, Professor of Human-Centred Systems at Imperial College London; Chief Scientist at Brave Software (Global)
J. Alex Halderman, Professor of Computer Science & Engineering and Director of the Center for Computer Security & Society, University of Michigan (United States)
Joseph Lorenzo Hall, PhD, Distinguished Technologist, The Internet Society
Dr. Ryan Henry, Assistant Professor and Director of Masters of Information Security and Privacy Program, University of Calgary (Canada)
Thorsten Holz, Professor and Faculty, CISPA Helmholtz Center for Information Security, Germany (Global)
Joran Honig, Security Researcher (Global)
Wouter Honselaar, MSc student security; hosting engineer & volunteer, Dutch Institute for Vulnerability Disclosure (DIVD)(Netherlands)
Prof. Dr. Jaap-Henk Hoepman (Europe)
Christian “fukami” Horchert (Germany / Global)
Andrew 'bunnie' Huang, Researcher (Global)
Dr. Rodrigo Iglesias, Information Security, Lawyer (Argentina)
Hudson Jameson, Co-Founder - Security Alliance (SEAL)(Global)
Stijn Jans, CEO of Intigriti (Global)
Gerard Janssen, Dutch Institute for Vulnerability Disclosure (DIVD)(Netherlands)
JoyCfTw, Hacktivist (United States/Argentina/Global)
Doña Keating, President and CEO, Professional Options LLC (Global)
Olaf Kolkman, Principal, Internet Society (Global)
Federico Kirschbaum, Co-Founder & CEO of Faraday Security, Co-Founder of Ekoparty Security Conference (Argentina/Global)
Xavier Knol, Cybersecurity Analyst and Researcher (Global)
Micah Lee, Director of Information Security, The Intercept (United States)
Jan Los (Europe/Global)
Matthias Marx, Hacker (Global)
Keane Matthews, CISSP (United States)
René Mayrhofer, Full Professor and Head of Institute of Networks and Security, Johannes Kepler University Linz, Austria (Austria/Global)
Ron Mélotte (Netherlands)
Hans Meuris (Global)
Marten Mickos, CEO, HackerOne (United States)
Adam Molnar, Assistant Professor, Sociology and Legal Studies, University of Waterloo (Canada/Global)
Jeff Moss, Founder of the information security conferences DEF CON and Black Hat (United States)
Katie Moussouris, Founder and CEO of Luta Security; coauthor of ISO standards on vulnerability disclosure and handling processes (Global)
Alec Muffett, Security Researcher (United Kingdom)
Kurt Opsahl, Associate General Counsel for Cybersecurity and Civil Liberties Policy, Filecoin Foundation; President, Security Researcher Legal Defense Fund (Global)
Ivan "HacKan" Barrera Oro (Argentina)
Chris Palmer, Security Engineer (Global)
Yanna Papadodimitraki, University of Cambridge (United Kingdom/European Union/Global)
Sunoo Park, New York University (United States)
Mathias Payer, Associate Professor, École Polytechnique Fédérale de Lausanne (EPFL)(Global)
Giancarlo Pellegrino, Faculty, CISPA Helmholtz Center for Information Security, Germany (Global)
Fabio Pierazzi, King’s College London (Global)
Bart Preneel, full professor, University of Leuven, Belgium (Global)
Michiel Prins, Founder @ HackerOne (United States)
Joel Reardon, Professor of Computer Science, University of Calgary, Canada; Co-Founder of AppCensus (Global)
Alex Rice, Co-Founder & CTO, HackerOne (United States)
René Rehme, rehme.infosec (Germany)
Tyler Robinson, Offensive Security Researcher (United States)
Michael Roland, Security Researcher and Lecturer, Institute of Networks and Security, Johannes Kepler University Linz; Member, SIGFLAG - Verein zur (Austria/Europe/Global)
Christian Rossow, Professor and Faculty, CISPA Helmholtz Center for Information Security, Germany (Global)
Pilar Sáenz, Coordinator Digital Security and Privacy Lab, Fundación Karisma (Colombia)
Runa Sandvik, Founder, Granitt (United States/Global)
Koen Schagen (Netherlands)
Sebastian Schinzel, Professor at University of Applied Sciences Münster and Fraunhofer SIT (Germany)
Bruce Schneier, Fellow and Lecturer, Harvard Kennedy School (United States)
HFJ Schokkenbroek (hp197), IFCAT board member (Netherlands)
Javier Smaldone, Security Researcher (Argentina)
Guillermo Suarez-Tangil, Assistant Professor, IMDEA Networks Institute (Global)
Juan Tapiador, Universidad Carlos III de Madrid, Spain (Global)
Dr Daniel R. Thomas, University of Strathclyde, StrathCyber, Computer & Information Sciences (United Kingdom)
Cris Thomas (Space Rogue), IBM X-Force (United States/Global)
Carmela Troncoso, Assistant Professor, École Polytechnique Fédérale de Lausanne (EPFL) (Global)
Narseo Vallina-Rodriguez, Research Professor at IMDEA Networks/Co-founder AppCensus Inc (Global)
Jeroen van der Broek, IT Security Engineer (Netherlands)
Jeroen van der Ham-de Vos, Associate Professor, University of Twente, The Netherlands (Global)
Charl van der Walt, Head of Security Research, Orange Cyberdefense (a division of Orange Networks) (South Africa/France/Global)
Chris van 't Hof, Managing Director DIVD, Dutch Institute for Vulnerability Disclosure (Global)
Dimitri Verhoeven (Global)
Tarah Wheeler, CEO Red Queen Dynamics & Senior Fellow Global Cyber Policy, Council on Foreign Relations (United States)
Dominic White, Ethical Hacking Director, Orange Cyberdefense (a division of Orange Networks)(South Africa/Europe)
Eddy Willems, Security Evangelist (Global)
Christo Wilson, Associate Professor, Northeastern University (United States)
Robin Wilton, IT Consultant (Global)
Tom Wolters (Netherlands)
Mehdi Zerouali, Co-founder & Director, Sigma Prime (Australia/Global)

Organizational Signatories
Dutch Institute for Vulnerability Disclosure (DIVD)(Netherlands)
Fundación Via Libre (Argentina)
Good Faith Cybersecurity Researchers Coalition (European Union)
Access Now (Global)
Chaos Computer Club (CCC)(Europe)
HackerOne (Global)
Hacking Policy Council (United States)
HINAC (Hacking is not a Crime)(United States/Argentina/Global)
Intigriti (Global)
Jolo Secure (Latin America)
K+LAB, Digital security and privacy Lab, Fundación Karisma (Colombia)
Luta Security (Global)
OpenZeppelin (United States)
Professional Options LLC (Global)
Stichting International Festivals for Creative Application of Technology Foundation

Karen Gullo

Draft UN Cybercrime Treaty Could Make Security Research a Crime, Leading 124 Experts to Call on UN Delegates to Fix Flawed Provisions that Weaken Everyone’s Security

3 weeks ago

Security researchers’ work discovering and reporting vulnerabilities in software, firmware, networks, and devices protects people, businesses, and governments around the world from malware, theft of critical data, and other cyberattacks. The internet and the digital ecosystem are safer because of their work.

The UN Cybercrime Treaty, which is in the final stages of drafting in New York this week, risks criminalizing this vitally important work. This is appalling and wrong, and must be fixed.

One hundred and twenty-four prominent security researchers and cybersecurity organizations from around the world voiced their concern today about the draft and called on UN delegates to modify flawed language in the text that would hinder researchers’ efforts to enhance global security and prevent the actual criminal activity the treaty is meant to rein in.

Time is running out—the final negotiations over the treaty end Feb. 9. The talks are the culmination of two years of negotiations; EFF and its international partners have raised concerns over the treaty’s flaws since the beginning. If approved as is, the treaty will substantially impact criminal laws around the world and grant new expansive police powers for both domestic and international criminal investigations.

Experts who work globally to find and fix vulnerabilities before real criminals can exploit them said in a statement today that vague language and overbroad provisions in the draft increase the risk that researchers could face prosecution. The draft fails to protect the good faith work of security researchers who may bypass security measures and gain access to computer systems in identifying vulnerabilities, the letter says.

The draft threatens security researchers because it doesn’t specify that access to computer systems with no malicious intent to cause harm, steal, or infect with malware should not be subject to prosecution. If left unchanged, the treaty would be a major blow to cybersecurity around the world.

Specifically, security researchers seek changes to Article 6, which risks criminalizing essential activities, including accessing systems without prior authorization to identify vulnerabilities. The current text also includes the ambiguous term “without right” as a basis for establishing criminal liability for unauthorized access. Clarification of this vague language, as well as a requirement that unauthorized access be done with malicious intent, is needed to protect security research.

The signers also called out Article 28(4), which empowers States to force “any individual” with knowledge of computer systems to turn over any information necessary to conduct searches and seizures of computer systems. This dangerous paragraph must be removed and replaced with language specifying that custodians must only comply with lawful orders to the extent of their ability.

There are many other problems with the draft treaty—it lacks human rights safeguards, gives States powers to reach across borders to surveil and collect personal information of people in other States, and forces tech companies to collude with law enforcement in alleged cybercrime investigations.

EFF and its international partners have been and are pressing hard for human rights safeguards and other fixes to ensure that the fight against cybercrime does not require sacrificing fundamental rights. We stand with security researchers in demanding amendments to ensure the treaty is not used as a tool to threaten, intimidate, or prosecute them, software engineers, security teams, and developers.

For the statement:
https://www.eff.org/deeplinks/2024/02/protect-good-faith-security-research-globally-proposed-un-cybercrime-treaty

For more on the treaty:
https://ahc.derechosdigitales.org/en/

Karen Gullo

What is Proposition E and Why Should San Francisco Voters Oppose It?

3 weeks 4 days ago

If you live in San Francisco, there is an election on March 5, 2024, in which voters will decide a number of local ballot measures—including Proposition E. Proponents of Proposition E have raised over $1 million…but what does the measure actually do? This post will break down what the initiative actually does, why it is dangerous for San Franciscans, and why you should oppose it.

What Does Proposition E Do?

Proposition E is a “kitchen sink” approach to public safety that capitalizes on residents’ fear of crime in an attempt to gut common-sense democratic oversight of the San Francisco Police Department (SFPD). In addition to removing certain police oversight authority from the Police Commission and expanding the circumstances under which police may conduct high-speed vehicle chases, Proposition E would also amend existing laws passed in 2019 to protect San Franciscans from invasive, untested, or biased police technologies.

Currently, if police want to acquire a new technology, they have to go through a procedure known as CCOPS—Community Control Over Police Surveillance. This means that police need to explain why they need a new piece of technology and provide a detailed use policy to the democratically-elected Board of Supervisors, who then vote on it. The process also allows for public comment so people can voice their support for, concerns about, or opposition to the new technology. This process is in no way designed to universally deny police new technologies. Instead, it ensures that when police want new technology that may have significant impacts on communities, those voices have an opportunity to be heard and considered. San Francisco police have used this procedure to get new technological capabilities as recently as Fall 2022 in a way that stimulated discussion, garnered community involvement and opposition (including from EFF), and still passed.

Proposition E guts these common-sense protective measures designed to bring communities into the conversation about public safety. If Proposition E passes on March 5, then the SFPD can use any technology they want for a full year without publishing an official policy about how they’d use the technology or allowing community members to voice their concerns—or really allowing for any accountability or transparency at all.

Why is Proposition E Dangerous and Unnecessary?

Across the country, police often buy and deploy surveillance equipment without residents of their towns even knowing what police are using or how they’re using it. This means that dangerous technologies—technologies other cities have even banned—are being used without any transparency or accountability. San Franciscans advocated for and overwhelmingly supported a law that provides them with more knowledge of, and a voice in, what technologies the police use. Under the current law, if the SFPD wanted to use racist predictive policing algorithms that U.S. Senators are currently advising the Department of Justice to stop funding, or to buy up geolocation data harvested from people’s cell phones and sold on the advertising data broker market, it would have to let the public know and put it to a vote before the city’s democratically-elected governing body. Proposition E would gut any meaningful democratic check on police’s acquisition and use of surveillance technologies.

It’s not just that these technologies could potentially harm San Franciscans by, for instance, directing armed police at them due to reliance on a faulty algorithm or putting already-marginalized communities at further risk of overpolicing and surveillance—it’s also important to note that studies find that these technologies just don’t work. Police often look to technology as a silver bullet to fight crime, despite evidence suggesting otherwise. Oversight over what technology the SFPD uses doesn’t just allow for scrutiny of discriminatory and biased policing; it also introduces a much-needed dose of reality. If police want to spend hundreds of thousands of dollars a year on software that has a success rate of 0.6% at predicting crime, they should have to go through a public process before they fork over taxpayer dollars.

What Technology Would Proposition E Allow the Police to Use?

That’s the thing—we don’t know, and if Proposition E passes, we may never know. Today, if police decide to use a piece of surveillance technology, there is a process for sharing that information with the public. With Proposition E, that process won’t happen until the technology has been in use for a full year. And if police abandon a technology before the year is up, we may never find out what technology police tried out and how they used it. Even though we don’t know what technologies the SFPD is eyeing, we do know what technologies other police departments have been buying in cities around the country: AI-based “predictive policing” and social media scanning tools are just two examples. According to the City Attorney, Proposition E would even enable the SFPD to outfit surveillance tools such as drones and surveillance cameras with face recognition technology.

Why You Should Vote No on Proposition E

San Francisco, like many other cities, has its problems, but none of those problems will be solved by removing oversight over what technologies police spend our public money on and deploy in our neighborhoods—especially when so much police technology is known to be racially biased, invasive, or faulty. Voters should think about what San Francisco actually needs and how Proposition E is more likely to exacerbate the problems of police violence than it is to magically erase crime in the city. This is why we are urging a NO vote on Proposition E on the March 5 ballot.

Matthew Guariglia