Congress Amended KOSA, But It's Still A Censorship Bill

1 month 1 week ago

A key Senate committee voted to move forward one of the most dangerous bills we’ve seen in years: the Kids Online Safety Act (KOSA). EFF has opposed the Kids Online Safety Act, S. 1409, because it’s a danger to the rights of all users, both minors and adults. The bill requires all websites, apps, and online platforms to filter and block legal speech. It empowers state attorneys general, who are mostly elected politicians, to file lawsuits based on content they believe will be harmful to young people. 

These fundamental flaws remain in the bill, and EFF and many others continue to oppose it. We urge anyone who cares about free speech and privacy online to send a message to Congress voicing your opposition. 



Before the Senate Commerce Committee voted to move forward the bill on July 27, it incorporated a number of amendments. While none of them change the fundamental problems with KOSA, or our opposition to the bill, we analyze them here. 

The Bill’s Knowledge Standard Has Changed

The first change to the bill is that the knowledge standard has been tightened, so that websites and apps can only be held liable if they actually know there’s a young person using their service. The previous version of the bill regulated any online platform that was used by minors, or was “reasonably likely to be used” by a minor. 

The previous version applied to a huge swath of the internet, since the view of what sites are “reasonably likely to be used” by a minor would be up to attorneys general. Other than sites that took big steps, like requiring age verification, almost any site could be “reasonably likely” to be used by a minor. 

Requiring actual knowledge of minors is an improvement, but the protective effect is small. A site that was told, for instance, that a certain proportion of its users were minors—even if those minors were lying to get access—could be sued by the state. The site might be held liable even if there was one minor user they knew about, perhaps one they’d repeatedly kicked off. 

The bill still effectively regulates the entire internet that isn’t age-gated. KOSA is fundamentally a censorship bill, so we’re concerned about its effects on any website or service—whether they’re meant to serve solely adults, solely kids, or both. 

Pushing A Chronological Feed Won’t Help 

Another significant change to the bill is a longer amendment from Sen. John Thune (R-SD), who railed against “filter bubbles” during the markup hearing. Thune’s amendment requires larger platforms to provide an algorithm that doesn’t use any user data whatsoever. The amendment would prevent websites and apps from using even basic information, like what city a person lives in, to decide what kind of information to prioritize. 

The Thune amendment is meant to push users towards a chronological feed, which Sen. Thune called during the hearing a “normal chronological feed.” There’s nothing wrong with online information being presented chronologically for those who want it. But just as we wouldn’t let politicians rearrange a newspaper in a particular order, we shouldn’t let them rearrange blogs or other websites. It’s a heavy-handed move to stifle the editorial independence of web publishers.  

There’s also no evidence that chronological feeds make for better or healthier content consumption. A recently published major study on Facebook data specifically studied the effects of a chronological feed, and found that a chronological feed “increased the share of content from designated untrustworthy sources by more than two-thirds relative to the Algorithmic Feed.” 

KOSA Could Be Replaced With An Actually Good Bill On Targeted Ads

A small part of KOSA deals with targeted advertising. It would require disclosures about things like “why the minor is being targeted with a particular advertisement.” 

This part of the bill is actually a positive step—protecting users’ privacy, rather than imposing censorship on the content they can access. But as the only privacy-protective part of the bill, it’s pathetically small. 

At this point in the internet’s history, we need more than mild disclosure requirements and more studies about behavioral ads. They should be banned altogether. And there’s no reason to limit the ban to minors; behavioral ads are tracking the mouse clicks and browsing history of users of all ages. 

A bill that worked to protect internet users by limiting tracking and protecting privacy would be great. That’s not KOSA, which barely even gestures at privacy protections, while offering politicians and police a comprehensive suite of censorship options. 

Other KOSA Amendments Are Minor

  • The amendment marked Lummis 1 specifically exempts virtual private networks (VPNs). 
  • The Lummis 2 amendment slightly expands a required government study to look at effects on small businesses. 
  • The Cruz 1 amendment specifies that a wireless messaging exemption includes SMS and MMS messages. 
  • The Cruz 2 amendment changes the word “gender” to “sex.” 
  • The Lujan 1 amendment changes a part of the bill that allows “geolocation” data to be used in certain ways to only allow the use of city-level data. 
  • The Lujan 2 amendment expands the government study portion of the bill to include non-English language users. 

Overall, these small changes to a flawed bill don’t change the basic fact that KOSA is a censorship bill that will harm the rights of both adult and minor users. We oppose it, and urge you to contact your congressperson about it today. 


Tell Congress to oppose KOSA

Joe Mullin

California's DELETE Act Protects Us From Data Brokers

1 month 2 weeks ago

When it comes to the reckless trade of our personal information, data brokers are the problem. These entities collect and then sell personal information they’ve amassed about individuals with very little oversight. This includes very sensitive information such as buying habits, financial records, social media activity, or precise geolocation information. Scams, identity theft, and financial exploitation result from the collection and misuse of personal information.

Potential misuse of health data could lead to real harms in harassment, discrimination, and legal consequences for those seeking health services in California, including reproductive and gender affirming healthcare data. And if information is sold to local, state, or federal agencies, that puts our Fourth Amendment rights at risk.

That's why EFF is a proud supporter of S.B. 362, also known as the California Delete Act, authored by California State Senator Josh Becker and sponsored by Privacy Rights Clearinghouse and Californians for Consumer Privacy. It allows people to easily and efficiently make one request to delete their personal information held by all data brokers registered in California. It will improve everyone's privacy rights and make California's consumer privacy laws more user-friendly.

Californians have a right to request that companies delete information collected about them. But, logistically speaking, this is difficult. Because California's privacy laws require people to file requests with each individual company that may have their information, it can be an incredibly time-consuming and tedious process. Furthermore, because data brokers buy, sell, and exchange information with so many companies (and each other), it's very hard for anyone to know if a particular company has their information and how to make a deletion request.

S.B. 362 directs the California Privacy Protection Agency to create a deletion mechanism for data brokers that allows someone to make this request of every data broker with a single, verifiable consumer request. This gives us all a much-needed method to exercise our privacy rights. It also helps us gain better control over our data and makes it easier to mitigate the risks that the collection and sale of personal information create in our everyday lives.

We need this legislation to start holding these companies accountable.

The California Delete Act would also strengthen current California law that requires data brokers to register with the state. Specifically, S.B. 362 would require data brokers to report a broader set of information about what data they collect on consumers and strengthen enforcement mechanisms against data brokers who fail to comply with the reporting requirement. Both provisions do important work to shed light on this opaque ecosystem.

The bill is currently headed to the California Assembly Appropriations Committee. We call on California's legislature to pass this bill to empower all Californians to gain a better grasp on their own privacy.

Hayley Tsukayama

Announcing the Tor University Challenge

1 month 2 weeks ago

Tor is a valuable tool for browsing the web anonymously, but since it's powered by volunteers willing to share some bandwidth and a computer, it's always in need of additional help. That's why EFF is announcing the Tor University Challenge, a project asking universities to start running Tor relays on campus. Today, we're launching with support from 12 universities. With your help, we can add more universities to strengthen the Tor network and improve one of the best free privacy tools available today.

In 2011, we launched our first Tor Challenge, which resulted in 549 new relays. By 2014, after we launched our second Tor Challenge, we had counted 1,635 new relays. This time around, we're focusing on getting more Tor relays onto college campuses. Universities are especially well-suited for Tor relays because they often offer fast internet, have lots of technical expertise available (including professors, students, and IT teams), and value freedom of expression. Setting up a Tor relay on your college campus will help make Tor faster and better, because the more relays that exist, the better the experience of using Tor gets for everyone. 

What is Tor?

Tor is a network and software package that consists of two parts: a web browser you can download to browse the internet, and a volunteer network of computers that makes the Tor software work. Using Tor is as simple as downloading the Tor Browser (if you haven't used Tor before, give it a try; it’s available for Windows, Mac, and Linux). Browsing the web with Tor is a little slower than you might be used to, but it otherwise works exactly like any other web browser. The Tor Browser also gives you access to Tor onion sites—hidden websites that provide end-to-end encryption and anonymity—which help circumvent censorship. EFF's website, Certbot, and our Surveillance Self-Defense guides are all available as Tor onions.

The second part is the volunteer-run network of computers that anonymizes web traffic. Tor protects your identity by hiding the source and destination of your internet traffic, which helps prevent anyone from knowing who you are or what you're looking at. Tor does this by routing your web traffic through "relays," which, like the name implies, receive the traffic and pass it along to the next relay. Anyone can run a Tor relay on just about any computer, but because relays need a lot of bandwidth, it's not always easy (or possible) to do so. Universities often don't have the sorts of bandwidth limitations the rest of us may contend with, so they're a good fit for relays.

Tor sometimes gets a bad rap because, like any other tool, it can be used by criminals. But criminals have all sorts of other tools that do much of the same things as Tor, and there are a number of very real benefits to society for supporting the Tor Project. For example, Tor is a required component of SecureDrop, a tool used by dozens of news organizations like The Washington Post, ProPublica, and The New York Times to facilitate secure information sharing. Though it's not as well known as Tor, SecureDrop has been used in countless news stories, including The Intercept's reporting on violations of attorney-client privilege of prisoners. Tor is also a vital tool for censorship circumvention that we've seen used in Russia and in Iran, among other places. If you run into skeptics on campus, point to these types of examples to get the conversation moving in a productive direction. 

Get Started

Tor is a critical tool in the fight against online surveillance and government censorship, and with the help of your university, it can become even better than it is today.

If you're a professor, you're already in a great position to start a Tor relay, and can likely get started today. If you're a student, you'll likely need to gather together some faculty allies before getting it running, but we've got tips for doing so here. Establishing a Tor relay on campus is a learning experience for everyone involved, from professors to students, and a great way to find like-minded people to work on similar projects with in the future. A relay can run on just about any computer, and in some cases, requires almost no maintenance once you have the software installed and working. If you want to learn more about the technical details of operating a relay, the Tor Project has a number of guides worth checking out.
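
To give a sense of scale, a middle (non-exit) relay needs only a handful of configuration lines. Here's a rough sketch of what a relay's torrc might look like; the nickname and contact address are placeholders, and you should follow the Tor Project's current relay guide rather than treat this as a complete setup:

```
# /etc/tor/torrc — minimal sketch of a non-exit (middle) relay.
# Nickname and ContactInfo below are placeholders — use your own.
Nickname ExampleUniRelay
ContactInfo tor-admin@example.edu
ORPort 9001          # port other relays use to reach this one
ExitRelay 0          # middle relay only: never the last hop to the open web
SocksPort 0          # relay-only machine; no local client traffic
```

Running a non-exit relay like this one avoids the abuse complaints that exit relays can attract, which makes it an easier first conversation with campus IT.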

After your relay is up and running for a year, send us an email, and we'll send you a challenge coin in return. Head over to our Tor University Challenge site for more information about the relays, frequently asked questions, form letters for finding allies on campus, and more. 

Thorin Klosowski

EFF Launches the Tor University Challenge

1 month 2 weeks ago
Universities answering this call to defend private access to an uncensored web will help millions of people around the world while providing a vital learning experience.

SAN FRANCISCO—Electronic Frontier Foundation (EFF) on Tuesday launched the Tor University Challenge, a campaign urging higher education institutions to support free, anonymous speech by running a Tor network relay.     

Universities answering this call to defend private access to an uncensored web will receive prizes while helping millions of people around the world and providing students and faculty a vital learning experience. 

“Journalists, political and social activists, attorneys, businesspeople, and other users all over the world rely on Tor for unfettered, unmonitored access to knowledge and communications,” EFF Senior Staff Technologist Cooper Quintin said. “Anonymous speech always has been a pillar of democratic society, letting us discuss anything without fear of retribution. And facilitating this discussion can be a great educational opportunity for students and faculty alike.” 

Made up of volunteer-run relays, the Tor network allows human rights defenders and organizations, at-risk communities, and people experiencing online censorship or government surveillance to browse the unrestricted internet with as much privacy and anonymity as possible. A Tor relay is a computer that’s a part of the anonymization process; a Tor bridge is a relay that’s not publicly listed, in order to circumvent censorship in countries that block IP addresses of known relays.      

Currently, roughly 7,000 relays and 2,000 bridges make up the global network, run by volunteers who simply donate a spare computer, bandwidth, and time. Universities already volunteering Tor relays include the Massachusetts Institute of Technology, Georgetown University, Carnegie Mellon University, Technical University Berlin, University of Cambridge, and others.  

University-run relays provide students with hands-on cybersecurity experience in a real environment helping real people, while stimulating discussion about global policy, law and society, particularly regarding free speech issues. It can help build community between students and faculty, as well as advance research and recruitment. 

Universities are ideal sites for hosting Tor relays as they tend to have good network connectivity, lots of technical expertise to run relays—including professors, students, and IT teams—and generally value freedom of thought and expression. By running a Tor relay, universities can directly promote themselves as defenders of intellectual freedom and vanguards against censorship. 


Contact: Cooper Quintin, Senior Staff Technologist
Josh Richman

It's Summer Security Week at EFF

1 month 2 weeks ago

Let me be frank: it’s always security week at EFF! But this week is extra special. EFF will take flight to the Las Vegas hacker summer camp conferences—BSidesLV, Black Hat USA and DEF CON 31—to rally behind computer security researchers and tinkerers. Whether you're on the ground in Vegas or online, I hope you'll support the digital rights movement as an EFF member this week.

EFF’s activists, technologists, and lawyers fight so you can use technology on your own terms. Wrongheaded tech policies endanger your rights to communicate privately and securely, and to express yourself creatively on the web. But you’ll help protect these rights for everyone when you become an EFF supporter.


Support internet freedom with a Gold Level membership and (for a short time!) you can choose EFF’s DEF CON 31 t-shirt design. Eagle eyes will discover the path to an online puzzle there. And our team would like to thank @aaronsteimle, @0xCryptoK, @detective_6, and jabberw0nky of the Muppet Liberation Front for collaborating on this member t-shirt and contributing a stellar puzzle design. Donate today or even set up a small automatic monthly donation.

I call artist Hannah Diaz’s design The Unkindness, a term for a gathering of ravens. But it also refers to the cruelty of corporations and governments that impose surveillance and censorship on people. We can flock together and fight back through technology, policy, law, and yes, kindness. Help us build a better web when you join EFF.

Learn more about EFF's work at the Las Vegas summer security conferences! Check out this post for more information.

Aaron Jue

EFF at Las Vegas Hacker Summer Camp

1 month 2 weeks ago

The EFF team is pleased to return to the Las Vegas hacker summer camp conferences—BSidesLV, Black Hat USA and DEF CON 31—to rally behind computer security researchers and tinkerers. This entire week of events is a meaningful opportunity to reconnect with the infosec and hacker community, who are crucial to building privacy and safety on the web for everyone. Below we've rounded up all of EFF's scheduled talks and activities at the conferences.

As in past years, EFF staff attorneys will be present to help support speakers and attendees. If you have legal concerns regarding an upcoming talk or sensitive infosec research that you are conducting at any time, please email us: outline the basic issues, and we will do our best to connect you with the resources you need. Read more about EFF's work defending, offering legal counsel, and publicly advocating for technologists on our Coders' Rights Project page.

EFF staff members will be on hand in the expo areas of all three conferences. You may encounter us in the wild elsewhere, but we hope you stop by the EFF tables to talk to us about the latest in online rights, get on our action alert list, or donate to become an EFF member. We'll also have our limited-edition DEF CON 31 shirts available! These shirts have a puzzle incorporated into the design. Try your hand at cracking it!

EFF Staff Presentations

Ask the EFF Panel at Black Hat USA
EFF's Associate Director of Community Organizing Rory Mir; Staff Attorney Hannah Zhao; and Staff Attorney Mario Trujillo.
Thursday, August 10 from 11:20 AM - 12:00 PM | Mandalay Bay Convention Center, South Pacific I, Level 0 (North Hall)

UN Conventional Cybercrime: How a Bad Anti-Hacking Treaty is Becoming Law
EFF Policy Director for Global Privacy - Katitza Rodriguez & EFF Senior Staff Technologist - Bill Budington
Thursday, August 10 @ 11 AM | DEF CON 31 - War Stories @ Forum

The Hackers, The Lawyers, And The Defense Fund
EFF Surveillance Litigation Director - Hannah Zhao
Friday, August 11 @ 9:30 AM | DEF CON 31 - Track 3 Forum

Tracking the World's Dumbest Cyber-Mercenaries
EFF Senior Staff Technologist - Cooper Quintin
Friday, August 11 @ 2 PM | DEF CON 31 - War Stories @ Harrah's

Ask the EFF @ DEF CON 31
EFF Legal Director - Corynne McSherry; EFF Staff Attorney - Hannah Zhao; EFF Staff Attorney - Mario Trujillo; EFF Associate Director of Community Organizing - Rory Mir; EFF Senior Staff Technologist - Cooper Quintin
Friday, August 11 @ 8 PM | DEF CON 31 - Track 3 Forum

Abortion Access in the Age of Digital Surveillance
EFF's Daly Barnett, Corynne McSherry & India McKinney (with Kate Bertash, Digital Defense Fund)
Saturday, August 12 @ 4:30 PM | DEF CON 31 - Track 3 Forum

EFF Benefit Poker Tournament at DEF CON 31

We’re going all in on internet freedom. Take a break from hacking the Gibson to face off with your competition at the tables—and benefit EFF! Your buy-in is paired with a donation to support EFF’s mission to protect online privacy and free expression for all. Play for glory. Play for money. Play for the future of the web. You can even come cheer on our players at the Horseshoe Poker Room on Friday at 12:00.

Tech Trivia Contest at DEF CON 31

Join us for some tech trivia on Saturday, August 12 at 6 PM! EFF's team of technology experts have crafted challenging trivia about the fascinating, obscure, and trivial aspects of digital security, online rights, and internet culture. Competing teams will plumb the unfathomable depths of their knowledge, but only the champion hive mind will claim the First Place Tech Trivia Cup and EFF swag pack. The second and third place teams will also win great EFF gear.

Aaron Jue

Your Computer Should Say What You Tell It To Say

1 month 2 weeks ago
WEI? I’m a frayed knot

Two pieces of string walk into a bar.

The first piece of string asks for a drink.

The bartender says, “Get lost. We don’t serve pieces of string.”

The second string ties a knot in his middle and messes up his ends. Then he orders a drink. 

The bartender says, “Hey, you aren’t a piece of string, are you?”

The piece of string says, “Not me! I'm a frayed knot.”

Google is adding code to Chrome that will send tamper-proof information about your operating system and other software, and share it with websites. Google says this will reduce ad fraud. In practice, it reduces your control over your own computer, and is likely to mean that some websites will block access for everyone who's not using an "approved" operating system and browser. It also raises the barrier to entry for new browsers, something Google employees acknowledged in an unofficial explainer for the new feature, Web Environment Integrity (WEI).

If you’re scratching your head at this point, we don’t blame you. This is pretty abstract! We’ll unpack it a little below - and then we’ll explain why this is a bad idea that Google should not pursue.

But first…

Some background

When your web browser connects to a web server, it automatically sends a description of your device and browser, something like, "This session is coming from a Google Pixel 4, using Chrome version 116.0.5845.61." The server on the other end of that connection can request even more detailed information, like a list of which fonts are installed on your device, how big its screen is, and more. 
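
To make that concrete, here's a small, hedged sketch of how a server might read device hints out of a User-Agent string. The string and the parsing logic below are hypothetical illustrations, not any real server's code; actual UA formats vary by browser and version:

```python
# Hypothetical User-Agent string, like one a Pixel 4 running Chrome
# might volunteer (exact format varies by browser and version).
ua = ("Mozilla/5.0 (Linux; Android 10; Pixel 4) AppleWebKit/537.36 "
      "(KHTML, like Gecko) Chrome/116.0.5845.61 Mobile Safari/537.36")

def parse_device(user_agent: str) -> dict:
    """Naive sketch of how a server might pull device hints out of a UA string."""
    info = {}
    if "Android" in user_agent:
        info["os"] = "Android"
    if "Chrome/" in user_agent:
        # The version is the token immediately after "Chrome/".
        info["browser"] = "Chrome " + user_agent.split("Chrome/")[1].split(" ")[0]
    return info

print(parse_device(ua))  # {'os': 'Android', 'browser': 'Chrome 116.0.5845.61'}
```

The key point is that all of this is just text the client chose to send; nothing in the protocol makes it true.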

This can be good. The web server that receives this information can tailor its offerings to you. That server can make sure it only sends you file formats your device understands, at a resolution that makes sense for your screen, laid out in a way that works well for you.

But there are also downsides to this. Many sites use "browser fingerprinting" - a kind of tracking that relies on your browser's unique combination of characteristics - to nonconsensually identify users who reject cookies and other forms of surveillance. Some sites make inferences about you from your browser and device in order to determine whether they can charge you more, or serve you bad or deceptive offers.
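
The core trick of fingerprinting is simple: collect enough individually innocuous attributes and their combination becomes a stable identifier. Here's a minimal sketch under assumed, hypothetical attribute values; real fingerprinting scripts gather far more (canvas rendering, audio stack quirks, installed plug-ins):

```python
import hashlib

# Hypothetical attributes a fingerprinting script might collect.
attributes = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) Chrome/116.0",
    "screen": "2560x1440",
    "timezone": "America/Los_Angeles",
    "fonts": "Arial,DejaVu Sans,Noto Color Emoji",
}

# The combination is hashed into a stable identifier that follows the
# browser around even when cookies are blocked or cleared.
fingerprint = hashlib.sha256(
    "|".join(f"{k}={v}" for k, v in sorted(attributes.items())).encode()
).hexdigest()
print(fingerprint[:16])
```

Because the hash is deterministic, the same browser configuration yields the same identifier on every site that runs the script.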

Thankfully, the information your browser sends to websites about itself and your device is strictly voluntary. Your browser can send accurate information about you, but it doesn't have to. There are lots of plug-ins, privacy tools and esoteric preferences that you can use to send information of your choosing to sites that you don't trust. 

These tools don't just let you refuse to describe your computer to nosy servers across the internet. After all, a service that has so little regard for you that it would use your configuration data to inflict harms on you might very well refuse to serve you at all, as a means of coercing you into giving up the details of your device and software.

Instead, privacy and anti-tracking tools send plausible, wrong information about your device. That way, services can't discriminate against you for choosing your own integrity over their business models.
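
Sending plausible, wrong information is trivially easy precisely because the protocol takes the client's word for it. A minimal sketch, using Python's standard library and a made-up decoy string (the URL and User-Agent here are placeholders):

```python
import urllib.request

# A hypothetical decoy User-Agent: the server sees a generic Windows
# Firefox browser no matter what you actually run.
DECOY_UA = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) "
            "Gecko/20100101 Firefox/115.0")

req = urllib.request.Request("https://example.com/")
req.add_header("User-Agent", DECOY_UA)

# Nothing in HTTP verifies this header; the server simply has to
# take the client's word for it. (urllib stores header names with
# only the first letter capitalized, hence "User-agent" here.)
print(req.get_header("User-agent"))
```

Browser extensions and privacy tools do essentially this, at scale, for every identifying header and JavaScript-visible property.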

That's where remote attestation comes in.

Secure computing and remote attestation

Most modern computers, tablets and phones ship from the factory with some kind of "secure computing" capability. 

Secure computing is designed to be a system for monitoring your computer that you can't modify or reconfigure. Originally, secure computing relied on a second processor - a "Trusted Platform Module" or TPM - to monitor the parts of your computer you directly interact with. These days, many devices use a "secure enclave" - a hardened subsystem that is carefully designed to ensure that it can only be changed with the manufacturer’s permission.

These security systems have lots of uses. When you start your device, they can watch the boot-up process and check each phase of it to ensure that you're running the manufacturer's unaltered code, and not a version that's been poisoned by malicious software. That's great if you want to run the manufacturer's code, but the same process can be used to stop you from intentionally running different code, say, a free/open source operating system, or a version of the manufacturer's software that has been altered to disable undesirable features (like surveillance) and/or enable desirable ones (like the ability to install software from outside the manufacturer's app store).

Beyond controlling the code that runs on your device, these security systems can also provide information about your hardware and software to other people over the internet. Secure enclaves and TPMs ship with cryptographic "signing keys." They can gather information about your computer - its operating system version, extensions, software, and low-level code like bootloaders - and cryptographically sign all that information in an "attestation."

These attestations change the balance of power when it comes to networked communications. When a remote server wants to know what kind of device you're running and how it's configured, that server no longer has to take your word for it. It can require an attestation.

Assuming you haven't figured out how to bypass the security built into your device's secure enclave or TPM, that attestation is a highly reliable indicator of how your gadget is set up. 
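
The shape of the scheme can be sketched in a few lines. This is a conceptual stand-in only: real attestation uses an asymmetric key locked inside the TPM or enclave, signing vendor-defined structures; the shared-secret HMAC below just keeps the sketch self-contained and runnable:

```python
import hashlib
import hmac
import json

# Stand-in for the signing key provisioned into a TPM/secure enclave
# at the factory. Crucially, the device owner cannot read or replace it.
ENCLAVE_KEY = b"factory-provisioned-secret"

def attest(device_state: dict) -> tuple[bytes, str]:
    """Produce a signed 'attestation' over a description of the device."""
    payload = json.dumps(device_state, sort_keys=True).encode()
    sig = hmac.new(ENCLAVE_KEY, payload, hashlib.sha256).hexdigest()
    return payload, sig

def verify(payload: bytes, sig: str) -> bool:
    """What a remote server does: recompute and compare the signature."""
    expected = hmac.new(ENCLAVE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

payload, sig = attest({"os": "Android 13", "browser": "Chrome 116",
                       "bootloader": "locked"})
assert verify(payload, sig)                      # genuine report passes
assert not verify(b'{"os": "CustomROM"}', sig)   # altered report fails
```

Because the key never leaves the hardware, you can't forge a report that says "unmodified device" when yours isn't - which is exactly the property that removes your ability to tell the server something else.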

What's more, altering your device's TPM or secure enclave is a legally fraught business. Laws like Section 1201 of the Digital Millennium Copyright Act as well as patents and copyrights create serious civil and criminal jeopardy for technologists who investigate these technologies. That danger gets substantially worse when the technologist publishes findings about how to disable or bypass these secure features. And if a technologist dares to distribute tools to effect that bypass, they need to reckon with serious criminal and civil legal risks, including multi-year prison sentences.

WEI? No way!

This is where the Google proposal comes in. WEI is a technical proposal to let servers request remote attestations from devices, with those requests being relayed to the device's secure enclave or TPM, which will respond with a cryptographically signed, highly reliable description of your device. You can choose not to send this to the remote server, but you lose the ability to send an altered or randomized description of your device and its software if you think that's best for you.

In their proposal, the Google engineers claim several benefits of such a scheme. But, despite their valiant attempts to cast these benefits as accruing to device owners, these are really designed to benefit the owners of commercial services; the benefit to users comes from the assumption that commercial operators will use the additional profits from remote attestation to make their services better for their users.

For example, the authors say that remote attestations will allow site operators to distinguish between real internet users who are manually operating a browser, and bots who are autopiloting their way through the service. This is said to be a way of reducing ad-fraud, which will increase revenues to publishers, who may plow those additional profits into producing better content. 

They also claim that attestation can foil “machine-in-the-middle” attacks, where a user is presented with a fake website into which they enter their login information, including one-time passwords generated by a two-factor authentication (2FA) system, which the attacker automatically enters into the real service’s login screen. 

They claim that gamers could use remote attestation to make sure the other gamers they’re playing against are running unmodified versions of the game, and not running cheats that give them an advantage over their competitors.

They claim that giving website operators the power to detect and block browser automation tools will let them block fraud, such as posting fake reviews or mass-creating bot accounts.

There’s arguably some truth to all of these claims. That’s not unusual: in matters of security, there’s often ways in which indiscriminate invasions of privacy and compromises of individual autonomy would blunt some real problems. 

Putting handcuffs on every shopper who enters a store would doubtless reduce shoplifting, and stores with less shoplifting might lower their prices, benefitting all of their customers. But ultimately, shoplifting is the store’s problem, not the shoppers’, and it’s not fair for the store to make everyone else bear the cost of resolving its difficulties.

WEI helps websites block disfavored browsers

One section of Google’s document acknowledges that websites will use WEI to lock out browsers and operating systems that they dislike, or that fail to implement WEI to the website’s satisfaction. Google tentatively suggests (“we are evaluating”) a workaround: even once Chrome implements the new technology, it would refuse to send WEI information from a “small percentage” of computers that would otherwise send it. In theory, any website that refuses visits from non-WEI browsers would wind up also blocking this “small percentage” of Chrome users, who would complain so vociferously that the website would have to roll back their decision and allow everyone in, WEI or not.

The problem is, there are lots of websites that would really, really like the power to dictate what browser and operating system people can use. Think “this website works best in Internet Explorer 6.0 on Windows XP.” Many websites will consider that “small percentage” of users an acceptable price to pay, or simply instruct users to reset their browser data until a roll of the dice enables WEI for that site.
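
A quick back-of-the-envelope simulation shows why the holdback is so cheap to route around. Assuming a hypothetical 5% holdback fraction (Google's proposal names no number), a blocked user who follows a site's "reset your browser data and try again" instruction re-rolls the dice each time, and on average needs well under one reset to get through:

```python
import random

HOLDBACK = 0.05  # hypothetical "small percentage" of clients with WEI disabled

def resets_until_attested(rng: random.Random) -> int:
    """Count re-rolls until a client lands outside the holdback group."""
    resets = 0
    while rng.random() < HOLDBACK:  # still in the holdback group
        resets += 1                 # "reset your browser data and try again"
    return resets

rng = random.Random(0)
trials = [resets_until_attested(rng) for _ in range(100_000)]
print(sum(trials) / len(trials))  # mean ≈ HOLDBACK / (1 - HOLDBACK) ≈ 0.05
```

In other words, even a generous holdback only costs an exclusionary site a one-line error message, not its WEI gate.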

Also, Google has a conflict of interest in choosing the “small percentage.” Setting it very small would benefit Google’s ad fraud department by authenticating more ad clicks, allowing Google to sell those ads at a higher price. Setting it high makes it harder for websites to implement exclusionary behavior, but doesn’t directly benefit Google at all. It only makes it easier to build competing browsers. So even if Google chooses to implement this workaround, their incentives are to configure it as too small to protect the open web.

You are the boss of your computer

Your computer belongs to you. You are the boss of it. It should do what you tell it to. 

We live in a wildly imperfect world. Laws that prevent you from reverse-engineering and reconfiguring your computer are bad enough, but when you combine that with a monopolized internet of “five giant websites filled with screenshots of text from the other four,” things can get really bad.

A handful of companies have established chokepoints between buyers and sellers, performers and audiences, workers and employers, as well as families and communities. When those companies refuse to deal with you, your digital life grinds to a halt. 

The web is the last major open platform left on the internet - the last platform where anyone can make a browser or a website and participate, without having to ask permission or meet someone else’s specifications.

You are the boss of your computer. If a website sets up a virtual checkpoint that says, “only approved technology beyond this point,” you should have the right to tell it, “I’m no piece of string, I’m a frayed knot.” That is, you should be able to tell a site what it wants to hear, even if the site would refuse to serve you if it knew the truth about you. 

To their credit, the proposers of WEI state that they would like for WEI to be used solely for benign purposes. They explicitly decry the use of WEI to block browsers, or to exclude users for wanting to keep their private info private.

But computer scientists don't get to decide how a technology gets used. Adding attestation to the web carries the completely foreseeable risk that companies will use it to attack users' right to configure their devices to suit their needs, even when that conflicts with tech companies' commercial priorities. 

WEI shouldn't be made. If it's made, it shouldn't be used. 

So what?

So what should we do about WEI and other remote attestation technologies?

Let's start with what we shouldn't do. We shouldn't ban remote attestation. Code is speech and everyone should be free to study, understand, and produce remote attestation tools.

These tools might have a place within distributed systems - for example, voting machine vendors might use remote attestation to verify the configuration of their devices in the field. Or at-risk human rights workers might send remote attestations to trusted technologists to help determine whether their devices have been compromised by state-sponsored malware.

But these tools should not be added to the web. Remote attestations have no place on open platforms. You are the boss of your computer, and you should have the final say over what it tells other people about your computer and its software. 

Companies' problems are not as important as their users' autonomy

We sympathize with businesses whose revenues might be impacted by ad fraud, game companies that struggle with cheaters, and services that struggle with bots. But addressing these problems can’t come before the right of technology users to choose how their computers work, or what those computers tell others about them, because the right to control one’s own devices is a building block of all civil rights in the digital world. 

An open web delivers more benefit than harm. Letting giant, monopolistic corporations overrule our choices about which technology we want to use, and how we want to use it, is a recipe for solving those companies' problems, but not their users'.

Cory Doctorow

EFF to 9th Circuit: App Stores Shouldn’t Be Liable for Processing Payments for User Content

1 month 2 weeks ago

EFF filed a brief this week in the U.S. Court of Appeals for the Ninth Circuit arguing that app stores should not be liable for user speech just because they recommend that speech or process payments for those users. Those stores should be protected by Section 230, a law that protects Americans’ freedom of expression online by protecting the intermediaries we all rely on. Absent Section 230 immunity in these contexts, the platforms would be forced to censor user speech to mitigate their legal exposure.

The case is actually three consolidated cases where the plaintiffs sued the leading app stores: Google Play, Apple’s App Store, and Facebook. The plaintiffs’ claims relate to the app stores offering “social casino” apps, where users can buy virtual gambling chips with real money but can’t ever cash out any chips they win. The plaintiffs argue that these apps amount to illegal gambling. The app stores not only offer and promote these social casino apps, they also facilitate the in-app purchases (also called microtransactions) for the virtual gambling chips.

At issue on appeal is the part of Section 230 that provides immunity to internet websites, apps, and services when they are sued for user-generated content. Section 230 is the foundational internet law that has, since 1996, created legal breathing room for online intermediaries (and their users) to host or share third-party content. Online speech is largely mediated by these private companies, allowing all of us to speak online, access information, and engage in commerce, without requiring that we have loads of money or technical skills.

In this case, the plaintiffs are arguing that Section 230 should not apply to the app stores for promoting or recommending the social casino apps, nor for facilitating the in-app purchases for virtual gambling chips. Both the apps and the chips are types of third-party content.

The district court rightly ruled that Section 230 does apply to the app stores’ promoting or recommending the social casino apps within their platforms. In our brief we urged the Ninth Circuit to affirm this holding. This case gives the court another bite at the apple to hold that Section 230 applies to online intermediaries that recommend content created by others, after its opinion in Gonzalez v. Google was vacated by the U.S. Supreme Court earlier this year.

If platforms lost Section 230 immunity for recommending user-generated content, they would cease to offer recommendations, harming users’ ability to find the content they want. Or the platforms would censor any third-party content that might pose a legal risk should the content be swept up in the platforms’ recommendation algorithms, harming user speech in the process—both the ability to share and to access content.  

However, the district court erred when it ruled that the app stores do not have Section 230 immunity for facilitating the purchase of virtual gambling chips within the social casino apps. In our brief we urged the Ninth Circuit to reverse the district court on this issue. We argued that a rule that exposes online intermediaries to potential liability for facilitating a financial transaction related to unlawful user-generated content would have huge implications beyond the app stores.

The plaintiffs argue that the app stores could preserve their Section 230 immunity by simply refusing to process in-app purchases. But banning the easiest purchasing method would degrade the user experience in online stores—and not just in the three large stores sued here. The plaintiffs’ position fails to recognize that other platforms don’t have such a choice. Etsy, for example, facilitates purchases of virtual art, while Patreon enables artists to be supported by “membership” fees. If platforms like these were to lose Section 230 immunity and thereby be exposed to potential liability simply because they process payments for user-generated content, their entire business models would be threatened, ultimately harming users’ ability to share and access online speech.

Sophia Cope

The Impending Privacy Threat of Self-Driving Cars

1 month 2 weeks ago

In just a few years, fully self-driving cars have gone from science fiction to an everyday reality for people in San Francisco, and other places in the U.S. are also testing the technology. With innovation often come unintended consequences—one of which is the massive collection of data required for an autonomous vehicle to function. The sheer amount of visual and other information collected by a fleet of cars traveling down public streets raises the threat that people’s movements could be tracked, aggregated, and retained by companies, law enforcement, or bad actors—including vendor employees. This mass of information poses a potential threat to the civil liberties and privacy of pedestrians, commuters, and anyone else who relies on public roads and walkways in cities.

People’s aggregate movements—their commutes, visits to friends or loved ones, and trips to the doctor’s office or an attorney—could be compiled over time by a fleet of driverless vehicles that pedestrians don’t suspect can be deputized by police.

Autonomous vehicles rely on more than a dozen cameras and sensors situated around the car in order to detect other vehicles, traffic signs, obstructions, and pedestrians. Because the most visible autonomous cars are operated by private companies, there is a lot that we do not know about how this footage is stored, secured, and accessed. It is unclear, for instance, how detailed the footage is of pedestrians on the street or whether that footage is run through any image recognition. What capabilities do these vehicles have to collect audio? How long is this footage stored? Who has access to it? What protections are in place to keep the footage private and safe? How do these companies comply with local and state-wide privacy laws like the California Consumer Privacy Act?

Another major line of questioning is the relationship between autonomous vehicles and law enforcement agencies. Bloomberg found at least nine warrants served to a self-driving car company in both San Francisco and Maricopa County, Arizona. According to a training document obtained by Vice in 2022, the San Francisco Police Department wrote: “Autonomous vehicles are recording their surroundings continuously and have potential to help with investigative leads...investigations has already done this several times.”

It is imperative that, as more self-driving cars occupy our city streets and collect vast quantities of data, we have strong privacy laws that address both the personal data that the cars process and police access to that data. We also need a better understanding of how much footage police request access to and when, if ever, companies that operate autonomous vehicles will push back against overly broad requests. It is also essential that we learn whether police are given historic footage or real-time live access to peer through the cameras on the vehicles.

In the coming years, cities and regulators will face difficult choices about how autonomous vehicles can safely operate. It is imperative that, in addition to pedestrian and driver safety, regulators consider the civil liberties implications of the tremendous amount of data and footage collected by these self-driving cars.

Matthew Guariglia

Celebrating Ten Years of Encrypting the Web with Let’s Encrypt

1 month 2 weeks ago

Ten years ago, the web was a very different place. Most websites didn’t use HTTPS to protect your data. As a result, snoops could read emails or even take over accounts by stealing cookies. But a group of determined researchers and technologists from EFF and the University of Michigan were dreaming of a better world: one where every web page you visited was protected from spying and interference. Meanwhile, another group at Mozilla was working on the same dream. Those dreams led to the creation of Let’s Encrypt and tools like EFF’s Certbot, which simplify protecting websites and make browsing the web safer for everyone. 

There was one big obstacle: to deploy HTTPS and protect a website, the people running that website needed to buy and install a certificate from a certificate authority. Price was a big barrier to getting more websites on HTTPS, but the complexity of installing certificates was an even bigger one.  

In 2013, the Internet Security Research Group (ISRG) was founded, which would soon become the home of Let’s Encrypt, a certificate authority founded to help encrypt the Web. Let’s Encrypt was radical in that it provided certificates for free to anyone with a website. Let’s Encrypt also introduced a way to automate away the risk and drudgery of manually issuing and installing certificates. With the new ACME protocol, anyone with a website could run software (like EFF’s Certbot) that combined the steps of getting a certificate and correctly installing it. 

In the time since, Let’s Encrypt and Certbot have been a huge success, with over 250 million active certificates protecting hundreds of millions of websites.

This is a huge benefit to everyone’s online security and privacy. When you visit a website that uses HTTPS, your data is protected by encryption in transit, so nobody but you and the website operator gets to see it. That also prevents snoops from making a copy of your login cookies and taking over accounts.

The most important measure of Let’s Encrypt’s and Certbot’s successes is how much of people’s daily web browsing uses HTTPS. According to Firefox data, 78% of pages loaded use HTTPS. That’s tremendously improved from 27% in 2013 when Let’s Encrypt was founded. There’s still a lot of work to be done to get to 100%. We hope you’ll join EFF and Let’s Encrypt in celebrating the successes of ten years encrypting the web, and the anticipation of future growth and safety online. 

Jacob Hoffman-Andrews

Facebook Apparently Will Ask for Consent Before Showing Behavioral Ads to Some Users

1 month 2 weeks ago

For many years now, EFF has argued that pervasive online behavioral surveillance, which powers the exploitative data broker industry as well as some of the largest online tech companies, should be banned. Companies should voluntarily make these changes to benefit their users, but EFF also strongly supports legislation that would require businesses to get consumers’ opt-in consent before collecting and processing this private behavioral data. Such legislation has stalled in the U.S. However, years after the General Data Protection Regulation (GDPR) became law in the European Union, Meta (Facebook)—one of the largest collectors of behavioral data in the world—has announced what could be a major step toward ending its behavioral advertising without opt-in consent.

This is, of course, not Meta’s choice. They sidestepped the GDPR using Terms of Service trickery for as long as they could. Later, Meta bypassed legal constraints by arguing that the personalization of content and advertising was necessary to provide an agreed-upon service to users. When this became untenable, they circumvented the consent requirement by asserting that the company had a legitimate interest in showing targeted ads.


But as they write in today’s announcement, recent court interpretations of the GDPR, as well as the incoming Digital Markets Act (DMA), have forced their hand. 

Implementation Matters 

Meta’s announcement states that in the EU, the European Economic Area, and Switzerland, the company will “change the legal basis that [it uses] to process certain data for behavioural advertising… from ‘Legitimate Interests’ to ‘Consent’.” In practice, it’s not clear what this means yet. The company says that “advertisers will still be able to run personalised advertising campaigns to reach potential customers,” and it must consult with the regulators on implementation. Perhaps Meta will say that it can keep doing what it has been doing on grounds of an implausible claim that users have already consented. A more straightforward interpretation of the law and court rulings would require the company to turn OFF behavioral data collection for affected users, and only turn it back on if users have been given clear consent options, and make an informed and voluntary choice to have their data collected. 

While we welcome this shift, the company deserves few accolades. Meta fought these laws. Now that it’s lost that battle, it is only making this change for its European users because it otherwise would likely face significant new fines (on top of a $1.3 billion fine already levied against it for another GDPR violation). 

Opt-in consent to collect, retain, disclose, or use a person’s data is at the core of the GDPR. It’s good to see the law having a (potentially) significant impact, even if it’s been seven years since it passed. Given how long it took for the GDPR to have this impact, lawmakers in the rest of the world must act swiftly to pass their own comprehensive consumer privacy legislation.

Meta says it will need time to discuss these changes with regulators, and it will need three months or longer to let users choose whether to allow the company to use behavioral ads. Until we know how the company plans to ask for that consent, and how it will interpret those answers, we should remain cautious about declaring victory.

Jason Kelley

UN Cybercrime Convention Negotiations Enter Final Phase With Troubling Surveillance Powers Still on the Table

1 month 3 weeks ago

This is Part II in EFF’s ongoing series about the proposed UN Cybercrime Convention. Read Part I for a quick snapshot of the ins and outs of the zero draft; Part III for a deep dive on Chapter V regarding international cooperation (the historical context, the zero draft’s approach, the scope of cooperation, and the protection of personal data); and Part IV, which deals with the criminalization of security research.

As one of the last negotiating sessions to finalize the UN Cybercrime Convention approaches, it’s important to remember that the outcome and implications of the international talks go well beyond the UN meeting rooms in Vienna and New York. Representatives from over 140 countries around the globe with widely divergent law enforcement practices, including Iran, Russia, Saudi Arabia, China, Brazil, Chile, Switzerland, New Zealand, Kenya, Germany, Canada, the U.S., Peru, and Uruguay, have met over the last year to push their positions on what the draft convention should say about cross-border police powers, access to private data, judicial oversight of prosecutorial practices, and other thorny issues.

As we noted in Part I of this post about the zero draft of the draft convention now on the table, the final text will result in the rewriting of criminal and surveillance laws around the world, as Member States work into their legal frameworks the agreed upon requirements, authorizations, and protections. Millions of people, including those often in the crosshairs of governments for defending human rights and advocating for free expression, will be affected. That’s why we and our international allies have been fighting for users to ensure the draft convention includes robust human rights protections.

Going into the sixth negotiating session, which begins August 21 in New York, the outcome of the talks remains uncertain. A variety of issues are still unresolved, and the finalization of the intricate text faces approaching deadlines. The foundational principle of the negotiations—“nothing is agreed until everything is agreed upon”—underscores the complexity and delicate nature of these discussions. Every element of the draft convention is interrelated, and the resolution of one aspect hinges on the consonance of all other areas of the text.

In Part I, we shared our initial takeaways about the zero draft—some bad provisions were dropped, but ambiguous and overly broad spying powers to investigate any crime, potentially including speech-related crimes, remain. In Part II we’ll delve deeper into one of the convention’s most concerning provisions: domestic surveillance powers.

These provisions endow governments with extensive surveillance powers but only offer weak checks and balances to prevent potential law enforcement overreach. States could misuse such powers by misrepresenting protected speech as cybercrime and exploiting the broad scope of these spying powers beyond their initial purpose. While we successfully advocated for the exclusion of many of the most problematic non-cybercrimes from the draft convention's criminalization section, the draft still permits law enforcement to collect and share data for the investigation of any crime, including content-related crimes, rather than limiting these powers to core cybercrimes. The existing human rights obligations under Article 5, although a positive inclusion, are not sufficiently robust. Combined with the draft’s inherent ambiguity, this could lead to the abusive or disproportionate misapplication of these domestic surveillance powers.

Below, we’ll talk about the domestic spying chapter of the draft convention:

Criminal Procedural Measures (Chapter IV, Article 23)

Article 23 of this chapter expands the scope of the surveillance powers chapter in concerning ways. It describes procedures for dealing not just with specified cybercrimes (those in Articles 6-16), but also for the collection of electronic evidence related to any type of crime, regardless of its severity or whether it is connected to a computer system. This expansion means that the domestic spying powers can be used to investigate any crime, from cybercrimes like hacking to traditional non-cybercrime offenses like drug trafficking—even speech crimes in some jurisdictions, such as insulting a monarch—as long as there's digital evidence involved.

Moreover, Article 23 doesn’t clearly stipulate that the powers established should be used only for specific and targeted criminal investigations or proceedings; though the draft convention’s wording doesn’t explicitly compel service providers to indiscriminately retain data for mass surveillance or fishing expeditions, it does not clearly prevent it. 

New Domestic Spying Powers

The draft chapter on criminal procedural measures introduces six domestic spying powers (expedited preservation of stored data, expedited preservation and partial disclosure of traffic data, production order, search and seizure of stored data, real-time collection of traffic data, and interception of content data) with far-reaching implications for human rights. These powers, if misused or applied overly broadly, hold the potential for serious intrusions into people’s private lives. Personal data can reveal a person’s detailed private information, such as contacts, browsing history, location, device details, and patterns of behavior. For instance, a series of web searches, visits, and phone calls can reveal someone's medical condition. Access to personal data represents a significant interference with privacy rights and must be handled with necessary safeguards, including prior judicial authorization, transparency, notification, remedies, time limits, and oversight.

This is why States need to ensure robust, detailed safeguards in the draft convention. EFF and more than 400 NGOs have led the way, introducing in 2014 the Necessary and Proportionate Principles, a set of guidelines that serve as a blueprint on how to apply human rights law to communications surveillance. In court briefings and other material, we’ve discussed the sensitivity of communications data and the application of these principles, including metadata and subscriber data. But, in its current form, the draft convention’s human rights safeguards are simply not robust enough to protect against the misuse of personal data. Several critical requirements are noticeably absent.

Human Rights Safeguards (Articles 5 and 24)

The draft convention has two articles on human rights: a general provision, under Article 5, that applies to the whole draft convention, and Article 24, which describes the conditions and safeguards applied to the new domestic surveillance powers. We believe that the current version of Article 24 fails to provide the robust safeguards needed to protect human rights and to curb law enforcement overreach, and should be revised. 

For instance, it should require prior approval by a judge, set time limits, and give targets a remedy if they are wrongfully spied on. It should also require authorities to explain specifically what facts justify intercepting particular people’s communications, and why. And it should require governments to tell the public how often these powers were used. We’ve been fighting for years around the world for mandatory safeguards like these, for example through amicus briefs in court cases (such as our joint amicus with ARTICLE19 and Privacy International) and through the Necessary and Proportionate Principles.

Article 24 also only mentions the principle of proportionality, omitting other equally important principles, such as legality and necessity. It leaves it up to States to decide what kind of oversight is “appropriate in view of the nature of” each spying power, potentially allowing States to claim that some powers don’t require significant limitations or oversight. There are no specific minimum safeguards or minimum rights for targets of surveillance, not even for the most intrusive surveillance provisions authorizing real-time collection of data or interception of content. 

Meanwhile, Article 5 says the “implementation” of their obligations under the new draft convention shall be “consistent with [states’] obligations under international human rights law.” While that sounds good, it can also be taken to apply only to human rights treaties that a given state has already ratified, providing a big loophole for states like China, Bhutan, Brunei, Holy See, Saudi Arabia, Oman, Palau, Niue, Myanmar, Malaysia, Kiribati, Singapore, South Sudan, Tonga, United Arab Emirates, and Cuba that have failed to sign or ratify the main human rights rules such as the International Covenant on Civil and Political Rights (ICCPR) that other states have ratified. 

In the face of opposition from some states to the safeguards in Articles 5 and 24, and even proposals to delete Article 5, it's critical to consider what's really at stake. Importantly, these Articles are not duplicative nor overarching, but apply strictly to the provisions within the convention itself, offering a crucial scope to these powers. The integration of human rights into the draft convention isn't a surrender of sovereignty but an acknowledgment of states’ shared responsibility to uphold our common human dignity. These rights, universal in nature, set the baseline standards for all to live free from injustice and persecution. It's not just about semantics or legal technicalities: the inclusion of these Articles is pivotal to ensure that the draft UN Cybercrime Convention, throughout all its chapters, embodies states’ commitments to protect and uphold human rights.

In prior sessions, some states argued for the elimination of Article 5 or the consolidation of protections into a single article, but we fervently believe that reinforcing Article 5 is paramount. Most alarmingly, unlike the Consolidated Negotiating Document (CND)—an earlier draft with all states’ positions—Article 24 of the zero draft no longer applies to the international cooperation chapter. This issue is deeply concerning as it opens the door to "jurisdictional exploitation," where a state could exploit these varying domestic safeguards standards to conduct surveillance activities in a less safeguarded jurisdiction that would be considered illegal on its own. By bolstering the universal safeguards standards as proposed in Articles 5 and 24, such potential abuses could be effectively mitigated.

The current version of the draft convention, lacking robust safeguards, risks enabling discretionary or prejudiced use of surveillance powers. Our new joint submission with Privacy International for the upcoming sixth negotiation session has proposed explicit safeguards (in our updated version of Article 24) wherever new powers are granted to law enforcement. We also believe in the need for service providers to deny data requests based on human rights concerns. The necessity to adopt our suggested changes to Article 24 and reinforce Article 5 cannot be overstated. These modifications would clear ambiguities and ensure universally recognized human rights are enforced both within the draft convention and globally. Regrettably, the Budapest Convention lacks a provision similar to Article 5 of the zero draft, which applies to all chapters of the draft UN convention. This is concerning, given that most democratic states have usually been proposing text based on language that already exists in the Budapest Convention itself, with hardly any modifications.

Definitions Matter (Article 2)

An additional concern arises from the recurring and undefined term "competent authorities," found throughout the zero draft. Notably, the six powers listed above include a similar provision, stating “Each state party shall adopt such legislative and other measures as may be necessary to enable its competent authorities.” However, the term is not yet included in Article 2 of the zero draft, which lists term definitions. In the context of the Budapest Convention, this term has been broadly defined to include a variety of entities, such as prosecutors or the police themselves, who lack impartiality and independence. If the concept of “competent authorities” is copy-pasted verbatim from the Budapest Convention, it risks replicating its shortcomings. This is due to the significant potential for overreach and misuse of powers by authorities who are neither impartial nor independent, raising serious concerns.

Interception of Content Data (Article 30)

This power allows for the real-time interception of content data, granting law enforcement the ability to monitor communications as they occur. Article 30 calls for State Parties to adopt legislative measures empowering “competent authorities” to collect or record content data in real-time for “a range of serious criminal offenses to be determined by domestic law”—with no restriction to cybercrimes. Some states’ notion of “serious offenses” can include speech about politics or religion, or criticism of the government or public officials. It also mandates cooperation from service providers to assist in data collection or recording “within [their] existing technical capabilities.” Furthermore, service providers (which could be any kind of communications intermediary, such as an ISP, social network, or cloud provider) can be ordered to keep the interception confidential.

As we mentioned above, this kind of intrusive surveillance power should require prior approval by a judge, set time limits for interception, and provide targets an effective remedy if they are wrongfully spied on. Our new joint submission with Privacy International for the upcoming sixth negotiation session asks for some of these safeguards to be made explicit and for these powers to be limited to the particular cybercrimes in Articles 6-16 defined by the draft convention.

When the draft convention talks about “content,” it refers to “the substance or purport” of the communication, “such as text, voice messages, audio recordings, video recordings, and other types of information.” We understand that “other types of information” might include things like images or files attached to an email. We’d like to see more clarity, for example, on the treatment of web browsing, to make sure the pages someone visits on a website, like Wikipedia articles, are considered content. This information is potentially sensitive as it can reveal details about a person’s interests and even beliefs. We’ve argued for years that the distinction between content and non-content is obsolete for evaluating the intrusiveness of surveillance and the sensitivity of private information. Quite a lot of privacy questions hinge on this distinction, which often can no longer bear the weight that’s placed on it. (See “Changing Technology and Definitions” in the Necessary and Proportionate Principles for one example.) We understand that numerous laws have used these concepts, often following the Budapest Convention, so moving beyond them remains challenging, but we continue fighting for stronger protections for sensitive information of all types.

One way to safeguard this sensitive data in the draft convention is by strengthening Article 24 to expressly apply principles and safeguards uniformly to every kind of surveillance power. But if that’s not possible, we should still provide strong protections like those described above.

Real-time Collection of Traffic Data is Also Highly Intrusive (Article 29)

The collection of traffic data in real time provides an understanding of communication patterns and connections between different entities. This is also a highly intrusive power. Given the sensitive nature of real-time traffic data, which could encompass individuals' locations, relationships, browsing habits, and communication patterns, the adoption of such measures must be handled with utmost care. 

Based on the definition under Article 2, traffic data could be argued to include location information. Traffic data includes data used to trace and identify the source and destination of a communication, as well as data about the location of the device used for communication, along with the date, time, duration, and type of the communication.

This power can help law enforcement agencies map out networks of cybercriminals. However, the privacy implications are substantial if this power is not strictly limited and the data strongly safeguarded, as it could be used to track innocent individuals' online behavior, including their physical location. Such information reveals whether people were in the same place at the same time, whom they do or don’t communicate with under particular circumstances, and allows the mapping of their personal relationships and specific locations over a period of time.

Under this Article, state parties must pass domestic laws that authorize their competent authorities to collect or record traffic data in real-time, or compel service providers to do this for them where the providers are able. We’re calling for these powers to be available only for the cybercrimes defined under Articles 6-16 of the draft convention, or at most for serious crimes. In any case, the safeguards we propose under Article 24 should apply to this power, including the need to obtain prior judicial authorization. 

Search and Seizure of Stored Data (Article 28)

This provision allows authorities to search and seize data stored on a computer system, including personal devices. Under this search power, authorities can look through someone's computer or other digital devices, find the data they need, and seize or secure it.

Article 28.4 requires Parties to put in place laws or other measures enabling their competent authorities to order anyone with knowledge about how a particular computer or device works to provide information necessary to search that computer or device. This could involve understanding the device itself, its network, any security measures protecting its data, or other aspects of its operation. This is similar to language already in the Budapest Convention that technologists have been concerned about for years because it might be read to compel an unwilling engineer to help break a security system. In the worst case, it might be interpreted to include disproportionate orders that can lead to forcing persons to disclose a vulnerability to the government that hasn’t been fixed. It could also imply forcing people to disclose encryption keys such as signing keys on the basis that these are “the necessary information to enable” some form of surveillance.

PI and EFF strongly recommend Article 28.4 be removed in its entirety. If it stays in, the drafters should include material in the explanatory memorandum that accompanies the draft Convention to clarify limits on compelling technologists to reveal confidential information or do work on behalf of law enforcement. Once again, it would also be appropriate to have clear legal standards about how law enforcement can be authorized to seize and look through people’s private devices. Somewhat akin to language in the Budapest Convention, Article 28.3(d) also allows law enforcement to obtain a warrant to delete data from a device.

Production Order (Article 27)

This power comes in two parts. Article 27(a) involves compelling someone to turn over stored data that already exists, typically stored data such as users’ messages, emails, cloud data, or even online backups of people’s devices. Article 27(b) involves compelling service providers to turn over subscriber information, such as the identity or contact information of a particular user.

Subscriber information is often treated as less sensitive and is less stringently protected than other kinds of data, but it’s the most common means by which law enforcement identifies a person associated with some online activity. In some states, governments have sought indiscriminate access to this data (and hence to the ability to identify people) without individualized suspicion. In other cases, it can be turned over voluntarily with no legal process. The ability to put names and addresses to online information is immediately connected with the power to intimidate and repress dissent, and causes people to doubt that they can do anything at all online without having their name readily turned over to the government. These scenarios underscore the need for the draft convention to require legal standards for all kinds of surveillance, including disclosure of subscriber identities.

Courts in many places have recognized the importance of online anonymity and the harm to online speech that results from making it too easy for people’s identities to be exposed. We should ensure that the draft convention doesn’t compel a low standard for identifying online speakers, or preclude courts from adopting a higher standard in the future.


As we approach the final negotiation session of the United Nations Cybercrime Convention, it's important to underscore the need to strengthen the human rights safeguards under Articles 5 and 24 to ensure that power is met with accountability, rather than tilting the scale in favor of weak safeguards that do not sufficiently curb overreach.

The draft convention negotiations represent an understated global battle for our digital rights and freedoms. The conclusion of these talks will deeply influence the digital lives of billions of people worldwide. Hence, States must strive to guarantee that the resulting accord doesn't encroach upon our human rights, but rather fortifies and augments them. Digital rights are human rights, and our digital future hinges on the decisions and actions of negotiators. 

Katitza Rodriguez

The White House Acknowledges the Pressure on Section 702, But Much More Reform is Needed

1 month 3 weeks ago

We've seen months of continued public confirmation that Americans’ privacy is being violated by surveillance under Section 702, and widespread criticism from civil society, activists, surveillance-skeptical bipartisan congressional committees, and even the overly timid Privacy and Civil Liberties Oversight Board. Now, a separate group of White House-appointed experts has flinched at renewing the law without reforms. It’s good to see even the White House signal that improvements are needed, but that signal is immediately undermined by the tininess of the proposed changes.

The White House might suddenly be willing to acknowledge that people in the U.S. are sick of having their digital communications harvested and accessible to domestic law enforcement without a warrant, but the review group’s proposed reforms are just a cheap political consolation prize that will do very little to restore the fundamental right to privacy that has been denied to people on U.S. soil who email or call friends or family abroad. 

For example, the 42-page report recommends that the FBI no longer be allowed to search 702 databases when investigating crimes unrelated to national security. In its current iteration, the FBI is permitted to sift through international communications in hopes of finding evidence of a wide range of crimes in the U.S.-based side of digital conversations. By our rough, most generous calculation, that recommendation would eliminate only about 0.01% of these so-called FBI backdoor searches (roughly 16 of 119,000 backdoor searches, according to the latest intelligence community transparency report). We deserve more than these tiny baby steps.

Still, it’s a start, and should be a call to action for all of us to keep up the pressure. Despite pushback against the authority, the White House had signaled earlier this summer that it was going to strongly defend Section 702, including all of its more controversial domestic uses.

After a bipartisan panel of Congressional committee members signaled that they would be willing to let Section 702 sunset entirely, the White House appears to be backing away from its earlier commitment to unchanged mass surveillance and proposing the smallest basic reforms imaginable. The report also recommends more training for FBI agents (as if that will change the FBI’s habit of breaking the law), a bit more transparency, attempts to create a “culture of compliance” within the FBI, and various other bureaucratic checks and balances that historically have meant very little to the operations of mass surveillance in the U.S.

This is not a time to let up. If this tiny step signals anything to us, it is that the White House and the intelligence community are getting nervous about the volume of our protests. Come December, we will not be satisfied by perfunctory reforms or research grants to study the harms of aimless surveillance programs. The White House flinched. That means we need to get louder. 

Correction: An earlier version of this post may have led some readers to believe the Privacy and Civil Liberties Oversight Board issued this report. They did not; a separate group of experts issued it.

Matthew Guariglia

Government Needs Both the Ability to Talk to Social Media Platforms and Clear Limits, EFF Argues in Brief to Appellate Court

1 month 3 weeks ago
EFF Filed Its Missouri v. Biden Amicus Brief to the U.S. Court of Appeals for the Fifth Circuit

SAN FRANCISCO—Government input into social media platforms’ decisions about user content raises serious First Amendment concerns and the government must be held accountable for violations, but not all such communications are improper, Electronic Frontier Foundation (EFF) argued in an appellate brief filed today. 

“Government co-option of the content moderation systems of social media companies is a serious threat to freedom of speech,” the brief notes, although “there are clearly times when it is permissible, appropriate, and even good public policy for government agencies and officials to non-coercively communicate with social media companies about the user speech they publish on their sites.” 

EFF filed the amicus brief to the U.S. Court of Appeals for the Fifth Circuit in Missouri v. Biden, a lawsuit brought by Louisiana, Missouri, and several individuals alleging that federal government agencies and officials illegally pushed social media platforms to censor content about COVID safety measures and vaccines, elections, and Hunter Biden’s laptop, among other issues.   

Judge Terry A. Doughty of the U.S. District Court for the Western District of Louisiana sided with the plaintiffs, issuing a broad preliminary injunction July 4. The appellate court has stayed the injunction temporarily.  

EFF filed its brief on behalf of neither party, but rather to provide the appellate court with useful information about the competing interests involved and the context of social media platform content moderation in which they must be weighed. 

Even the biggest, best-resourced social media companies struggle with content moderation, often frustrating users. In search of fairness and consistency in their decisions, social media companies need to draw on outside resources and expertise. This “networked governance” can include trusted flagger programs, trust and safety councils, or external stakeholder engagement teams, as well as as-needed consultations with individual and organizational experts including government agencies. 

Such government input does raise unique and worrisome First Amendment issues, but it can’t be forbidden entirely, the brief argues. 

“The distinction between proper and improper speech is often obscure, leaving ample gray area for courts reviewing such cases to grapple in. But grapple in it they must,” the brief says. “The district court did not adequately distinguish between improper and proper communications in either its analysis or preliminary injunction. The preliminary injunction is internally inconsistent with exceptions that seem to swallow many of its prohibitions. It does not provide adequate guidance to either the government or to anyone else seeking to hold the government to its proscriptions. This Court must independently review the record and make the searching distinctions that the district court did not.” 

For EFF’s brief

For more on Missouri v. Biden: 

Karen Gullo

Deja Vu: The FBI Proves Again It Can’t be Trusted with Section 702

1 month 3 weeks ago

We all deserve privacy in our communications, and part of that is trusting that the government will only access them within the limits of the law. But at this point, it’s crystal clear that the FBI doesn’t believe that either our rights or the limitations that Congress has placed upon the bureau matter when it comes to the vast amount of information about us collected under FISA Section 702.

How many times will the FBI get caught with their hand in the cookie jar of our constitutionally protected private communications without losing these invasive and unconstitutional powers?

The latest exhibit is yet another newly declassified opinion of the Foreign Intelligence Surveillance Court (FISC). This opinion reiterates what we already know: the Federal Bureau of Investigation simply cannot be trusted to conduct foreign intelligence queries on U.S. persons. Regardless of the rules, or consistent FISC disapprovals, the FBI continues to act in a way that shows no regard for privacy and civil liberties.

According to the declassified FISC ruling, despite the paper reforms the FBI touted after the last time it was caught violating U.S. law, the Bureau conducted four queries for the communications of a state senator and a U.S. senator. And it did so without even meeting its own already-inadequate standards for these kinds of searches.

Specifically, this disclosure concerns Section 702 of the 2008 Foreign Intelligence Surveillance Amendments Act, which authorizes the collection of overseas communications that can be queried by intelligence agencies in national security investigations under the oversight of the FISC. The FBI has access to the collected information, but only for limited purposes—purposes which it routinely and grossly oversteps.

Apart from the FBI’s apparent failure to even abide by its own rules, the bigger problem with this arrangement—even under the law—is that we live in a globalized world where U.S. persons regularly communicate with people in other countries. This creates a massive pool of digital communications in which one side of the conversation is an American on U.S. soil. The FBI, investigating crimes in the U.S., has spent the better part of 15 years sifting through these communications without even a warrant. So the fact that they cannot even abide by their own rules, much less the ones set by Congress, is a big deal.

But now we have a chance to close this unconstitutional loophole and block the FBI—or any other government agency—from searching any of our communications without a warrant. Section 702 is set to expire in December 2023. Sadly, both the FBI and the Biden Administration have signaled that they are all in when it comes to keeping open the FBI’s warrantless backdoor searches of 702 data. They like their hands fully in the cookie jar and at this point are likely confident that, even when they get caught, the FISC won’t take any serious steps to stop them.

But they won’t get that renewal without a fight. After several hearings in the House Judiciary Committee, it is clear that there is bipartisan support for the idea that Section 702 must drastically change or else be terminated entirely (known in DC as “sunsetting”). Even the Privacy and Civil Liberties Oversight Board (PCLOB), which has been unwilling to seriously take on 702 violations, suggested before Congress that some bare minimum of changes should be made to the surveillance programs in order to protect the privacy rights of Americans.

While we think it’s time for 702 to end entirely, and for any future programs to start from scratch in protecting the privacy of digital communications, EFF will continue to fight to make sure that any bill that renews Section 702 closes the government’s warrantless access to U.S. communications, minimizes the amount of data collected, and increases transparency. Anything less would signal a continued indifference, or contempt, toward our right to privacy.

This recent disclosure proves, in a Groundhog Day-like fashion, that the FBI is not going to suddenly become good at self-control when it comes to access to our data. If the privacy of our communications—including communications with people abroad—is going to actually matter, Section 702 must be irrevocably changed or jettisoned entirely.

Matthew Guariglia

Maryland Supreme Court: Police Can’t Search Digital Data When Users Revoke Consent

1 month 3 weeks ago

This post was co-authored by EFF legal intern Virginia Kennedy

Under the Fourth Amendment, police can search your home, your computer, and other private spaces without a warrant or even probable cause if you freely and voluntarily consent to the search. But even when someone consents to a search, they should be able to change their mind, for example if a lawyer gives them better advice. As a recent case from the Maryland Supreme Court demonstrates, however, searches of digital data stored on electronic devices raise unique questions about consent. If you consent to a search of your computer and police make a copy of the data on the computer, can they still examine that copy if you withdraw that consent? In State v. McDonnell, the Maryland Supreme Court sensibly answered no.

In June 2019, police officers visited Mr. McDonnell’s home and requested to search his home, computer, and phone as part of their investigation into the distribution of child pornography. Mr. McDonnell originally declined the search, but later signed a consent form allowing the agents to search his home and seize his phone and computer. The form included a clause stating that “I understand that I may withdraw my consent at any time.” After Mr. McDonnell’s electronics had been seized and their contents copied, but before the contents had been examined, Mr. McDonnell’s lawyer sent an email withdrawing consent to “the seizure of [Mr. McDonnell’s] laptop, or examination of its contents.” But agents searched the contents of the computer anyway. McDonnell moved to suppress the evidence that came from the search of his computer after he had revoked his consent.

EFF and the National Association of Criminal Defense Lawyers filed an amicus brief in the Maryland Supreme Court arguing that law enforcement’s warrantless examination of the copy violated the Fourth Amendment. Specifically, we argued that regardless of where it is stored, people have a heightened privacy interest in their digital data, and that the consent exception to the Fourth Amendment’s warrant requirement should reflect that heightened interest. Thanks to breakneck advancements in storage technology, people are storing more and more sensitive information on their phones and computers, and law enforcement can now access huge swaths of private information with a few clicks to aid in their investigations. And, of course, police often do so with very little judicial oversight. Ultimately, there is no difference between “computer data” and a “copy of computer data” to that data’s owner. Therefore, it is incorrect to claim that a person lacks a reasonable expectation of privacy in a copy of computer data after they have revoked their consent.

The Maryland Supreme Court unanimously agreed, holding that because Mr. McDonnell withdrew consent before the government examined the data, he did not lose his reasonable expectation of privacy in the data and that the government’s search violated the Fourth Amendment. Notably, the court found that Mr. McDonnell had a “privacy interest in the data itself,” even though he had legally lost a “possessory interest” in the copy by consenting to the copying. This holding closely follows the Maryland court’s ruling last year that although someone can lose ownership of a physical device by abandoning it, they do not necessarily abandon privacy in the device’s contents. The state argued that Mr. McDonnell retained no expectation of privacy for any copies that the government made with consent, analogizing copying digital data to photocopying a piece of paper. Thankfully, the Court disagreed, stating that, “[d]ata stored on electronic devices is both qualitatively and quantitatively different from physical analogues.” A better analogy, the court wrote, would be the “interruption of a consented-to search of a home by withdrawal of consent—police would have to promptly leave the home and seek a warrant, or other authorization, in order to further search.”

While the language of the form Mr. McDonnell signed in this case was not clear enough to grant the government permanent authorization to search the copy it made, the court declined to answer whether more unambiguous language could strip an individual of their ability to withdraw consent after a copy has been made. That’s unfortunate. To the extent police should ever be allowed to ask for consent to search, that consent must never be taken for granted or obtained through coercion. A consent form that deprived the signer of the right to change their mind the moment they signed would call into question the voluntariness of the consent itself. Although the court left this important question open, McDonnell is a welcome decision in a time of rampant data collection and little oversight over who can access it.

Andrew Crocker

The U.K. Government Is Very Close To Eroding Encryption Worldwide 

1 month 4 weeks ago

The U.K. Parliament is pushing ahead with a sprawling internet regulation bill that will, among other things, undermine the privacy of people around the world. The Online Safety Bill, now at the final stage before passage in the House of Lords, gives the British government the ability to force backdoors into messaging services, which will destroy end-to-end encryption. No amendments have been accepted that would mitigate the bill’s most dangerous elements. 


TELL the U.K. Parliament: Don't Break Encryption

If it passes, the Online Safety Bill will be a huge step backwards for global privacy, and for democracy itself. Requiring government-approved software in people’s messaging services is an awful precedent. If the Online Safety Bill becomes British law, the damage it causes won’t stop at the borders of the U.K. 

The sprawling bill, which originated in a white paper on “online harms” that’s now more than four years old, would be the most wide-ranging internet regulation ever passed. At EFF, we’ve been speaking clearly about its disastrous effects for more than a year now. 

It would require content filtering, as well as age checks to access erotic content. The bill also requires detailed reports about online activity to be sent to the government. Here, we’re discussing just one fatally flawed aspect of OSB—how it will break encryption. 

An Obvious Threat To Human Rights

It’s a basic human right to have a private conversation. To have those rights realized in the digital world, the best technology we have is end-to-end encryption. And it’s utterly incompatible with the government-approved message-scanning technology required in the Online Safety Bill. 

This is because of something that EFF has been saying for years—there is no backdoor to encryption that only gets used by the “good guys.” Undermining encryption, whether by banning it, pressuring companies away from it, or requiring client-side scanning, will be a boon to bad actors and authoritarian states.

The U.K. government wants to grant itself the right to scan every message online for content related to child abuse or terrorism—and says it will still, somehow, magically, protect people’s privacy. That’s simply impossible. U.K. civil society groups have condemned the bill, as have technical experts and human rights groups around the world.

The companies that provide encrypted messaging—such as WhatsApp, Signal, and the UK-based Element—have also explained the bill’s danger. In an open letter published in April, they explained that OSB “could break end-to-end encryption, opening the door to routine, general and indiscriminate surveillance of personal messages of friends, family members, employees, executives, journalists, human rights activists and even politicians themselves.” Apple joined this group in June, stating publicly that the bill threatens encryption and “could put U.K. citizens at greater risk.” 

U.K. Government Says: Nerd Harder

In response to this outpouring of resistance, the U.K. government’s response has been to wave its hands and deny reality. In a response letter to the House of Lords seen by EFF, the U.K.’s Minister for Culture, Media and Sport simply re-hashes an imaginary world in which messages can be scanned while user privacy is maintained. “We have seen companies develop such solutions for platforms with end-to-end encryption before,” the letter states, a reference to client-side scanning. “Ofcom should be able to require” the use of such technologies, and where “off-the-shelf solutions” are not available, “it is right that the Government has led the way in exploring these technologies.” 

The letter refers to the Safety Tech Challenge Fund, a program in which the U.K. gave small grants to companies to develop software that would allegedly protect user privacy while scanning files. But of course, they couldn’t square the circle. The grant winners’ descriptions of their own prototypes clearly describe different forms of client-side scanning, in which user files are scoped out with AI before they’re allowed to be sent in an encrypted channel. 

The Minister completes his response on encryption by writing: 

We expect the industry to use its extensive expertise and resources to innovate and build robust solutions for individual platforms/services that ensure both privacy and child safety by preventing child abuse content from being freely shared on public and private channels.

This is just repeating a fallacy that we’ve heard for years: that if tech companies can’t create a backdoor that magically defends users, they must simply “nerd harder.” 

British Lawmakers Still Can And Should Protect Our Privacy

U.K. lawmakers still have a chance to stop their nation from taking this shameful leap towards mass surveillance. End-to-end encryption was not fully considered and voted on during either the committee or report stage in the House of Lords. The Lords can still add a simple amendment that would protect private messaging and specify that end-to-end encryption won’t be weakened or removed.

Earlier this month, EFF joined U.K. civil society groups and sent a briefing explaining our position to the House of Lords. The briefing explains the encryption-related problems with the current bill, and proposes the adoption of an amendment that will protect end-to-end encryption. If such an amendment is not adopted, those who pay the price will be “human rights defenders and journalists who rely on private messaging to do their jobs in hostile environments; and … those who depend on privacy to be able to express themselves freely, like LGBTQ+ people.” 

It’s a remarkable failure that the House of Lords has not even taken up a serious debate over protecting encryption and privacy, despite ample time to review every section of the bill. 


TELL the U.K. Parliament: PROTECT Encryption—And our privacy

Finally, Parliament should reject this bill because universal scanning and surveillance is abhorrent to their own constituents. It is not what the British people want. A recent survey of U.K. citizens showed that 83% wanted the highest level of security and privacy available on messaging apps like Signal, WhatsApp, and Element. 

Documents related to the U.K. Online Safety Bill: 

Joe Mullin

Rights Groups Urge EU’s Thierry Breton: No Internet Shutdowns for Hateful Content

1 month 4 weeks ago

EFF and 66 human rights and free speech advocacy groups across the globe today called on EU Internal Commissioner Thierry Breton to clarify that the Digital Services Act (DSA)—new regulations aimed at reining in Big Tech companies that control the lion’s share of online speech worldwide—does not allow internet shutdowns to be used as a weapon to punish platforms for not removing “hateful content.”

Arbitrary blocking of online platforms for not following procedural safeguards to take down hate speech violates human rights under international law, the groups said in a letter to Breton asking him to clarify comments he made in a July 10 interview.  Platforms will be required to remove hateful content “immediately” or they will face “immediate sanctions” and be banned from operating “on our territory,” Breton said. The DSA imposes new legal requirements on TikTok, Instagram, and other very large social media platforms effective July 25.

In his comments on a radio station about recent riots in France, Breton, a former French minister, brought up the potential of restricting social media platforms under the DSA amidst existing civil turmoil in the nation.

Arbitrary blocking of online platforms and other forms of internet shutdowns are never a proportionate measure and impose disastrous consequences for people’s safety and worsen the spread of misinformation, EFF and its international partners said in the letter.

With non-EU countries embracing DSA-like regulations, Breton’s comments, without the requested clarification, threaten to reinforce the weaponization of internet shutdowns around the world, and give cover to governments using arbitrary blocking to shroud violence and serious human rights abuse.

In the letter, civil society groups articulated the importance of a human-rights friendly implementation of the DSA. However, a recent French draft law on the regulation of the digital space requires browser-based website blocking, which is an unprecedented government censorship tool.

The letter is here

Karen Gullo

Electronic Frontier Foundation to Present Annual EFF Awards to Alexandra Asanovna Elbakyan, Library Freedom Project, and Signal Foundation

1 month 4 weeks ago
The 2023 EFF Awards will be presented in a live ceremony on Thursday, Sept. 14 in San Francisco.

SAN FRANCISCO—The Electronic Frontier Foundation (EFF) is honored to announce that Alexandra Asanovna Elbakyan, Library Freedom Project, and Signal Foundation will receive the 2023 EFF Awards for their vital work in helping to ensure that technology supports freedom, justice, and innovation for all people.  

The EFF Awards recognize specific and substantial technical, social, economic, or cultural contributions in diverse fields including journalism, art, digital access, legislation, tech development, and law. 

Hosted by renowned science fiction author, activist, journalist, and EFF Special Advisor Cory Doctorow, the EFF Awards ceremony will start at 6:30 pm PT on Thursday, Sept. 14, 2023 at the Regency Lodge, 1290 Sutter St. in San Francisco. Guests can register for the event; the ceremony will be recorded and video will be made available at a later date. 

For the past 30 years, the EFF Awards—previously known as the Pioneer Awards—have recognized and honored key leaders in the fight for freedom and innovation online. Started when the internet was new, the Awards now reflect the fact that the online world has become both a necessity in modern life and a continually evolving set of tools for communication, organizing, creativity, and increasing human potential. 

“The free flow of information and knowledge, as well as the privacy of our communications, are important pillars of an internet that advances freedom, justice, and innovation for all,” EFF Executive Director Cindy Cohn said. “This year’s EFF Award winners are tireless champions for these values and are helping build a world in which everyone can learn and speak freely and securely. They are an inspiration to us, as well as to people around the globe. We are honored to give them our thanks and some small part of the recognition they deserve.” 

Alexandra Asanovna Elbakyan — EFF Award for Access to Scientific Knowledge 

Kazakhstani computer programmer Alexandra Asanovna Elbakyan founded Sci-Hub in 2011 to provide free and unrestricted access to all scientific knowledge. Launched as a tool for providing quick access to articles from scientific journals, Sci-Hub has grown into a database of more than 88.3 million research articles and books, freely accessible for anyone to read and download; much of this knowledge otherwise would be hidden behind paywalls. Sci-Hub is used by millions of students, researchers, medical professionals, journalists, inventors, and curious people all over the world, many of whom provide feedback saying they are grateful for this access to knowledge. Some medical professionals have said Sci-Hub helps save human lives; some students have said they wouldn't be able to complete their education without Sci-Hub's help. Through Sci-Hub, Elbakyan has strived to shatter academic publishing’s monopoly-like mechanisms, in which publishers charge high prices even though the authors of articles in academic journals receive no payment. She has been targeted by many lawsuits and government actions, and Sci-Hub is blocked in some countries, yet she still stands tall for the idea that restricting access to information and knowledge violates human rights. 

Library Freedom Project — EFF Award for Information Democracy 

Library Freedom Project is radically rethinking the library professional organization by creating a network of values-driven librarian-activists taking action together to build information democracy. LFP offers trainings, resources, and community building for librarians on issues of privacy, surveillance, intellectual freedom, labor rights, power, technology, and more—helping create safer, more private spaces for library patrons to feed their minds and express themselves. Their work is informed by a social justice, feminist, anti-racist approach, and they believe in the combined power of long-term collective organizing and short-term, immediate harm reduction. 

Signal Foundation — EFF Award for Communications Privacy 

Since 2013, with the release of the unified app and the game-changing Signal Protocol, Signal has set the bar for private digital communications. With its flagship product, Signal Messenger, Signal provides real communications privacy, offering easy-to-use technology that refuses the surveillance business model on which the tech industry is built. To ensure that the public doesn't have to take Signal's word for it, Signal publishes its code and documentation openly, and licenses its core privacy technology to allow others to add privacy to their own products. Signal is also a 501(c)(3) nonprofit, ensuring that investors and market pressure never provide an incentive to weaken privacy in the name of money and growth. This allows Signal to stand firm against growing international legislative pressure to weaken online privacy, making it clear that end-to-end encryption either works for everyone or is broken for everyone—there is no half measure. 

To register for this event:

For past honorees: 

Josh Richman

FBI Seizure of Mastodon Server Data is a Wakeup Call to Fediverse Users and Hosts to Protect their Users

1 month 4 weeks ago

We’re in an exciting time for users who want to take back control from major platforms like Twitter and Facebook. However, this new environment comes with challenges and risks for user privacy, so we need to get it right and make sure networks like the Fediverse and Bluesky are mindful of past lessons.

In May, the Kolektiva Mastodon server was compromised when one of the server’s admins had their home raided by the FBI on unrelated charges. All of their electronics, including a backup of the instance database, were seized.

It’s a chillingly familiar story which should serve as a reminder for the hosts, users, and developers of decentralized platforms: if you care about privacy, you have to do the work to protect it. We have a chance to do better from the start in the fediverse, so let’s take it.

A Fediverse Wake-up Call

A story where “all their electronics were seized” echoes many digital rights stories. EFF’s founding case over 30 years ago, Steve Jackson Games v. Secret Service, was in part a story about the overbroad seizures of equipment in the offices of Steve Jackson Games in Texas, based upon unfounded claims about illegal behavior in a 1990s version of a chat room. That seizure nearly drove the small games company out of business. It also spurred the newly-formed EFF into action. We won the case, but law enforcement's blunderbuss approach continues through today.

This overbroad “seize it all” approach from law enforcement must change. EFF has long argued that seizing equipment like servers should only be done when it is relevant to an investigation. Any seized digital items that are not directly related to the search should be quickly returned, and copies of information should be deleted as soon as police know that it is unrelated—as they also should for nondigital items that they seize. EFF will continue to advocate for this in the courts and in Congress, and all of us should continue to demand it. 

Law enforcement must do better, even when they have a warrant (as they did here). But we can’t reasonably expect law enforcement to do the right thing every time, and we still have work to do to shift the law more firmly in the right direction. So this story should also be a wake-up call for the thousands of hosts in the growing decentralized web: you have to have your users’ backs too.

Why Protecting the Fediverse Matters

Protecting user privacy is a vital priority for the Fediverse. Many fediverse instances, such as Kolektiva, are focused on serving marginalized communities who are disproportionately targeted by law enforcement. Many were built to serve as a safe haven for those who too often find themselves tracked and watched by the police. Yet this raid put the thousands of users this instance served into a terrible situation. According to Kolektiva, the seized database, now in the FBI’s possession, includes personal information such as email addresses, hashed passwords, and IP addresses from three days prior to the date the backup was made. It also includes posts, direct messages, and interactions involving a user on the server. Because of the nature of the fediverse, this also implicates user messages and posts from other instances. 

To make matters worse, it appears that the admin targeted in the raid was in the middle of maintenance work which left would-be-encrypted material on the server available in unencrypted form at the time of seizure.  

Most users are unaware that, in general, once the government lawfully collects information, various legal doctrines let it use that information to investigate and prosecute crimes that have nothing to do with the original purpose of the seizure. The truth is, once the government has the information, it often uses it, and the law all too often supports this. Defendants in those prosecutions could challenge the use of this data outside the scope of the original warrant, but that’s often cold comfort.

What is a decentralized server host to do?  

EFF’s “Who Has Your Back” recommendations for protecting your users when the government comes knocking aren’t just for large centralized platforms. Hosts of decentralized networks must include possibilities like government seizure in their threat model and be ready to respond in ways that stand with their users.

First of all, basic security practices that apply to any server exposed to the internet also apply to Mastodon. Use firewalls and limit user access to the server as well as the database. If you must keep access logs, keep them only for a reasonable amount of time and review them periodically to make sure you’re only collecting what you need. This is true more broadly: to the extent possible, limit the data your server collects and stores, and only store data for as long as it is necessary. Also stay informed about possible security threats in the Mastodon code, and update your server when new versions are released.
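To make these practices concrete, here is a minimal, illustrative hardening sketch. It assumes a Debian-style host running Mastodon behind nginx with ufw and logrotate installed; the specific ports, paths, and seven-day retention window are assumptions to adapt to your own setup, not Mastodon requirements.

```shell
# Firewall: default-deny inbound, allow only SSH and HTTPS.
ufw default deny incoming
ufw default allow outgoing
ufw allow 22/tcp    # SSH (consider restricting to a management IP)
ufw allow 443/tcp   # HTTPS for the web frontend
ufw enable

# Keep the PostgreSQL database reachable from localhost only,
# in postgresql.conf:
#   listen_addresses = 'localhost'

# Retain nginx access logs for a short, fixed window, then discard them.
cat > /etc/logrotate.d/mastodon-nginx <<'EOF'
/var/log/nginx/access.log {
    daily
    rotate 7      # keep at most 7 days of access logs
    compress
    missingok
    notifempty
}
EOF
```

The design point is data minimization: a log that was never kept, like a port that was never opened, is one less thing a seizure can expose.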

Second, make sure that you’ve adopted policies and practices to protect your users, including clear and regular transparency reports about law enforcement attempts to access user information and policies about what you will do if the cops show up – things like requiring a warrant for content, and fighting gag orders. Critically, that should include a promise to notify your users as soon as possible about any law enforcement action where law enforcement gained access to their information and communications. EFF’s Who Has Your Back pages go into detail about these and other key protections. EFF also prepared a legal primer for fediverse hosts to consider.

In Kolektiva’s case, the hosts were fairly slow to give notice. The raid occurred in mid-May, and the notice didn’t come until June 30, about six weeks later. That’s quite a long delay, even if it took Kolektiva a while to realize the full impact of the raid. As a host of other people’s communications, it is vital to give notice as soon as you are able: you generally have no way of knowing how much risk this information poses to your users, and must assume the worst. Prompt notice is vital so that users can take any necessary steps to protect themselves.

What can users do?

For users joining the fediverse, you should evaluate the about page for a given server to see what precautions (if any) it outlines. Once you’ve joined, you can take advantage of the smaller scale of community on the platform and raise these issues directly with admins and other users on your instance. Insist that the obligations from Who Has Your Back, including to notify you and to resist law enforcement demands where possible, be included in the instance information and terms of service. Making these commitments binding in the terms of service is not only a good idea, it can help the host fight back against overbroad law enforcement requests and can support later motions by defendants to exclude the evidence.

Another benefit of the fediverse, unlike the major lock-in platforms, is that if you don’t like their answer, you can easily find and move to a new instance. However, since most servers in this new decentralized social web are hosted by enthusiasts, users should approach these networks mindful of privacy and security concerns. This means not using these services for sensitive communications, being aware of the risks of social network mapping, and taking some additional precautions when necessary like using a VPN or Tor, and a temporary email address.

What can developers do?

While it would not have protected all of the data seized by the FBI in this case, end-to-end encryption of direct messages is something that has been regrettably absent from Mastodon for years, and would at least have protected the most private content likely to have been on the Kolektiva server. There have been some proposals to enable this functionality, and developers should prioritize finding a solution. 
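While Mastodon lacks the feature, the pattern such proposals build on is well established. The sketch below is purely illustrative, not Mastodon code or any actual proposal: it pairs an X25519 key agreement with an AEAD cipher (via the Python cryptography library) so that the server only ever handles public keys and ciphertext, never the message itself.

```python
# Illustrative end-to-end encryption sketch, NOT Mastodon's API:
# X25519 key agreement + ChaCha20-Poly1305, the pattern fediverse
# E2EE DM proposals generally build on.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def derive_key(private_key, peer_public_key):
    """Derive a shared symmetric key from an X25519 exchange."""
    shared = private_key.exchange(peer_public_key)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"dm-demo").derive(shared)

# Each user generates a keypair; only public keys ever touch the server.
alice_priv = X25519PrivateKey.generate()
bob_priv = X25519PrivateKey.generate()

# Both sides derive the same key; the server never sees it.
alice_key = derive_key(alice_priv, bob_priv.public_key())
bob_key = derive_key(bob_priv, alice_priv.public_key())
assert alice_key == bob_key

# Alice encrypts; the server only ever stores this ciphertext.
nonce = os.urandom(12)
ciphertext = ChaCha20Poly1305(alice_key).encrypt(nonce, b"meet at noon", None)

# Bob decrypts with his independently derived key.
plaintext = ChaCha20Poly1305(bob_key).decrypt(nonce, ciphertext, None)
assert plaintext == b"meet at noon"
```

A real design would also need key verification, forward secrecy, and multi-device support, which is part of why this feature has been hard to land; but even this basic shape would have kept DM contents out of a seized database.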

The Kolektiva raid should be an important alarm bell for everyone hosting decentralized content. Police raids and seizures can be difficult to predict, even when you’ve taken a lot of precautions. EFF’s Who Has Your Back recommendations and, more generally, our Legal Primer for User Generated Content and the Fediverse should be required reading. And making sure you have your users’ backs should be a founding principle for every server in the fediverse. 

Update: This post's title has been updated to clarify that the FBI seized Mastodon server data, not control over the server itself.

Cindy Cohn