It's Summer Security Week at EFF

1 month 3 weeks ago

Let me be frank: it’s always security week at EFF! But this week is extra special. EFF will take flight to the Las Vegas hacker summer camp conferences—BSidesLV, Black Hat USA and DEF CON 31—to rally behind computer security researchers and tinkerers. Whether you're on the ground in Vegas or online, I hope you'll support the digital rights movement as an EFF member this week.

EFF’s activists, technologists, and lawyers fight so you can use technology on your own terms. Wrongheaded tech policies endanger your rights to communicate privately and securely, and to express yourself creatively on the web. But you’ll help protect these rights for everyone when you become an EFF supporter.

JOIN EFF

Support internet freedom with a Gold Level membership and (for a short time!) you can choose EFF’s DEF CON 31 t-shirt design. Eagle eyes will discover the path to an online puzzle there. And our team would like to thank @aaronsteimle, @0xCryptoK, @detective_6, and jabberw0nky of the Muppet Liberation Front for collaborating on this member t-shirt and contributing a stellar puzzle design. Donate today or even set up a small automatic monthly donation.

I call artist Hannah Diaz’s design The Unkindness, a term for a gathering of ravens. But it also refers to the cruelty of corporations and governments that impose surveillance and censorship on people. We can flock together and fight back through technology, policy, law, and yes, kindness. Help us build a better web when you join EFF.

Learn more about EFF's work at the Las Vegas summer security conferences! Check out this post for more information.

Aaron Jue

EFF at Las Vegas Hacker Summer Camp

1 month 3 weeks ago

The EFF team is pleased to return to the Las Vegas hacker summer camp conferences—BSidesLV, Black Hat USA and DEF CON 31—to rally behind computer security researchers and tinkerers. This entire week of events is a meaningful opportunity to reconnect with the infosec and hacker community, who are crucial to building privacy and safety on the web for everyone. Below we've rounded up all of EFF's scheduled talks and activities at the conferences.

As in past years, EFF staff attorneys will be present to help support speakers and attendees. If you have legal concerns regarding an upcoming talk or sensitive infosec research that you are conducting at any time, please email info@eff.org. Outline the basic issues and we will do our best to connect you with the resources you need. Read more about EFF's work defending, offering legal counsel, and publicly advocating for technologists on our Coders' Rights Project page.

EFF staff members will be on hand in the expo areas of all three conferences. You may encounter us in the wild elsewhere, but we hope you stop by the EFF tables to talk to us about the latest in online rights, get on our action alert list, or donate to become an EFF member. We'll also have our limited-edition DEF CON 31 shirts available! These shirts have a puzzle incorporated into the design. Try your hand at cracking it!

EFF Staff Presentations

Ask the EFF Panel at Black Hat USA
EFF's Associate Director of Community Organizing Rory Mir; Staff Attorney Hannah Zhao; and Staff Attorney Mario Trujillo.
Thursday, August 10 from 11:20 AM - 12:00 PM | Mandalay Bay Convention Center, South Pacific I, Level 0 (North Hall)

UN Conventional Cybercrime: How a Bad Anti-Hacking Treaty is Becoming Law
EFF Policy Director for Global Privacy - Katitza Rodriguez & EFF Senior Staff Technologist - Bill Budington
Thursday, August 10 @ 11 AM | DEF CON 31 - War Stories @ Forum

The Hackers, The Lawyers, And The Defense Fund
EFF Surveillance Litigation Director - Hannah Zhao
Friday, August 11 @ 9:30 AM | DEF CON 31 - Track 3 Forum

Tracking the World's Dumbest Cyber-Mercenaries
EFF Senior Staff Technologist - Cooper Quintin
Friday, August 11 @ 2 PM | DEF CON 31 - War Stories @ Harrah's

Ask the EFF @ DEF CON 31
EFF Legal Director - Corynne McSherry; EFF Staff Attorney - Hannah Zhao; EFF Staff Attorney - Mario Trujillo; EFF Associate Director of Community Organizing - Rory Mir; EFF Senior Staff Technologist - Cooper Quintin
Friday, August 11 @ 8 PM | DEF CON 31 - Track 3 Forum

Abortion Access in the Age of Digital Surveillance
EFF's Daly Barnett, Corynne McSherry & India McKinney (with Kate Bertash, Digital Defense Fund)
Saturday, August 12 @ 4:30 PM | DEF CON 31 - Track 3 Forum

EFF Benefit Poker Tournament at DEF CON 31

We’re going all in on internet freedom. Take a break from hacking the Gibson to face off with your competition at the tables—and benefit EFF! Your buy-in is paired with a donation to support EFF’s mission to protect online privacy and free expression for all. Play for glory. Play for money. Play for the future of the web. You can even come cheer on our players at the Horseshoe Poker Room on Friday at 12:00.

Tech Trivia Contest at DEF CON 31

Join us for some tech trivia on Saturday, August 12 at 6 PM! EFF's team of technology experts have crafted challenging trivia about the fascinating, obscure, and trivial aspects of digital security, online rights, and internet culture. Competing teams will plumb the unfathomable depths of their knowledge, but only the champion hive mind will claim the First Place Tech Trivia Cup and EFF swag pack. The second and third place teams will also win great EFF gear.

Aaron Jue

Your Computer Should Say What You Tell It To Say

1 month 3 weeks ago
WEI? I’m a frayed knot

Two pieces of string walk into a bar.

The first piece of string asks for a drink.

The bartender says, “Get lost. We don’t serve pieces of string.”

The second string ties a knot in his middle and messes up his ends. Then he orders a drink. 

The bartender says, “Hey, you aren’t a piece of string, are you?”

The piece of string says, “Not me! I'm a frayed knot.”

Google is adding code to Chrome that will send tamper-proof information about your operating system and other software, and share it with websites. Google says this will reduce ad fraud. In practice, it reduces your control over your own computer, and is likely to mean that some websites will block access for everyone who's not using an "approved" operating system and browser. It also raises the barrier to entry for new browsers, something Google employees acknowledged in an unofficial explainer for the new feature, Web Environment Integrity (WEI).

If you’re scratching your head at this point, we don’t blame you. This is pretty abstract! We’ll unpack it a little below - and then we’ll explain why this is a bad idea that Google should not pursue.

But first…

Some background

When your web browser connects to a web server, it automatically sends a description of your device and browser, something like, "This session is coming from a Google Pixel 4, using Chrome version 116.0.5845.61." The server on the other end of that connection can request even more detailed information, like a list of which fonts are installed on your device, how big its screen is, and more. 
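
To make this concrete, here is a minimal sketch (in TypeScript, using standard Web APIs) of the kind of signals any page's script can read from a visiting browser without asking permission. The function name and the particular fields gathered are our illustration, not any specific site's tracking code.

```typescript
// Illustrative only: a few of the signals a web page can read about
// your browser and device using standard, permissionless Web APIs.
function describeVisitor(): Record<string, string | number> {
  return {
    userAgent: navigator.userAgent,       // e.g. "Mozilla/5.0 (Linux; Android 10; Pixel 4) ... Chrome/116..."
    language: navigator.language,         // preferred language
    screenWidth: screen.width,            // display size
    screenHeight: screen.height,
    colorDepth: screen.colorDepth,
    timezone: Intl.DateTimeFormat().resolvedOptions().timeZone,
    cpuCores: navigator.hardwareConcurrency,
  };
}

// A site can log this locally or send it back to its server on every visit.
console.log(describeVisitor());
```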

This can be good. The web server that receives this information can tailor its offerings to you. That server can make sure it only sends you file formats your device understands, at a resolution that makes sense for your screen, laid out in a way that works well for you.

But there are also downsides to this. Many sites use "browser fingerprinting" - a kind of tracking that relies on your browser's unique combination of characteristics - to nonconsensually identify users who reject cookies and other forms of surveillance. Some sites make inferences about you from your browser and device in order to determine whether they can charge you more, or serve you bad or deceptive offers.

Thankfully, the information your browser sends to websites about itself and your device is strictly voluntary. Your browser can send accurate information about you, but it doesn't have to. There are lots of plug-ins, privacy tools and esoteric preferences that you can use to send information of your choosing to sites that you don't trust. 

These tools don't just let you refuse to describe your computer to nosy servers across the internet. After all, a service that has so little regard for you that it would use your configuration data to inflict harms on you might very well refuse to serve you at all, as a means of coercing you into giving up the details of your device and software.

Instead, privacy and anti-tracking tools send plausible, wrong information about your device. That way, services can't discriminate against you for choosing your own integrity over their business models.
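
As a toy illustration of that idea, here is a short sketch (runnable with Node 18+ or Deno, where fetch is built in) showing that the self-description a client sends is ultimately voluntary: a script can claim to be whatever browser and operating system it likes. The URL is a placeholder.

```typescript
// Illustrative only: the User-Agent string is whatever the client
// chooses to send. Here a plain script claims to be Chrome on Windows.
const response = await fetch("https://example.com/", {
  headers: {
    "User-Agent":
      "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 " +
      "(KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36",
  },
});

// The server answered based on what we told it, not on what we "really" are.
console.log(response.status);
```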

That's where remote attestation comes in.

Secure computing and remote attestation

Most modern computers, tablets and phones ship from the factory with some kind of "secure computing" capability. 

Secure computing is designed to be a system for monitoring your computer that you can't modify or reconfigure. Originally, secure computing relied on a second processor - a "Trusted Platform Module" or TPM - to monitor the parts of your computer you directly interact with. These days, many devices use a "secure enclave" - a hardened subsystem that is carefully designed to ensure that it can only be changed with the manufacturer’s permission.

These security systems have lots of uses. When you start your device, they can watch the boot-up process and check each phase of it to ensure that you're running the manufacturer's unaltered code, and not a version that's been poisoned by malicious software. That's great if you want to run the manufacturer's code, but the same process can be used to stop you from intentionally running different code, say, a free/open source operating system, or a version of the manufacturer's software that has been altered to disable undesirable features (like surveillance) and/or enable desirable ones (like the ability to install software from outside the manufacturer's app store).

Beyond controlling the code that runs on your device, these security systems can also provide information about your hardware and software to other people over the internet. Secure enclaves and TPMs ship with cryptographic "signing keys." They can gather information about your computer - its operating system version, extensions, software, and low-level code like bootloaders - and cryptographically sign all that information in an "attestation."
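
To show what that signing step amounts to, here is a minimal sketch using Node's built-in crypto module. The Ed25519 key pair generated in software stands in for a key that, on a real device, would live inside the TPM or secure enclave, and the attested fields are made up for illustration.

```typescript
import { generateKeyPairSync, sign, verify } from "node:crypto";

// On a real device the private key never leaves the secure hardware;
// we generate one here only so the sketch runs end to end.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// An illustrative description of the device's software state.
const attestedFacts = Buffer.from(
  JSON.stringify({ os: "ExampleOS 17.2", bootloader: "locked", browser: "ExampleBrowser 116" })
);

// Device side: the secure hardware signs the description.
const signature = sign(null, attestedFacts, privateKey);

// Remote side: anyone holding the vendor's public key can confirm the
// description is exactly what the hardware signed; any tampering fails.
console.log(verify(null, attestedFacts, publicKey, signature)); // true
```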

These attestations change the balance of power when it comes to networked communications. When a remote server wants to know what kind of device you're running and how it's configured, that server no longer has to take your word for it. It can require an attestation.

Assuming you haven't figured out how to bypass the security built into your device's secure enclave or TPM, that attestation is a highly reliable indicator of how your gadget is set up. 

What's more, altering your device's TPM or secure enclave is a legally fraught business. Laws like Section 1201 of the Digital Millennium Copyright Act as well as patents and copyrights create serious civil and criminal jeopardy for technologists who investigate these technologies. That danger gets substantially worse when the technologist publishes findings about how to disable or bypass these secure features. And if a technologist dares to distribute tools to effect that bypass, they need to reckon with serious criminal and civil legal risks, including multi-year prison sentences.

WEI? No way!

This is where the Google proposal comes in. WEI is a technical proposal to let servers request remote attestations from devices, with those requests being relayed to the device's secure enclave or TPM, which will respond with a cryptographically signed, highly reliable description of your device. You can choose not to send this to the remote server, but you lose the ability to send an altered or randomized description of your device and its software if you think that's best for you.
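
For a rough picture of what that exchange could look like in a page's code, here is a hypothetical sketch: getEnvironmentIntegrity, the token format, and the header name below are placeholders based on the public explainer's description, not a shipping or finalized API.

```typescript
// Hypothetical sketch only: names and shapes are placeholders.
async function fetchWithAttestation(url: string): Promise<Response> {
  const nav = navigator as any; // the feature is not in standard typings

  if (typeof nav.getEnvironmentIntegrity !== "function") {
    // A browser that can't (or won't) attest has nothing to send --
    // which is exactly why sites could start turning such browsers away.
    return fetch(url);
  }

  // The browser asks the platform's attester to sign a statement about
  // the environment, bound to this request.
  const token: string = await nav.getEnvironmentIntegrity("visit:" + url);

  // The site forwards the signed token to its server for verification.
  return fetch(url, { headers: { "X-Environment-Integrity": token } });
}
```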

In their proposal, the Google engineers claim several benefits of such a scheme. But, despite their valiant attempts to cast these benefits as accruing to device owners, these are really designed to benefit the owners of commercial services; the benefit to users comes from the assumption that commercial operators will use the additional profits from remote attestation to make their services better for their users.

For example, the authors say that remote attestations will allow site operators to distinguish between real internet users who are manually operating a browser, and bots who are autopiloting their way through the service. This is said to be a way of reducing ad-fraud, which will increase revenues to publishers, who may plow those additional profits into producing better content. 

They also claim that attestation can foil “machine-in-the-middle” attacks, where a user is presented with a fake website into which they enter their login information, including one-time passwords generated by a two-factor authentication (2FA) system, which the attacker automatically enters into the real service’s login screen. 

They claim that gamers could use remote attestation to make sure the other gamers they’re playing against are running unmodified versions of the game, and not running cheats that give them an advantage over their competitors.

They claim that giving website operators the power to detect and block browser automation tools will let them block fraud, such as posting fake reviews or mass-creating bot accounts.

There’s arguably some truth to all of these claims. That’s not unusual: in matters of security, there are often ways in which indiscriminate invasions of privacy and compromises of individual autonomy would blunt some real problems.

Putting handcuffs on every shopper who enters a store would doubtless reduce shoplifting, and stores with less shoplifting might lower their prices, benefitting all of their customers. But ultimately, shoplifting is the store’s problem, not the shoppers’, and it’s not fair for the store to make everyone else bear the cost of resolving its difficulties.

WEI helps websites block disfavored browsers

One section of Google’s document acknowledges that websites will use WEI to lock out browsers and operating systems that they dislike, or that fail to implement WEI to the website’s satisfaction. Google tentatively suggests (“we are evaluating”) a workaround: even once Chrome implements the new technology, it would refuse to send WEI information from a “small percentage” of computers that would otherwise send it. In theory, any website that refuses visits from non-WEI browsers would wind up also blocking this “small percentage” of Chrome users, who would complain so vociferously that the website would have to roll back their decision and allow everyone in, WEI or not.

The problem is, there are lots of websites that would really, really like the power to dictate what browser and operating system people can use. Think “this website works best in Internet Explorer 6.0 on Windows XP.” Many websites will consider that “small percentage” of users an acceptable price to pay, or simply instruct users to reset their browser data until a roll of the dice enables WEI for that site.

Also, Google has a conflict of interest in choosing the “small percentage.” Setting it very small would benefit Google’s ad fraud department by authenticating more ad clicks, allowing Google to sell those ads at a higher price. Setting it high makes it harder for websites to implement exclusionary behavior, but doesn’t directly benefit Google at all. It only makes it easier to build competing browsers. So even if Google chooses to implement this workaround, their incentives are to configure it as too small to protect the open web.

You are the boss of your computer

Your computer belongs to you. You are the boss of it. It should do what you tell it to. 

We live in a wildly imperfect world. Laws that prevent you from reverse-engineering and reconfiguring your computer are bad enough, but when you combine that with a monopolized internet of “five giant websites filled with screenshots of text from the other four,” things can get really bad.

A handful of companies have established chokepoints between buyers and sellers, performers and audiences, workers and employers, as well as families and communities. When those companies refuse to deal with you, your digital life grinds to a halt. 

The web is the last major open platform left on the internet - the last platform where anyone can make a browser or a website and participate, without having to ask permission or meet someone else’s specifications.

You are the boss of your computer. If a website sets up a virtual checkpoint that says, “only approved technology beyond this point,” you should have the right to tell it, “I’m no piece of string, I’m a frayed knot.” That is, you should be able to tell a site what it wants to hear, even if the site would refuse to serve you if it knew the truth about you. 

To their credit, the proposers of WEI state that they would like for WEI to be used solely for benign purposes. They explicitly decry the use of WEI to block browsers, or to exclude users for wanting to keep their private info private.

But computer scientists don't get to decide how a technology gets used. Adding attestation to the web carries the completely foreseeable risk that companies will use it to attack users' right to configure their devices to suit their needs, even when that conflicts with tech companies' commercial priorities. 

WEI shouldn't be made. If it's made, it shouldn't be used. 

So what?

So what should we do about WEI and other remote attestation technologies?

Let's start with what we shouldn't do. We shouldn't ban remote attestation. Code is speech and everyone should be free to study, understand, and produce remote attestation tools.

These tools might have a place within distributed systems - for example, voting machine vendors might use remote attestation to verify the configuration of their devices in the field. Or at-risk human rights workers might send remote attestations to trusted technologists to help determine whether their devices have been compromised by state-sponsored malware.

But these tools should not be added to the web. Remote attestations have no place on open platforms. You are the boss of your computer, and you should have the final say over what it tells other people about your computer and its software. 

Companies' problems are not as important as their users' autonomy

We sympathize with businesses whose revenues might be impacted by ad-fraud, game companies that struggle with cheaters, and services that struggle with bots. But addressing these problems can’t come before the right of technology users to choose how their computers work, or what those computers tell others about them, because the right to control one’s own devices is a building block of all civil rights in the digital world.

An open web delivers more benefit than harm. Letting giant, monopolistic corporations overrule our choices about which technology we want to use, and how we want to use it, is a recipe for solving those companies' problems, but not their users'.

Cory Doctorow

EFF to 9th Circuit: App Stores Shouldn’t Be Liable for Processing Payments for User Content

1 month 4 weeks ago

EFF filed a brief this week in the U.S. Court of Appeals for the Ninth Circuit arguing that app stores should not be liable for user speech just because they recommend that speech or process payments for those users. Those stores should be protected by Section 230, a law that protects Americans’ freedom of expression online by protecting the intermediaries we all rely on. Absent Section 230 immunity in these contexts, the platforms would be forced to censor user speech to mitigate their legal exposure.

The case is actually three consolidated cases where the plaintiffs sued the leading app stores: Google Play, Apple’s App Store, and Facebook. The plaintiffs’ claims relate to the app stores offering “social casino” apps, where users can buy virtual gambling chips with real money but can’t ever cash out any chips they win. The plaintiffs argue that these apps amount to illegal gambling. The app stores not only offer and promote these social casino apps, they also facilitate the in-app purchases (also called microtransactions) for the virtual gambling chips.

At issue on appeal is the part of Section 230 that provides immunity to internet websites, apps, and services when they are sued for user-generated content. Section 230 is the foundational internet law that has, since 1996, created legal breathing room for online intermediaries (and their users) to host or share third-party content. Online speech is largely mediated by these private companies, allowing all of us to speak online, access information, and engage in commerce, without requiring that we have loads of money or technical skills.

In this case, the plaintiffs are arguing that Section 230 should not apply to the app stores for promoting or recommending the social casino apps, nor for facilitating the in-app purchases for virtual gambling chips. Both the apps and the chips are types of third-party content.

The district court rightly ruled that Section 230 does apply to the app stores’ promoting or recommending the social casino apps within their platforms. In our brief we urged the Ninth Circuit to affirm this holding. This case gives the court another bite at the apple to hold that Section 230 applies to online intermediaries that recommend content created by others, after its opinion in Gonzalez v. Google was vacated by the U.S. Supreme Court earlier this year.

If platforms lost Section 230 immunity for recommending user-generated content, they would cease to offer recommendations, harming users’ ability to find the content they want. Or the platforms would censor any third-party content that might pose a legal risk should the content be swept up in the platforms’ recommendation algorithms, harming user speech in the process—both the ability to share and to access content.  

However, the district court erred when it ruled that the app stores do not have Section 230 immunity for facilitating the purchase of virtual gambling chips within the social casino apps. In our brief we urged the Ninth Circuit to reverse the district court on this issue. We argued that a rule that exposes online intermediaries to potential liability for facilitating a financial transaction related to unlawful user-generated content would have huge implications beyond the app stores.

The plaintiffs argue that the app stores could preserve their Section 230 immunity by simply refusing to process in-app purchases. But banning the easiest purchasing method would degrade the user experience in online stores—and not just in the three large stores sued here. The plaintiffs’ position fails to recognize that other platforms don’t have such a choice. Etsy, for example, facilitates purchases of virtual art, while Patreon enables artists to be supported by “membership” fees. If platforms like these were to lose Section 230 immunity and thereby be exposed to potential liability simply because they process payments for user-generated content, their entire business models would be threatened, ultimately harming users’ ability to share and access online speech.

Sophia Cope

The Impending Privacy Threat of Self-Driving Cars

1 month 4 weeks ago

In just a few years, fully self-driving cars have gone from science fiction to an everyday reality for people in San Francisco, with other places in the U.S. also testing the new technology. With innovations often come unintended consequences—one of which is the massive collection of data required for an autonomous vehicle to function. The sheer amount of visual and other information collected by a fleet of cars traveling down public streets raises the threat that people’s movements could be tracked, aggregated, and retained by companies, law enforcement, or bad actors—including vendor employees. This mass of information poses a potential threat to the civil liberties and privacy of pedestrians, commuters, and anyone else who relies on public roads and walkways in cities.

People’s aggregate movements–their commutes, visits to friends or loved ones, and trips to the doctor’s office or an attorney–could be compiled over time by a fleet of driverless vehicles that pedestrians don’t suspect can be deputized by police.

Autonomous vehicles rely on more than a dozen cameras and sensors situated around the car in order to detect other vehicles, traffic signs, obstructions, and pedestrians. Because the most visible autonomous cars are operated by private companies, there is a lot that we do not know about the storage, security, and access regarding this footage. It is unclear, for instance, how detailed the footage is of pedestrians on the street or whether that footage is run through any image recognition. What capabilities do these vehicles have to collect audio? How long is this footage stored for? Who has access to it? What protections are in place to keep the footage private and safe? How do these companies comply with local and state-wide privacy laws like the California Consumer Privacy Act?

Another major line of questioning is the relationship between autonomous vehicles and law enforcement agencies. Bloomberg found at least nine warrants served to a self-driving car company in both San Francisco and Maricopa County, Arizona. According to a training document received by Vice in 2022, the San Francisco Police Department wrote: “Autonomous vehicles are recording their surroundings continuously and have potential to help with investigative leads...investigations has already done this several times.”

It is imperative that, as more self-driving cars occupy our city streets and collect vast quantities of data, we have strong privacy laws that address both the personal data that the cars process and police access to that data. We also need a better understanding of how much footage police request access to and when, if ever, companies that operate autonomous vehicles will push back against overly broad requests. It is also essential that we learn whether police are given historic footage or real-time live access to peer through the cameras on the vehicles.

In the coming years, cities and regulators will have to make difficult choices about how autonomous vehicles can operate safely. It is imperative that, in addition to pedestrian and driver safety, regulators consider the civil liberties implications of the tremendous amount of data and footage collected by these self-driving cars.

Matthew Guariglia

Celebrating Ten Years of Encrypting the Web with Let’s Encrypt

2 months ago

Ten years ago, the web was a very different place. Most websites didn’t use HTTPS to protect your data. As a result, snoops could read emails or even take over accounts by stealing cookies. But a group of determined researchers and technologists from EFF and the University of Michigan were dreaming of a better world: one where every web page you visited was protected from spying and interference. Meanwhile, another group at Mozilla was working on the same dream. Those dreams led to the creation of Let’s Encrypt and tools like EFF’s Certbot, which simplify protecting websites and make browsing the web safer for everyone. 

There was one big obstacle: to deploy HTTPS and protect a website, the people running that website needed to buy and install a certificate from a certificate authority. Price was a big barrier to getting more websites on HTTPS, but the complexity of installing certificates was an even bigger one.  

In 2013, the Internet Security Research Group (ISRG) was founded, which would soon become the home of Let’s Encrypt, a certificate authority founded to help encrypt the Web. Let’s Encrypt was radical in that it provided certificates for free to anyone with a website. Let’s Encrypt also introduced a way to automate away the risk and drudgery of manually issuing and installing certificates. With the new ACME protocol, anyone with a website could run software (like EFF’s Certbot) that combined the steps of getting a certificate and correctly installing it. 
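
As a small illustration of what that automation talks to, here is a sketch (Node 18+ or Deno) that fetches Let's Encrypt's public ACME directory, the first step of the RFC 8555 protocol that Certbot performs for you; everything after that (accounts, orders, domain validation, installation, renewal) is what the tooling automates.

```typescript
// First step of the ACME protocol: fetch the CA's directory to learn
// the endpoints for creating accounts and ordering certificates.
const DIRECTORY_URL = "https://acme-v02.api.letsencrypt.org/directory";

const directory = await fetch(DIRECTORY_URL).then((r) => r.json());

// RFC 8555 directory fields such as newAccount and newOrder tell an
// ACME client (like Certbot) where to continue the conversation.
console.log(directory.newAccount, directory.newOrder);
```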

In the time since, Let’s Encrypt and Certbot have been a huge success, with over 250 million active certificates protecting hundreds of millions of websites.

This is a huge benefit to everyone’s online security and privacy. When you visit a website that uses HTTPS, your data is protected by encryption in transit, so nobody but you and the website operator gets to see it. That also prevents snoops from making a copy of your login cookies and taking over accounts.

The most important measure of Let’s Encrypt’s and Certbot’s successes is how much of people’s daily web browsing uses HTTPS. According to Firefox data, 78% of pages loaded use HTTPS. That’s tremendously improved from 27% in 2013 when Let’s Encrypt was founded. There’s still a lot of work to be done to get to 100%. We hope you’ll join EFF and Let’s Encrypt in celebrating the successes of ten years encrypting the web, and the anticipation of future growth and safety online. 

Jacob Hoffman-Andrews

Facebook Apparently Will Ask for Consent Before Showing Behavioral Ads to Some Users

2 months ago

For many years now, EFF has argued that pervasive online behavioral surveillance, which powers the exploitative data broker industry as well as some of the largest online tech companies, should be banned. Companies should voluntarily make these changes to benefit their users, but EFF also strongly supports legislation that would require businesses to get consumers’ opt-in consent before collecting and processing this private behavioral data. Such legislation has stalled in the U.S. However, years after the General Data Protection Regulation (GDPR) was made law in the European Union, Meta (Facebook)—one of the largest collectors of behavioral data in the world—has announced what could be a major step toward ending its behavioral advertising without opt-in consent.

This is, of course, not Meta’s choice. They sidestepped the GDPR using Terms of Service trickery for as long as they could. Later, Meta bypassed legal constraints by arguing that the personalization of content and advertising was necessary to provide an agreed-upon service to users. When this became untenable, they circumvented the consent requirement by asserting that the company had a legitimate interest in showing targeted ads.

But as they write in today’s announcement, recent court interpretations of the GDPR, as well as the incoming Digital Markets Act (DMA), have forced their hand. 

Implementation Matters 

Meta’s announcement states that in the EU, the European Economic Area, and Switzerland, the company will “change the legal basis that [it uses] to process certain data for behavioural advertising… from ‘Legitimate Interests’ to ‘Consent’.” In practice, it’s not clear what this means yet. The company says that “advertisers will still be able to run personalised advertising campaigns to reach potential customers,” and it must consult with the regulators on implementation. Perhaps Meta will say that it can keep doing what it has been doing on grounds of an implausible claim that users have already consented. A more straightforward interpretation of the law and court rulings would require the company to turn OFF behavioral data collection for affected users, and only turn it back on if users have been given clear consent options, and make an informed and voluntary choice to have their data collected. 

While we welcome this shift, the company deserves few accolades. Meta fought these laws. Now that it’s lost that battle, it is only making this change for its European users because it otherwise would likely face significant new fines (on top of a $1.3 billion fine already levied against it for another GDPR violation). 

Opt-in consent to collect, retain, disclose, or use a person’s data is at the core of the GDPR. It’s good to see the law having a (potentially) significant impact, even if it’s been seven years since it passed. Given how long it took for the GDPR to have this impact, lawmakers in the rest of the world must act swiftly to pass their own comprehensive consumer privacy legislation.

Meta says it will need time to discuss these changes with regulators, and it will need three months or longer to let users choose whether to allow the company to use behavioral ads. Until we know how the company plans to ask for that consent, and how it will interpret those answers, we should remain cautious about declaring victory.

Jason Kelley

UN Cybercrime Convention Negotiations Enter Final Phase With Troubling Surveillance Powers Still on the Table

2 months ago

This is Part II in EFF’s ongoing series about the proposed UN Cybercrime Convention. Read Part I for a quick snapshot of the ins and outs of the zero draft; Part III for a deep dive on Chapter V regarding international cooperation (the historical context, the zero draft’s approach, the scope of cooperation, and the protection of personal data); and Part IV, which deals with the criminalization of security research.

As one of the last negotiating sessions to finalize the UN Cybercrime Convention approaches, it’s important to remember that the outcome and implications of the international talks go well beyond the UN meeting rooms in Vienna and New York. Representatives from over 140 countries around the globe with widely divergent law enforcement practices, including Iran, Russia, Saudi Arabia, China, Brazil, Chile, Switzerland, New Zealand, Kenya, Germany, Canada, the U.S., Peru, and Uruguay, have met over the last year to push their positions on what the draft convention should say about cross-border police powers, access to private data, judicial oversight of prosecutorial practices, and other thorny issues.

As we noted in Part I of this post about the zero draft of the draft convention now on the table, the final text will result in the rewriting of criminal and surveillance laws around the world, as Member States work into their legal frameworks the agreed upon requirements, authorizations, and protections. Millions of people, including those often in the crosshairs of governments for defending human rights and advocating for free expression, will be affected. That’s why we and our international allies have been fighting for users to ensure the draft convention includes robust human rights protections.

Going into the sixth negotiating session, which begins August 21 in New York, the outcome of the talks remains uncertain. A variety of issues are still unresolved, and the finalization of the intricate text faces approaching deadlines. The foundational principle of the negotiations—“nothing is agreed until everything is agreed upon”—underscores the complexity and delicate nature of these discussions. Every element of the draft convention is interrelated, and the resolution of one aspect hinges on the consonance of all other areas of the text.

In Part I, we shared our initial takeaways about the zero draft—some bad provisions were dropped, but ambiguous and overly broad spying powers to investigate any crime, potentially including speech-related crimes, remain. In Part II we’ll delve deeper into one of the convention’s most concerning provisions: domestic surveillance powers.

These provisions endow governments with extensive surveillance powers but only offer weak checks and balances to prevent potential law enforcement overreach. States could misuse such powers by misrepresenting protected speech as cybercrime and exploiting the broad scope of these spying powers beyond their initial purpose. While we successfully advocated for the exclusion of many of the most problematic non-cybercrimes from the draft convention's criminalization section, the draft still permits law enforcement to collect and share data for the investigation of any crime, including content-related crimes, rather than limiting these powers to core cybercrimes. The existing human rights obligations under Article 5, although a positive inclusion, are not sufficiently robust. Combined with the draft’s inherent ambiguity, this could lead to the abusive or disproportionate misapplication of these domestic surveillance powers.

Below, we’ll talk about the domestic spying chapter of the draft convention:

Criminal Procedural Measures (Chapter IV, Article 23)

Article 23 of this chapter expands the scope of the surveillance powers chapter in concerning ways. It describes procedures for dealing not just with specified cybercrimes (those in Articles 6-16), but also for the collection of electronic evidence related to any type of crime, regardless of its severity or whether it is connected to a computer system. This expansion means that the domestic spying powers can be used to investigate any crime, from cybercrimes like hacking to traditional non-cybercrime offenses like drug trafficking—even speech crimes in some jurisdictions, such as insulting a monarch—as long as there's digital evidence involved.

Moreover, Article 23 doesn’t clearly stipulate that the powers established should be used only for specific and targeted criminal investigations or proceedings; though the draft convention’s wording doesn’t explicitly compel service providers to indiscriminately retain data for mass surveillance or fishing expeditions, it does not clearly prevent it. 

New Domestic Spying Powers

The draft chapter on criminal procedural measures introduces six domestic spying powers (expedited preservation of stored data, expedited preservation and partial disclosure of traffic data, production order, search and seizure of stored data, real-time collection of traffic data, and interception of content data) with far-reaching implications for human rights. These powers, if misused or applied overly broadly, hold the potential for serious intrusions into peoples’ private lives. Personal data can reveal a person’s detailed private information, such as contacts, browsing history, location, device details, and patterns of behavior. For instance, a series of web searches, visits, and phone calls can reveal someone's medical condition. Access to personal data represents a significant interference with privacy rights and must be handled with necessary safeguards, including prior judicial authorization, transparency, notification, remedies, time limits, and oversight.

This is why States need to ensure robust, detailed safeguards in the draft convention. EFF and more than 400 NGOs have led the way, introducing in 2014 the Necessary and Proportionate Principles, a set of guidelines that serve as a blueprint on how to apply human rights law to communications surveillance. In court briefings and other material, we’ve discussed the sensitivity of communications data and the application of these principles, including metadata and subscriber data. But, in its current form, the draft convention’s human rights safeguards are simply not robust enough to protect against the misuse of personal data. Several critical requirements are noticeably absent.

Human Rights Safeguards (Articles 5 and 24)

The draft convention has two articles on human rights: a general provision, under Article 5, that applies to the whole draft convention, and Article 24, which describes the conditions and safeguards applied to the new domestic surveillance powers. We believe that the current version of Article 24 fails to provide the robust safeguards needed to protect human rights and to curb law enforcement overreach, and should be revised. 

For instance, it should require prior approval by a judge, set time limits, and give targets a remedy if they are wrongfully spied on. It should also require authorities to explain specifically what facts justify intercepting particular people’s communications, and why. And it should require governments to tell the public how often these powers were used. We’ve been fighting for years around the world for mandatory safeguards like these, for example through amicus briefs in court cases, such as our joint amicus with ARTICLE19 and Privacy International, as well as through the Necessary and Proportionate Principles.

Article 24 also only mentions the principle of proportionality, omitting other equally important principles, such as legality and necessity. It leaves it up to States to decide what kind of oversight is “appropriate in view of the nature of” each spying power, potentially allowing States to claim that some powers don’t require significant limitations or oversight. There are no specific minimum safeguards or minimum rights for targets of surveillance, not even for the most intrusive surveillance provisions authorizing real-time collection of data or interception of content. 

Meanwhile, Article 5 says the “implementation” of their obligations under the new draft convention shall be “consistent with [states’] obligations under international human rights law.” While that sounds good, it can also be taken to apply only to human rights treaties that a given state has already ratified, providing a big loophole for states like China, Bhutan, Brunei, Holy See, Saudi Arabia, Oman, Palau, Niue, Myanmar, Malaysia, Kiribati, Singapore, South Sudan, Tonga, United Arab Emirates, and Cuba that have failed to sign or ratify the main human rights rules such as the International Covenant on Civil and Political Rights (ICCPR) that other states have ratified. 

In the face of opposition from some states to the safeguards in Articles 5 and 24, and even proposals to delete Article 5, it's critical to consider what's really at stake. Importantly, these Articles are neither duplicative nor overarching, but apply strictly to the provisions within the convention itself, providing a crucial limit on the scope of these powers. The integration of human rights into the draft convention isn't a surrender of sovereignty but an acknowledgment of states’ shared responsibility to uphold our common human dignity. These rights, universal in nature, set the baseline standards for all to live free from injustice and persecution. It's not just about semantics or legal technicalities: the inclusion of these Articles is pivotal to ensure that the draft UN Cybercrime Convention, throughout all its chapters, embodies states’ commitments to protect and uphold human rights.

In prior sessions, some states argued for the elimination of Article 5 or the consolidation of protections into a single article, but we fervently believe that reinforcing Article 5 is paramount. Most alarmingly, unlike the Consolidated Negotiating Document (CND)—an earlier draft with all states’ positions—Article 24 of the zero draft no longer applies to the international cooperation chapter. This is deeply concerning, as it opens the door to "jurisdictional exploitation," where a state could exploit varying domestic safeguard standards by conducting, in a less protective jurisdiction, surveillance that would be illegal in its own. By bolstering universal safeguard standards as proposed in Articles 5 and 24, such potential abuses could be effectively mitigated.

The current version of the draft convention, lacking robust safeguards, risks enabling discretionary or prejudiced use of surveillance powers. Our new joint submission with Privacy International for the upcoming sixth negotiation session has proposed explicit safeguards (in our updated version of Article 24) wherever new powers are granted to law enforcement. We also believe service providers must be able to deny data requests based on human rights concerns. The necessity to adopt our suggested changes to Article 24 and reinforce Article 5 cannot be overstated. These modifications would clear ambiguities and ensure universally recognized human rights are enforced both within the draft convention and globally. Regrettably, the Budapest Convention lacks a provision similar to Article 5 of the zero draft, which applies to all chapters of the draft UN convention. This is concerning, given that most democratic states have usually proposed text based on language that already exists in the Budapest Convention itself, with hardly any modifications.

Definitions Matter (Article 2)

An additional concern arises from the recurring and undefined term "competent authorities," found throughout the zero draft. Notably, the six powers listed above include a similar provision, stating “Each state party shall adopt such legislative and other measures as may be necessary to enable its competent authorities.” However, the term is not yet included in Article 2 of the zero draft, which lists term definitions. In the context of the Budapest Convention, this term has been broadly defined to include a variety of entities, such as prosecutors or the police themselves, who lack impartiality and independence. If the concept of “competent authorities” is copy-pasted verbatim from the Budapest Convention, it risks replicating its shortcomings. This is due to the significant potential for overreach and misuse of powers by authorities who are neither impartial nor independent, raising serious concerns.

Interception of Content Data (Article 30)

This power allows for the real-time interception of content data, granting law enforcement the ability to monitor communications as they occur. Article 30 calls for State Parties to adopt legislative measures empowering “competent authorities” to collect or record content data in real-time for “a range of serious criminal offenses to be determined by domestic law”—with no restriction to cybercrimes. Some states’ notion of “serious offenses” can include speech about politics or religion, or criticism of the government or public officials. It also mandates cooperation from service providers to assist in data collection or recording “within [their] existing technical capabilities.” Furthermore, service providers (which could be any kind of communications intermediary, such as an ISP, social network, or cloud provider) can be ordered to keep the interception confidential.

As we mentioned above, this kind of intrusive surveillance power should require prior approval by a judge, set time limits for interception, and provide targets an effective remedy if they are wrongfully spied on. Our new joint submission with Privacy International for the upcoming sixth negotiation session asks for some of these safeguards to be made explicit and for these powers to be limited to the particular cybercrimes in Articles 6-16 defined by the draft convention.

When the draft convention talks about “content,” it refers to “the substance or purport” of the communication, “such as text, voice messages, audio recordings, video recordings, and other types of information.” We understand that “other types of information” might include things like images or files attached to an email. We’d like to see more clarity, for example, on the treatment of web browsing, to make sure the pages someone visits on a website, like Wikipedia articles, are considered content. This information is potentially sensitive as it can reveal details about a person’s interests and even beliefs. We’ve argued for years that the distinction between content and non-content is obsolete for evaluating the intrusiveness of surveillance and the sensitivity of private information. Quite a lot of privacy questions hinge on this distinction, which often can no longer bear the weight that’s placed on it. (See “Changing Technology and Definitions” in the Necessary and Proportionate Principles for one example.) We understand that numerous laws have used these concepts, often following the Budapest Convention, so moving beyond them remains challenging, but we continue fighting for stronger protections for sensitive information of all types.

One way to safeguard this sensitive data in the draft convention is by strengthening Article 24 to expressly apply principles and safeguards uniformly to every kind of surveillance power. But if that’s not possible, we should still provide strong protections like those described above.

Real-time Collection of Traffic Data is Also Highly Intrusive (Article 29)

The collection of traffic data in real time provides an understanding of communication patterns and connections between different entities. This is also a highly intrusive power. Given the sensitive nature of real-time traffic data, which could encompass individuals' locations, relationships, browsing habits, and communication patterns, the adoption of such measures must be handled with utmost care. 

Based on the definition of traffic data under Article 2, it could be argued to include location information. Traffic data includes data used to trace and identify the source and destination of a communication, as well as data about the location of the device used for communication, along with the date, time, duration, and type of the communication.

This power can help law enforcement agencies map out networks of cybercriminals. However, the privacy implications are substantial if this power is not strictly limited and the data not highly safeguarded, as it could be used to track innocent individuals' online behavior, including their physical location. Such information reveals whether people were in the same place at the same time, and whom they do or don’t communicate with under particular circumstances, and allows the mapping out of their personal relationships and specific locations over a period of time.

Under this Article, state parties must pass domestic laws that authorize their competent authorities to collect or record traffic data in real-time, or compel service providers to do this for them where the providers are able. We’re calling for these powers to be available only for the cybercrimes defined under Articles 6-16 of the draft convention, or at most for serious crimes. In any case, the safeguards we propose under Article 24 should apply to this power, including the need to obtain prior judicial authorization. 

Search and Seizure of Stored Data (Article 28)

This provision allows authorities to search and seize data stored on a computer system, including personal devices. Under this search power, authorities can look through someone's computer or other digital devices, find the data they need, and seize or secure it.

Article 28.4 requires Parties to put in place laws or other measures enabling their competent authorities to order anyone with knowledge about how a particular computer or device works to provide information necessary to search that computer or device. This could involve understanding the device itself, its network, any security measures protecting its data, or other aspects of its operation. This is similar to language already in the Budapest Convention that technologists have been concerned about for years because it might be read to compel an unwilling engineer to help break a security system. In the worst case, it might be interpreted to allow disproportionate orders, such as forcing a person to disclose an unfixed vulnerability to the government. It could also imply forcing people to disclose encryption keys such as signing keys on the basis that these are “the necessary information to enable” some form of surveillance.

PI and EFF strongly recommend Article 28.4 be removed in its entirety. If it stays in, the drafters should include material in the explanatory memorandum that accompanies the draft Convention to clarify limits on compelling technologists to reveal confidential information or do work on behalf of law enforcement. Once again, it would also be appropriate to have clear legal standards about how law enforcement can be authorized to seize and look through people’s private devices. Somewhat akin to language in the Budapest Convention, Article 28.3(d) also allows law enforcement to obtain a warrant to delete data from a device.

Production Order (Article 27)

This power comes in two parts. Article 27(a) involves compelling someone to turn over stored data that already exists, typically stored data such as users’ messages, emails, cloud data, or even online backups of people’s devices. Article 27(b) involves compelling service providers to turn over subscriber information, such as the identity or contact information of a particular user.

Subscriber information is often treated as less sensitive and is less stringently protected than other kinds of data, but it’s the most common means by which law enforcement identifies a person associated with some online activity. In some states, governments have sought indiscriminate access to this data (and hence to the ability to identify people) without individualized suspicion. In other cases, it can be turned over voluntarily with no legal process. The ability to put names and addresses to online information is immediately connected with the power to intimidate and repress dissent, and causes people to doubt that they can do anything at all online without having their name readily turned over to the government. These scenarios underscore the need for the draft convention to require legal standards for all kinds of surveillance, including disclosure of subscriber identities.

Courts in many places have recognized the importance of online anonymity and the harm to online speech that results from making it too easy for people’s identities to be exposed. We should ensure that the draft convention doesn’t compel a low standard for identifying online speakers, or preclude courts from adopting a higher standard in the future.

Conclusion

As we approach the final negotiation session of the United Nations Cybercrime Convention, it's important to underscore the need for strengthening the human rights safeguards under Articles 5 and 24 to ensure power is met with accountability, rather than tilting the scale in favor of weak safeguards that do not sufficiently curb overreach.

The draft convention negotiations represent an understated global battle for our digital rights and freedoms. The conclusion of these talks will deeply influence the digital lives of billions of people worldwide. Hence, States must strive to guarantee that the resulting accord doesn't encroach upon our human rights, but rather fortifies and augments them. Digital rights are human rights, and our digital future hinges on the decisions and actions of negotiators. 

Katitza Rodriguez

The White House Acknowledges the Pressure on Section 702, But Much More Reform is Needed

2 months ago

We've seen months of continued public confirmation that Americans’ privacy is being violated by surveillance under Section 702, and widespread criticism from civil society, activists, surveillance-skeptical bipartisan congressional committees, and even the overly timid Privacy and Civil Liberties Oversight Board. Now, a separate group of White House-appointed experts has flinched on renewing the law without reforms. It’s good to see even the White House signal that improvements are needed, but that signal was immediately undermined by the tininess of the proposed changes.

The White House might suddenly be willing to acknowledge that people in the U.S. are sick of having their digital communications harvested and accessible to domestic law enforcement without a warrant, but the review group’s proposed reforms are just a cheap political consolation prize that will do very little to restore the fundamental right to privacy that has been denied to people on U.S. soil who email or call friends or family abroad. 

For example, the 42-page report recommends that the FBI no longer be allowed to search 702 databases when investigating crimes unrelated to national security. In its current iteration, the FBI is permitted to sift through international communications in hopes of finding evidence of a wide range of crimes in the U.S.-based side of digital conversations. By our rough, most generous calculation, that would eliminate only 0.01% of these so-called FBI backdoor searches (about 16 of 119,000 backdoor searches, according to the latest intelligence community transparency report). We deserve more than these tiny baby steps.

Still, it’s a start, and should be a call to action for all of us to keep up the pressure. Despite pushback against the authority, the White House had signaled earlier this summer that it was going to strongly defend Section 702, including all of its more controversial domestic uses.

After a bipartisan panel of Congressional committee members signaled that they would be willing to let Section 702 sunset entirely, the White House appears to be slinking from its earlier commitment to unchanged mass surveillance in order to propose the smallest basic reforms imaginable. The report also recommends more training for FBI agents (as if that will change the FBI’s habit of breaking the law), a bit more transparency, attempting to create a “culture of compliance” within the FBI, and various other bureaucratic checks and balances that historically have meant very little to the operations of mass surveillance in the U.S.

This is not a time to let up. If this tiny step signals anything to us, it is that the White House and the intelligence community are getting nervous about the volume of our protests. Come December, we will not be satisfied by perfunctory reforms or research grants to study the harms of aimless surveillance programs. The White House flinched. That means we need to get louder. 

Correction: An earlier version of this post may have led some readers to believe the Privacy and Civil Liberties Oversight Board issued this report. It did not; a separate group of experts issued it.

Matthew Guariglia

Government Needs Both the Ability to Talk to Social Media Platforms and Clear Limits, EFF Argues in Brief to Appellate Court

2 months ago
EFF Filed Its Missouri v. Biden Amicus Brief to the U.S. Court of Appeals for the Fifth Circuit

SAN FRANCISCO—Government input into social media platforms’ decisions about user content raises serious First Amendment concerns and the government must be held accountable for violations, but not all such communications are improper, Electronic Frontier Foundation (EFF) argued in an appellate brief filed today. 

“Government co-option of the content moderation systems of social media companies is a serious threat to freedom of speech,” the brief notes, although “there are clearly times when it is permissible, appropriate, and even good public policy for government agencies and officials to non-coercively communicate with social media companies about the user speech they publish on their sites.” 

EFF filed the amicus brief to the U.S. Court of Appeals for the Fifth Circuit in Missouri v. Biden, a lawsuit brought by Louisiana, Missouri, and several individuals alleging that federal government agencies and officials illegally pushed social media platforms to censor content about COVID safety measures and vaccines, elections, and Hunter Biden’s laptop, among other issues.   

Judge Terry A. Doughty of the U.S. District Court for the Western District of Louisiana sided with the plaintiffs, issuing a broad preliminary injunction July 4. The appellate court has stayed the injunction temporarily.  

EFF filed its brief on behalf of neither party, but rather to provide the appellate court with useful information about the competing interests involved and the content moderation context in which those interests must be weighed.

Even the biggest, best-resourced social media companies struggle with content moderation, often frustrating users. In search of fairness and consistency in their decisions, social media companies need to draw on outside resources and expertise. This “networked governance” can include trusted flagger programs, trust and safety councils, or external stakeholder engagement teams, as well as as-needed consultations with individual and organizational experts including government agencies. 

Such government input does raise unique and worrisome First Amendment issues, but it can’t be forbidden entirely, the brief argues. 

“The distinction between proper and improper speech is often obscure, leaving ample gray area for courts reviewing such cases to grapple in. But grapple in it they must,” the brief says. “The district court did not adequately distinguish between improper and proper communications in either its analysis or preliminary injunction. The preliminary injunction is internally inconsistent with exceptions that seem to swallow many of its prohibitions. It does not provide adequate guidance to either the government or to anyone else seeking to hold the government to its proscriptions. This Court must independently review the record and make the searching distinctions that the district court did not.” 

For EFF’s brief: https://www.eff.org/document/missouri-v-biden-amicus-brief

For more on Missouri v. Biden: https://www.courtlistener.com/docket/67563473/state-of-missouri-v-biden/ 

Karen Gullo

Deja Vu: The FBI Proves Again It Can’t be Trusted with Section 702

2 months 1 week ago

We all deserve privacy in our communications, and part of that is trusting that the government will only access them within the limits of the law. But at this point, it’s crystal clear that the FBI doesn’t believe that either our rights or the limitations that Congress has placed upon the bureau matter when it comes to the vast amount of information about us collected under FISA Section 702.

How many times will the FBI get caught with their hand in the cookie jar of our constitutionally protected private communications without losing these invasive and unconstitutional powers?

The latest exhibit is yet another newly declassified opinion of the Foreign Intelligence Surveillance Court (FISC). This opinion reiterates what we already know: the Federal Bureau of Investigation simply cannot be trusted to conduct foreign intelligence queries involving U.S. persons. Regardless of the rules, or consistent FISC disapprovals, the FBI continues to act in a way that shows no regard for privacy and civil liberties.

According to the declassified FISC ruling, despite the paper reforms the FBI touted after the last time it was caught violating U.S. law, the Bureau conducted four queries for the communications of a state senator and a U.S. senator. And it did so without even meeting its own already-inadequate standards for these kinds of searches.

Specifically, this disclosure concerns Section 702 of the 2008 Foreign Intelligence Surveillance Amendments Act, which authorizes the collection of overseas communications that can be queried by intelligence agencies in national security investigations under the oversight of the FISC. The FBI has access to the collected information, but only for limited purposes—purposes which it routinely and grossly oversteps.

Apart from the FBI’s apparent failure to even abide by its own rules, the bigger problem with this arrangement—even under the law—is that we live in a globalized world where U.S. persons regularly communicate with people in other countries. This creates a massive pool of digital communications in which one side of the conversation is an American on U.S. soil. The FBI, investigating crimes in the U.S., has spent the better part of 15 years sifting through these communications without even a warrant. So the fact that they cannot even abide by their own rules, much less the ones set by Congress, is a big deal.

But now we have a chance to close this unconstitutional loophole and block the FBI—or any other government agency—from searching any of our communications without a warrant. Section 702 is set to expire in December 2023. Sadly, both the FBI and the Biden Administration have signaled that they are all in when it comes to trying to keep open the FBI’s warrantless backdoor searches of 702 data. They like their hands fully in the cookie jar and at this point are likely confident that, even when they get caught, the FISC won’t take any serious steps to stop them.

But they won’t get that renewal without a fight. After several hearings in the House Judiciary Committee, it is clear that there is bipartisan support for the idea that Section 702 must drastically change, or else face termination (called sunsetting in DC) entirely. The Privacy and Civil Liberties Oversight Board (PCLOB), which has been unwilling to seriously take on 702 violations, even suggested before Congress that some bare minimum of changes should be made to the surveillance programs in order to protect the privacy rights of Americans.

We think it’s time for 702 to end entirely, and for any future programs to start from scratch in protecting the privacy of digital communications. In the meantime, EFF will continue to fight to make sure that any bill that does renew Section 702 closes the government’s warrantless access to U.S. communications, minimizes the amount of data collected, and increases transparency. Anything less than that would signal continued indifference, or contempt, toward our right to privacy.

This recent disclosure proves, in a Groundhog Day-like fashion, that the FBI is not going to suddenly become good at self-control when it comes to access to our data. If the privacy of our communications—including communications with people abroad—is going to actually matter, Section 702 must be irrevocably changed or jettisoned entirely.

Matthew Guariglia

Maryland Supreme Court: Police Can’t Search Digital Data When Users Revoke Consent

2 months 1 week ago

This post was co-authored by EFF legal intern Virginia Kennedy

Under the Fourth Amendment, police can search your home, your computer, and other private spaces without a warrant or even probable cause if you freely and voluntarily consent to the search. But even when someone consents to a search, they should be able to change their mind, for example after a lawyer gives them better advice. And as a recent case from the Maryland Supreme Court demonstrates, searches of digital data stored on electronic devices raise unique questions about consent. If you consent to a search of your computer and police make a copy of the data on the computer, can they still examine that copy if you withdraw that consent? In State v. McDonnell, the Maryland Supreme Court sensibly answered no.

In June 2019, police officers visited Mr. McDonnell’s home and requested to search his home, computer, and phone as part of their investigation into the distribution of child pornography. Mr. McDonnell originally declined the search, but later signed a consent form allowing the agents to search his home and seize his phone and computer. The form included a clause stating that “I understand that I may withdraw my consent at any time.” After Mr. McDonnell’s electronics had been seized and their contents copied, but before the contents had been examined, Mr. McDonnell’s lawyer sent an email withdrawing consent to “the seizure of [Mr. McDonnell’s] laptop, or examination of its contents.” But agents searched the contents of the computer anyway. McDonnell moved to suppress the evidence that came from the search of his computer after he had revoked his consent.

It is incorrect to claim that a person lacks a reasonable expectation of privacy for the copy of computer data after they have revoked their consent

EFF and the National Association of Criminal Defense Lawyers filed an amicus brief in the Maryland Supreme Court arguing that law enforcement’s warrantless examination of the copy violated the Fourth Amendment. Specifically, we argued that regardless of the location, people have a heightened interest in their digital data and that the consent exception to the Fourth Amendment’s warrant requirement should reflect that heightened privacy interest. Thanks to breakneck technological advancements in storage capabilities, people are storing more and more sensitive information on their phones and computers. Law enforcement can now access huge swaths of private information with a few clicks to aid in their investigations. And, of course, police often do so with very little judicial oversight. Ultimately, there is no difference between “computer data” and a “copy of computer data” to that data’s owner. Therefore, it is incorrect to claim that a person lacks a reasonable expectation of privacy for the copy of computer data after they have revoked their consent.

The Maryland Supreme Court unanimously agreed, holding that because Mr. McDonnell withdrew consent before the government examined the data, he did not lose his reasonable expectation of privacy in the data and that the government’s search violated the Fourth Amendment. Notably, the court found that Mr. McDonnell had a “privacy interest in the data itself,” even though he had legally lost a “possessory interest” in the copy by consenting to the copying. This holding closely follows the Maryland court’s ruling last year that although someone can lose ownership of a physical device by abandoning it, they do not necessarily abandon privacy in the device’s contents. The state argued that Mr. McDonnell retained no expectation of privacy for any copies that the government made with consent, analogizing copying digital data to photocopying a piece of paper. Thankfully, the Court disagreed, stating that, “[d]ata stored on electronic devices is both qualitatively and quantitatively different from physical analogues.” A better analogy, the court wrote, would be the “interruption of a consented-to search of a home by withdrawal of consent—police would have to promptly leave the home and seek a warrant, or other authorization, in order to further search.”

While the language of the form Mr. McDonnell signed in this case was not clear enough to grant the government permanent authorization to search the copy it made, the court declined to answer whether more unambiguous language could strip an individual of their ability to withdraw consent after a copy has been made. That’s unfortunate. To the extent police should ever be allowed to ask for consent to search, that consent must never be taken for granted or obtained through coercion. A consent form that deprived the signer of the right to change their mind the moment they signed would call into question the voluntariness of the consent itself. Although the court left this important question open, McDonnell is a welcome decision in a time of rampant data collection and little oversight over who can access it.

Andrew Crocker

The U.K. Government Is Very Close To Eroding Encryption Worldwide 

2 months 1 week ago

The U.K. Parliament is pushing ahead with a sprawling internet regulation bill that will, among other things, undermine the privacy of people around the world. The Online Safety Bill, now at the final stage before passage in the House of Lords, gives the British government the ability to force backdoors into messaging services, which will destroy end-to-end encryption. No amendments have been accepted that would mitigate the bill’s most dangerous elements. 

TAKE ACTION

TELL the U.K. Parliament: Don't Break Encryption

If it passes, the Online Safety Bill will be a huge step backwards for global privacy, and democracy itself. Requiring government-approved software in people’s messaging services is an awful precedent. If the Online Safety Bill becomes British law, the damage it causes won’t stop at the borders of the U.K.

The sprawling bill, which originated in a white paper on “online harms” that’s now more than four years old, would be the most wide-ranging internet regulation ever passed. At EFF, we’ve been speaking out clearly about its disastrous effects for more than a year now.

It would require content filtering, as well as age checks to access erotic content. The bill also requires detailed reports about online activity to be sent to the government. Here, we’re discussing just one fatally flawed aspect of OSB—how it will break encryption. 

An Obvious Threat To Human Rights

It’s a basic human right to have a private conversation. To have that right realized in the digital world, the best technology we have is end-to-end encryption. And it’s utterly incompatible with the government-approved message-scanning technology required in the Online Safety Bill.

This is because of something that EFF has been saying for years—there is no backdoor to encryption that only gets used by the “good guys.” Undermining encryption, whether by banning it, pressuring companies away from it, or requiring client-side scanning, will be a boon to bad actors and authoritarian states.

The U.K. government wants to grant itself the right to scan every message online for content related to child abuse or terrorism—and says it will still, somehow, magically, protect people’s privacy. That’s simply impossible. U.K. civil society groups have condemned the bill, as have technical experts and human rights groups around the world.

The companies that provide encrypted messaging—such as WhatsApp, Signal, and the U.K.-based Element—have also explained the bill’s danger. In an open letter published in April, they explained that OSB “could break end-to-end encryption, opening the door to routine, general and indiscriminate surveillance of personal messages of friends, family members, employees, executives, journalists, human rights activists and even politicians themselves.” Apple joined this group in June, stating publicly that the bill threatens encryption and “could put U.K. citizens at greater risk.”

U.K. Government Says: Nerd Harder

In response to this outpouring of resistance, the U.K. government has simply waved its hands and denied reality. In a response letter to the House of Lords seen by EFF, the U.K.’s Minister for Culture, Media and Sport rehashes an imaginary world in which messages can be scanned while user privacy is maintained. “We have seen companies develop such solutions for platforms with end-to-end encryption before,” the letter states, a reference to client-side scanning. “Ofcom should be able to require” the use of such technologies, and where “off-the-shelf solutions” are not available, “it is right that the Government has led the way in exploring these technologies.”

The letter refers to the Safety Tech Challenge Fund, a program in which the U.K. gave small grants to companies to develop software that would allegedly protect user privacy while scanning files. But of course, they couldn’t square the circle. The grant winners’ descriptions of their own prototypes clearly describe different forms of client-side scanning, in which user files are scanned by AI before they’re allowed to be sent over an encrypted channel.
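
To make the mechanism concrete, here is a deliberately simplified sketch of what client-side scanning means in practice. It is not any vendor’s actual system; the hash list, the XOR “encryption,” and the function names are all illustrative assumptions. The point is structural: the content check runs on the sender’s device, on the plaintext, before any encryption is applied.

import hashlib

# Hypothetical hash list standing in for a scanning authority's database.
FLAGGED_HASHES = {hashlib.sha256(b"example flagged file").hexdigest()}

def scan(plaintext: bytes) -> bool:
    """Return True if the plaintext matches the flagged-content database."""
    return hashlib.sha256(plaintext).hexdigest() in FLAGGED_HASHES

def send(plaintext: bytes):
    # The scan runs on the device, on the plaintext, before encryption, so the
    # "end-to-end" channel only ever carries content the scanner has already
    # inspected and could block or report.
    if scan(plaintext):
        print("match: message blocked or reported before encryption")
        return None
    # Placeholder "encryption" for this sketch; a real messenger would use an
    # authenticated end-to-end protocol here.
    return bytes(b ^ 0x42 for b in plaintext)

send(b"hello friend")          # passes the scan, then gets encrypted
send(b"example flagged file")  # intercepted before encryption ever happens

Whatever the scanner flags never reaches the encrypted channel at all, which is why no amount of engineering on the encryption side can restore the privacy the scan takes away.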

The Minister completes his response on encryption by writing: 

We expect the industry to use its extensive expertise and resources to innovate and build robust solutions for individual platforms/services that ensure both privacy and child safety by preventing child abuse content from being freely shared on public and private channels.

This is just repeating a fallacy that we’ve heard for years: that if tech companies can’t create a backdoor that magically defends users, they must simply “nerd harder.” 

British Lawmakers Still Can And Should Protect Our Privacy

U.K. lawmakers still have a chance to stop their nation from taking this shameful leap toward mass surveillance. End-to-end encryption was not fully considered and voted on during either committee or report stage in the House of Lords. The Lords can still add a simple amendment that would protect private messaging, and specify that end-to-end encryption won’t be weakened or removed.

Earlier this month, EFF joined U.K. civil society groups and sent a briefing explaining our position to the House of Lords. The briefing explains the encryption-related problems with the current bill, and proposes the adoption of an amendment that will protect end-to-end encryption. If such an amendment is not adopted, those who pay the price will be “human rights defenders and journalists who rely on private messaging to do their jobs in hostile environments; and … those who depend on privacy to be able to express themselves freely, like LGBTQ+ people.” 

It’s a remarkable failure that the House of Lords has not even taken up a serious debate over protecting encryption and privacy, despite ample time to review every section of the bill.

TAKE ACTION

TELL the U.K. Parliament: Protect Encryption—And Our Privacy

Finally, Parliament should reject this bill because universal scanning and surveillance is abhorrent to their own constituents. It is not what the British people want. A recent survey of U.K. citizens showed that 83% wanted the highest level of security and privacy available on messaging apps like Signal, WhatsApp, and Element. 

Joe Mullin

Rights Groups Urge EU’s Thierry Breton: No Internet Shutdowns for Hateful Content

2 months 1 week ago

EFF and 66 human rights and free speech advocacy groups across the globe today called on EU Internal Market Commissioner Thierry Breton to clarify that the Digital Services Act (DSA)—new regulations aimed at reining in Big Tech companies that control the lion’s share of online speech worldwide—does not allow internet shutdowns to be used as a weapon to punish platforms for not removing “hateful content.”

Arbitrary blocking of online platforms for not following procedural safeguards to take down hate speech violates human rights under international law, the groups said in a letter to Breton asking him to clarify comments he made in a July 10 interview.  Platforms will be required to remove hateful content “immediately” or they will face “immediate sanctions” and be banned from operating “on our territory,” Breton said. The DSA imposes new legal requirements on TikTok, Instagram, and other very large social media platforms effective July 25.

In his comments on a radio station about recent riots in France, Breton, a former French minister, brought up the potential of restricting social media platforms under the DSA amidst existing civil turmoil in the nation.

Arbitrary blocking of online platforms and other forms of internet shutdowns are never a proportionate measure; they have disastrous consequences for people’s safety and worsen the spread of misinformation, EFF and its international partners said in the letter.

With non-EU countries embracing DSA-like regulations, Breton’s comments, without the requested clarification, threaten to reinforce the weaponization of internet shutdowns around the world, and give cover to governments using arbitrary blocking to shroud violence and serious human rights abuse.

In the letter, civil society groups articulated the importance of a human-rights friendly implementation of the DSA. However, a recent French draft law on the regulation of the digital space requires browser-based website blocking, which is an unprecedented government censorship tool.

The letter is here

Karen Gullo

Electronic Frontier Foundation to Present Annual EFF Awards to Alexandra Asanovna Elbakyan, Library Freedom Project, and Signal Foundation

2 months 1 week ago
The 2023 EFF Awards will be presented in a live ceremony on Thursday, Sept. 14 in San Francisco.

SAN FRANCISCO—The Electronic Frontier Foundation (EFF) is honored to announce that Alexandra Asanovna Elbakyan, Library Freedom Project, and Signal Foundation will receive the 2023 EFF Awards for their vital work in helping to ensure that technology supports freedom, justice, and innovation for all people.  

The EFF Awards recognize specific and substantial technical, social, economic, or cultural contributions in diverse fields including journalism, art, digital access, legislation, tech development, and law. 

Hosted by renowned science fiction author, activist, journalist, and EFF Special Advisor Cory Doctorow, the EFF Awards ceremony will start at 6:30 pm PT on Thursday, Sept. 14, 2023 at the Regency Lodge, 1290 Sutter St. in San Francisco. Guests can register at https://eff.org/effawards. The ceremony will be recorded and video will be made available at a later date. 

For the past 30 years, the EFF Awards—previously known as the Pioneer Awards—have recognized and honored key leaders in the fight for freedom and innovation online. Started when the internet was new, the Awards now reflect the fact that the online world has become both a necessity in modern life and a continually evolving set of tools for communication, organizing, creativity, and increasing human potential. 

“The free flow of information and knowledge, as well as the privacy of our communications, are important pillars of an internet that advances freedom, justice, and innovation for all,” EFF Executive Director Cindy Cohn said. “This year’s EFF Award winners are tireless champions for these values and are helping build a world in which everyone can learn and speak freely and securely. They are an inspiration to us, as well as to people around the globe. We are honored to give them our thanks and some small part of the recognition they deserve.” 

Alexandra Asanovna Elbakyan — EFF Award for Access to Scientific Knowledge 

Kazakhstani computer programmer Alexandra Asanovna Elbakyan founded Sci-Hub in 2011 to provide free and unrestricted access to all scientific knowledge. Launched as a tool for providing quick access to articles from scientific journals, Sci-Hub has grown into a database of more than 88.3 million research articles and books, freely accessible for anyone to read and download; much of this knowledge otherwise would be hidden behind paywalls. Sci-Hub is used by millions of students, researchers, medical professionals, journalists, inventors, and curious people all over the world, many of whom provide feedback saying they are grateful for this access to knowledge. Some medical professionals have said Sci-Hub helps save human lives; some students have said they wouldn't be able to complete their education without Sci-Hub's help. Through Sci-Hub, Elbakyan has strived to shatter academic publishing’s monopoly-like mechanisms in which publishers charge high prices even though authors of articles in academic journals receive no payment. She has been targeted by many lawsuits and government actions, and Sci-Hub is blocked in some countries, yet she still stands tall for the idea that restricting access to information and knowledge violates human rights.

Library Freedom Project — EFF Award for Information Democracy 

Library Freedom Project is radically rethinking the library professional organization by creating a network of values-driven librarian-activists taking action together to build information democracy. LFP offers trainings, resources, and community building for librarians on issues of privacy, surveillance, intellectual freedom, labor rights, power, technology, and more—helping create safer, more private spaces for library patrons to feed their minds and express themselves. Their work is informed by a social justice, feminist, anti-racist approach, and they believe in the combined power of long-term collective organizing and short-term, immediate harm reduction. 

Signal Foundation — EFF Award for Communications Privacy 

Since 2013, with the release of the unified app and the game-changing Signal Protocol, Signal has set the bar for private digital communications. With its flagship product, Signal Messenger, Signal provides real communications privacy, offering easy-to-use technology that refuses the surveillance business model on which the tech industry is built. To ensure that the public doesn't have to take Signal's word for it, Signal publishes their code and documentation openly, and licenses their core privacy technology to allow others to add privacy to their own products. Signal is also a 501(c)(3) nonprofit, ensuring that investors and market pressure never provide an incentive to weaken privacy in the name of money and growth. This allows Signal to stand firm against growing international legislative pressure to weaken online privacy, making it clear that end-to-end encryption either works for everyone or is broken for everyone—there is no half measure.

To register for this event: https://eff.org/effawards

For past honorees: https://www.eff.org/awards/past-winners 

Josh Richman

FBI Seizure of Mastodon Server Data is a Wakeup Call to Fediverse Users and Hosts to Protect their Users

2 months 1 week ago

We’re in an exciting time for users who want to take back control from major platforms like Twitter and Facebook. However, this new environment comes with challenges and risks for user privacy, so we need to get it right and make sure networks like the Fediverse and Bluesky are mindful of past lessons.

In May, Mastodon server Kolektiva.social was compromised when one of the server’s admins had their home raided by the FBI for unrelated charges. All of their electronics, including a backup of the instance database, were seized.

It’s a chillingly familiar story which should serve as a reminder for the hosts, users, and developers of decentralized platforms: if you care about privacy, you have to do the work to protect it. We have a chance to do better from the start in the fediverse, so let’s take it.

A Fediverse Wake-up Call

A story where “all their electronics were seized” echoes many digital rights stories. EFF’s founding case over 30 years ago, Steve Jackson Games v. Secret Service, was in part a story about the overbroad seizures of equipment in the offices of Steve Jackson Games in Texas, based upon unfounded claims about illegal behavior in a 1990s version of a chat room. That seizure nearly drove the small games company out of business. It also spurred the newly-formed EFF into action. We won the case, but law enforcement's blunderbuss approach continues through today.

This overbroad “seize it all” approach from the cops must change. EFF has long argued that seizing equipment like servers should only be done when it is relevant to an investigation. Any seized digital items that are not directly related to the search should be quickly returned, and copies of information should be deleted as soon as police know they are unrelated, just as they should do for nondigital items that they seize. EFF will continue to advocate for this in the courts and in Congress, and all of us should continue to demand it.

Law enforcement must do better, even when they have a warrant (as they did here). But we can’t reasonably expect law enforcement to do the right thing every time, and we still have work to do to shift the law more firmly in the right direction. So this story should also be a wake-up call for the thousands of hosts in the growing decentralized web: you have to have your users’ backs too.

Why Protecting the Fediverse Matters

Protecting user privacy is a vital priority for the Fediverse. Many fediverse instances, such as Kolektiva, are focused on serving marginalized communities who are disproportionately targeted by law enforcement. Many were built to serve as a safe haven for those who too often find themselves tracked and watched by the police. Yet this raid put the thousands of users this instance served into a terrible situation. According to Kolektiva, the seized database, now in the FBI’s possession, includes personal information such as email addresses, hashed passwords, and IP addresses from three days prior to the date the backup was made. It also includes posts, direct messages, and interactions involving a user on the server. Because of the nature of the fediverse, this also implicates user messages and posts from other instances. 

To make matters worse, it appears that the admin targeted in the raid was in the middle of maintenance work, which left material that would ordinarily be encrypted sitting on the server in unencrypted form at the time of the seizure.

Most users are unaware that, in general, once the government lawfully collects information, various legal doctrines allow it to use that information for investigating and prosecuting crimes that have nothing to do with the original purpose of the seizure. The truth is, once the government has the information, it often uses it, and all too often the law supports that. Defendants in those prosecutions could challenge the use of this data outside the scope of the original warrant, but that’s often cold comfort.

What is a decentralized server host to do?  

EFF’s “Who Has Your Back” recommendations for protecting your users when the government comes knocking aren’t just for large centralized platforms. Hosts of decentralized networks must include possibilities like government seizure in their threat model and be ready to respond in ways that stand with their users.

First of all, basic security practices that apply to any server exposed to the internet also apply to Mastodon. Use firewalls and limit user access to the server as well as the database. If you must keep access logs, keep them only for a reasonable amount of time and review them periodically to make sure you’re only collecting what you need. This is true more broadly: to the extent possible, limit the data your server collects and stores, and only store data for as long as it is necessary. Also stay informed about possible security threats in the Mastodon code, and update your server when new versions are released.
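
As a concrete illustration of the “keep logs only as long as you need them” advice above, here is a minimal sketch of a retention job a host could run on a schedule. The log directory, file pattern, and seven-day window are assumptions to adapt to your own instance, not a recommendation for any particular server setup.

import time
from pathlib import Path

LOG_DIR = Path("/var/log/nginx")   # hypothetical location of your access logs
MAX_AGE_DAYS = 7                   # retain only what you actually need

def prune_old_logs() -> None:
    """Delete access logs older than the retention window."""
    cutoff = time.time() - MAX_AGE_DAYS * 24 * 60 * 60
    for log_file in LOG_DIR.glob("access.log*"):
        if log_file.stat().st_mtime < cutoff:
            log_file.unlink()      # permanently removes the stale log file
            print(f"deleted {log_file}")

if __name__ == "__main__":
    prune_old_logs()

Run from cron or a systemd timer, a job like this keeps the window of data available to a seizure (or a breach) as small as possible.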

Second, make sure that you’ve adopted policies and practices to protect your users, including clear and regular transparency reports about law enforcement attempts to access user information and policies about what you will do if the cops show up – things like requiring a warrant for content, and fighting gag orders. Critically, that should include a promise to notify your users as soon as possible about any law enforcement action where law enforcement gained access to their information and communications. EFF’s Who Has Your Back pages go into detail about these and other key protections. EFF also prepared a legal primer for fediverse hosts to consider.

In Kolektiva’s case, the hosts were fairly slow to give notice. The raid occurred in mid-May and the notice didn’t come until June 30, about six weeks later. That’s quite a long delay, even if it took Kolektiva a while to realize the full impact of the raid. As a host of other people’s communications, it is vital to give notice as soon as you are able, as you generally have no way of knowing how much risk this information poses to your users and must assume the worst. The extra notice is vital so users can take any necessary steps to protect themselves.

What can users do?

For users joining the fediverse, you should evaluate the about page for a given server to see what precautions (if any) it outlines. Once you’ve joined, you can take advantage of the smaller scale of community on the platform, and raise these issues directly with admins and other users on your instance. Insist that the obligations from Who Has Your Back, including to notify you and to resist law enforcement demands where possible, be included in the instance information and terms of service. Making these commitments binding in the terms of service is not only a good idea, it can help the host fight back against overbroad law enforcement requests and can support later motions by defendants to exclude the evidence.

Another benefit of the fediverse, unlike the major lock-in platforms, is that if you don’t like their answer, you can easily find and move to a new instance. However, since most servers in this new decentralized social web are hosted by enthusiasts, users should approach these networks mindful of privacy and security concerns. This means not using these services for sensitive communications, being aware of the risks of social network mapping, and taking some additional precautions when necessary like using a VPN or Tor, and a temporary email address.

What can developers do?

While it would not have protected all of the data seized by the FBI in this case, end-to-end encryption of direct messages is something that has been regrettably absent from Mastodon for years, and would at least have protected the most private content likely to have been on the Kolektiva server. There have been some proposals to enable this functionality, and developers should prioritize finding a solution. 

The Kolektiva raid should be an important alarm bell for everyone hosting decentralized content. Police raids and seizures can be difficult to predict, even when you’ve taken a lot of precautions. EFF’s Who Has Your Back recommendations and, more generally, our Legal Primer for User Generated Content and the Fediverse should be required reading. And making sure you have your users’ backs should be a founding principle for every server in the fediverse. 

Update: This post's title has been updated to clarify that the FBI seized Mastodon server data, not control over the server itself.

Cindy Cohn

The NDAA is No Place for Sweeping Internet Legislation Like the STOP CSAM Act

2 months 1 week ago

The STOP CSAM Act of 2023 would undermine services offering end-to-end encryption and push internet companies to take down lawful user speech. This dangerous bill would threaten security and free speech on the internet—but incredibly, it may pass Congress without even being seriously debated.  Some lawmakers are seeking to attach this dangerous proposal to the National Defense Authorization Act (NDAA), the “must-pass” military budget bill.  

As we’ve written before, the STOP CSAM Act would create new criminal and civil claims against online providers based on broad terms and low standards, and would undermine digital security for all internet users. It does three main things:

  • It makes it a crime for providers to “knowingly host or store” CSAM or “knowingly promote or facilitate” the sexual exploitation of children, including the creation of CSAM, on their platforms. 
  • It creates a new civil claim and corresponding Section 230 carveout based on the lower standard of “recklessness”.   
  • It creates a notice-and-takedown system overseen by a newly created Child Online Protection Board. 

Taken together, these provisions greatly endanger encrypted communications and the protections that ensure platforms can operate. That harms internet users who rely on intermediaries to speak online—that is, all of us.

Providing Encryption Isn’t a Crime

This bill introduces the same misleading “encryption exception” found in the EARN IT Act, which we’ve written about at length. The exception purports to protect online platforms from liability for offering encrypted services, but it specifically allows the use of encryption to be introduced as evidence of the facilitation of illegal material. 

If encryption can be introduced as evidence of the facilitation of illegal material, the bill potentially allows people to be sued or prosecuted for even merely providing an encrypted app. It’s likely that plaintiffs will argue that companies merely offering end-to-end encryption are “recklessly” enabling the sharing of illegal content on their platforms by failing to scan for and remove that content.

Though the bill specifies that a platform must have “knowledge” of the illegal content in order to be criminally liable, and that it is a defense that the company cannot remove it (such as when it is encrypted content uploaded by a user without the providers’ knowledge), the question remains why this new crime is needed when it is already a federal crime for anyone to promote or distribute CSAM. Existing law already requires online service providers who have actual knowledge of “apparent” CSAM on their platforms to report that content to the National Center for Missing and Exploited Children (NCMEC), which is essentially a government entity.  

TAKE ACTION

TELL CONGRESS NOT TO OUTLAW ENCRYPTED APPS

Bad Actors Will Push Online Services to Censor Legal Speech 

The STOP CSAM Act also poses significant threats to free speech online.

Section 230 creates the legal breathing room for internet intermediaries to create online spaces for people to freely communicate around the world, with low barriers to entry. This law would create a new exception to Section 230’s partial immunity, exposing providers to more lawsuits. 

The bill also creates a convoluted notice-and-takedown regime that allows individuals to file complaints against companies to remove alleged CSAM from their platforms. This system would certainly be gamed by bad actors, exposing platforms and users to bogus takedown requests, likely involving First Amendment-protected content involving sexuality, sexual orientation, or gender identity. 

To mitigate the risk of new civil lawsuits and administrative proceedings, platforms would censor more and more user content and accounts, with minimal regard as to whether that content is in fact legal. 

If elected lawmakers want to limit our free speech, they should just admit it and debate the issue. Instead, they’re hiding the ball. It’s outrageous to include a law that would have such a huge impact on speech in must-pass legislation without full discussion. Please join us in telling Congress not to pass the STOP CSAM Act. 

TAKE ACTION

TELL CONGRESS NOT TO OUTLAW ENCRYPTED APPS

Jason Kelley

Digital Rights Updates with EFFector 35.9

2 months 1 week ago

There's been a lot happening in the digital rights movement, so if you need to catch up, you've come to the right place! Learn more about the latest news with EFF's EFFector newsletter, featuring updates, upcoming events, and more. This latest issue features our work pushing for human rights protections in the UN Cybercrime Treaty, a report about platform regulation in Brazil, our thoughts on generative AI policy, and much more.

Learn more about the latest happenings by reading the full newsletter here, or you can listen to the audio version below!

Listen on YouTube

EFFector 35.9 | Fighting for Internet Freedoms Around the World

Make sure you never miss an issue by signing up by email to receive EFFector as soon as it's posted! Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero

Court Rejects Efforts to Identify Anonymous Webhost

2 months 1 week ago

Note: This post was co-authored by EFF legal intern Janelle Robins

In a victory for online expression, a U.S. District Court judge has quashed a subpoena aimed at revealing the identity of an anonymous person who simply hosted a website.

The background facts tell a too-familiar tale in the age of social media: a New Jersey man was fired from his job after an anti-fascist Twitter account suggested he was involved with a white supremacist organization. He and several others targeted by the Twitter account responded by suing everyone they claimed was even vaguely connected to the doxxing, including Torch Antifa, a network of antifascist activists. Those conspiracy claims were dismissed, but he nevertheless issued a subpoena to Cloudflare seeking the identity of a web host whose only relation to the case was that they had previously hosted Torch Antifa’s website.

EFF moved to quash the subpoena on behalf of that host, explaining that the proposed order violated the Doe host’s First Amendment rights to anonymity and free association, as well as their privacy interests. A legitimately harmed person can, in some instances, pierce a Doe’s anonymity. But they must show why doing so is justified, especially when the Doe is a third party to the lawsuit. Given that the conspiracy charge was dismissed, there was no way the Plaintiff could meet that standard.

EFF has been working for decades in similar cases because we know that people with widely varying goals and interests rely on First Amendment protections to freely associate, advocate, and seek out information. In the digital era, where third-party service providers play a pivotal role in hosting content, the First Amendment’s safeguard of a service provider's right to make anonymous decisions about whose speech they will disseminate is particularly crucial.  Unfortunately, legal process can be exploited to uncover the identities of these providers in order to harass, intimidate, or silence them. That risk was clear here, where EFF’s client had reason to fear retaliation given that the person seeking their identity had been publicly associated with a white supremacist organization.

Corynne McSherry

DC Circuit FOSTA Ruling Lets a Bad Law Stay on the Books, But Offers Meaningful Protection for Some Sex Work Forums and Sex Workers Using Online Services

2 months 1 week ago

The Court of Appeals for the D.C. Circuit on July 7 affirmed the dismissal of Woodhull Freedom Foundation v. US, the constitutional challenge to FOSTA. That’s certainly disappointing: this bad law will now stay on the books.

But the good news is that FOSTA stays on the books in a more limited manner: the court sharply narrowed FOSTA to address the arguments that its overly broad reach criminalized protected speech and caused a massive chilling effect on online speech. That’s a significant improvement. It means that sex workers, advocates for sex workers’ rights, and other online speakers are better protected from prosecution, or from the chilling effects that come with fear of prosecution. But the court’s opinion still leaves many questions unanswered and uncertainty for the online intermediaries upon whom sex workers rely.

FOSTA, the Allow States and Victims to Fight Online Sex Trafficking Act, contained multiple speech-restricting provisions. Most significantly, it:

  • Created new federal criminal and civil liability for anyone who “owns, manages, or operates an interactive computer service” and speaks, or hosts third-party content to “promote or facilitate the prostitution of another person.”
  • Expanded criminal and civil liability to treat any online speaker or platform that allegedly assists, supports, or facilitates sex trafficking as “participating in a venture” with individuals directly engaged in sex trafficking.
  • Carved out significant exceptions to the immunity provisions of 47 U.S.C. § 230 to create new criminal and civil liability for online platforms based on whether the content expressed by their users’ speech might be seen as promoting or facilitating prostitution, or as assisting, supporting, or facilitating sex trafficking.

Five plaintiffs filed a constitutional challenge to the law on the grounds that it silences protected speech by muzzling online speakers and forcing online intermediaries to censor their users. The lawsuit was supported by expert declarations and amicus briefs that showed how the law led numerous legally operated online sites to shut down, thwarting both harm reduction efforts and law enforcement, and forcing sex workers back on to the streets across the country.

The plaintiffs were Woodhull Freedom Foundation, Human Rights Watch, the Internet Archive, and two individuals, Eric Koszyk, a licensed massage therapist whose business was blocked when Craigslist shut down its Personal Services listings because of FOSTA, and Alex Andrews, an advocate for sex workers’ health and safety and the co-founder of RateThatRescue.org. EFF represented the plaintiffs along with lead counsel, Robert Corn-Revere (initially with Davis Wright Tremaine LLP and later with the Foundation for Individual Rights & Expression); Walters Law Group; and Daphne Keller.

In an earlier ruling, the D.C. Circuit concluded that the Woodhull plaintiffs had standing to challenge FOSTA, noting that both Andrews (as the operator of an online discussion forum) and Koszyk (as an individual affected by Craigslist’s response to FOSTA) had raised legitimate concerns about the law’s sweep.

Now, the D.C. Circuit has rejected plaintiffs’ claims about FOSTA’s unconstitutionality. It held instead that the law did not cover a significant amount of protected speech, relying in large part on a Supreme Court opinion issued just a few weeks before.

With respect to FOSTA’s expanded definition of “participation in a venture,” the court ruled that “knowingly assisting, supporting, or facilitating” sex trafficking did not carry its plain everyday meaning, which would bring within the law’s sweep protected speech, such as advocacy for the decriminalization of sex work and the general distribution of health and safety information. Instead, the court said, FOSTA’s prohibition was limited to the criminal law concept of aiding and abetting.

Aiding and abetting means helping someone with the specific intent to further the commission of a crime. FOSTA thus only prohibits “aiding and abetting a venture that one knows to be engaged in sex trafficking while knowingly benefiting from that venture.” The court found that this definition did not include any speech protected by the First Amendment, since speech in furtherance of such an illegal act is not protected.

The court also found that the fault requirement in this part of FOSTA, despite initial disparities debated among federal courts, is now accepted as actual knowledge. That is, the law bars only knowing participation in a sex trafficking venture, not participation when one “should have known” they were participating in a sex trafficking venture.

The court adopted a similarly narrow interpretation of the new federal criminal law that made it illegal to operate an online service to “promote or facilitate the prostitution of another person.” The court held that “promoting” in the statute means only aiding and abetting prostitution, that is, “owning, managing, or operating an online platform with the intent to recruit, solicit, or find a place of business for a sex worker.” “Facilitation” goes beyond those specific acts to also include other forms of aiding and abetting. But problematically, the court doesn’t specify what these are.

The court also emphasized that the law did not prohibit “promoting or facilitating” prostitution, as such, but rather “the prostitution of another person,” which also carries a specialized meaning under criminal law. The examples the court gives imply that the “prostitution of another person” essentially means pimping—it is a crime to “recruit, solicit or find a place of business for a sex worker,” “running a ‘prostitution business,’” “actions that cause a specific person to ‘be prostituted’ or helps to orchestrate their prostitution,” “procuring a prostitute for a person,” “and other actions that aid and abet prostitution, like getting someone addicted to drugs, stealing their money or passports, or threatening them against leaving.”

The court explained that as written, “The language bespeaks something done to a particular person–aiding their prostitution by someone else or some force independent of the person being prostituted.” FOSTA does not “proscribe facilitating prostitution more generally.”

This limited reading of FOSTA offers significant protection to two categories of speakers.

First, FOSTA does not apply to sex workers themselves who use online services for their own sex work. This is not completely surprising, since FOSTA chiefly targeted online intermediaries, not the speakers themselves.

Second, and critically, FOSTA does not apply to speakers who generally advocate for decriminalization, discuss sex work or inform or educate others about it, or to intermediary platforms that provide forums for this speech. The court specified that FOSTA does not criminalize acts done

with the intent to engage in general advocacy about prostitution, or to give advice to sex workers generally to protect them from abuse. Nor would it cover the intent to preserve for historical purposes webpages that discuss prostitution.

But beyond this, the opinion is unclear.

FOSTA’s application to online intermediaries and others who assist specific adult, consensual sex workers remains unsettled. The court only hints at limiting “the prostitution of another person” to trafficking, referring to Congress’s intention to “eradicate the use of online platforms when they contribute to sex work that is compelled by ‘force, fraud, and coercion.’” The court did not go so far as to limit “the prostitution of another person” to non-consensual or non-adult work (as trafficking is defined). It is not certain, for example, whether providing a Safe John list to a specific, known sex worker would be excluded as “giving advice to sex workers generally” or if that would be prohibited—the word “generally” seems potentially limiting. And the law appears to still apply, for example, to one who provides web-hosting services to a known sex work business—though a prosecutor would have to show they had the “intent” to promote or facilitate that sex work business’s illegal prostitution.

Recent Supreme Court cases support an even more limited reading that would apply FOSTA only to those intermediaries who aid and abet a specific, discrete criminal act, rather than supporting an unlawful enterprise more broadly. In US v. Hansen, the case heavily relied upon by the D.C. Circuit in Woodhull, the court emphasized that aiding and abetting requires “an intent to bring about a particular unlawful act” that is then actually carried out. This specific intent is required even if it is not explicitly written into the statute. And in Twitter v. Taamneh, the court held that it was not civil aiding and abetting for intermediaries to knowingly allow terrorist organizations to use their platforms at arm’s length, the way other users do:

“Yet, we generally do not think that internet or cell service providers incur culpability merely for providing their services to the public writ large. Nor do we think that such providers would normally be described as aiding and abetting, for example, illegal drug deals brokered over cell phones—even if the provider’s conference-call or video-call features made the sale easier.”

According to the Supreme Court, online intermediaries do not aid and abet when they are generally aware of an overall unlawful enterprise; the intermediary must instead knowingly assist a “specific, wrongful act.” And they have to take some affirmative action with the intent of facilitating the commission of a specific offense—“encouraging, soliciting, or advising the commission of” an unlawful act.

Given all of this, we hope that online forums for sex work do not remain closed, as general-use spaces such as Craigslist’s Personal Services listings have; those apparently carried too great a risk of being used for sex work. But there remains some uncertainty and, unfortunately, the major platforms have shown no inclination to stick their necks out for sex workers.

Lastly, the court in Woodhull rejected plaintiffs’ argument that FOSTA’s retroactivity provision was an unconstitutional due process violation. The court said that that issue must be raised as a defense if one is prosecuted under the law for pre-FOSTA conduct.

While it is disappointing that FOSTA remains on the books, we appreciate that the court significantly limited its previously broad reach.

And this litigation also gave sex workers and sex work advocates a prominent forum in which to present evidence of why FOSTA, even if legally valid, remains an awful policy. The numerous amicus briefs filed throughout the case detailed FOSTA’s grave harms in forcing sex work back onto unsafe streets, impeding harm reduction and safety efforts, and thwarting law enforcement. These briefs, and the other conversations around this litigation, have helped to change the narrative around FOSTA, and have contributed to some overdue Congressional second-guessing.

We offer our heartfelt thanks to the brave plaintiffs who put their names on this case – Woodhull Freedom Foundation, Eric Koszyk, Alex Andrews, Human Rights Watch, and the Internet Archive, and to our dream team of co-counsel—Robert Corn-Revere, Lawrence Walters, Daphne Keller, Ronald London, Adam Sieff, Ceasar Kalinowski, and numerous other lawyers at Davis Wright Tremaine LLP and the Foundation for Individual Rights & Expression.

Related Cases: Woodhull Freedom Foundation et al. v. United States
David Greene