So-called “Consent Searches” Harm Our Digital Rights


Imagine this scenario: You’re driving home. Police pull you over, allegedly for a traffic violation. After you provide your license and registration, the officer catches you off guard by asking: “Since you’ve got nothing to hide, you don’t mind unlocking your phone for me, do you?” Of course, you don’t want the officer to copy or rummage through all the private information on your phone. But they’ve got a badge and a gun, and you just want to go home. If you’re like most people, you grudgingly comply.

Police use this ploy, thousands of times every year, to evade the Fourth Amendment’s requirement that police obtain a warrant, based on a judge’s independent finding of probable cause of crime, before searching someone’s phone. These misleadingly named “consent searches” invade our digital privacy, disparately burden people of color, undermine judicial supervision of police searches, and rest on a legal fiction.

Legislatures and courts must act. In highly coercive settings, like traffic stops, police must be banned from conducting “consent searches” of our phones and similar devices.

In less-coercive settings, such “consent searches” must be strictly limited. Police must have reasonable suspicion that crime is afoot. They must collect and publish statistics about consent searches, to deter and detect racial profiling. The scope of consent must be narrowly construed. And police must tell people they can refuse.

Other kinds of invasive digital searches currently rest on “consent,” too. Schools use it to search the phones of minor students. Police also use it to access data from home internet of things (IoT) devices, like Amazon Ring doorbell cameras, that are streamlined for bulk police requests. Such “consent” requests must also be limited.

“Consent” Is a Legal Fiction

The “consent search” end-run around the warrant requirement rests on a legal fiction: that people who say “yes” to an officer’s demand for “consent” have actually consented. The doctrinal original sin is Schneckloth v. Bustamonte (1973), which held that “consent” alone is a legal basis for search, even if the person searched was not aware of their right to refuse. As Justice Thurgood Marshall explained in his dissent:

All the police must do is conduct what will inevitably be a charade of asking for consent. If they display any firmness at all, a verbal expression of assent will undoubtedly be forthcoming.

History has proven Justice Marshall right. Field data show that the overwhelming majority of people grant “consent.” For example, statistics on all traffic stops in Illinois, for 2015, 2016, 2017, and 2018, show that about 85% of white drivers and about 88% of minority drivers grant consent.

Lab data show the same. For example, a 2019 study in the Yale Law Journal, titled “The Voluntariness of Voluntary Consent,” asked each participant to unlock their phone for a search. Compliance rates were 97% and 90%, in two cohorts of about 100 people each.

The study separately asked other people whether a hypothetical reasonable person would agree to unlock their phone for a search. These participants were not themselves asked for consent. Some 86% and 88% of these two cohorts (again about 100 participants each) predicted that a reasonable person would refuse to grant consent. The authors observed that this “empathy gap” appears in many social psychology experiments on obedience. They warned that judges in the safety of their chambers may assume that motorists stopped by police feel free to refuse search requests, when in fact motorists don’t.

Why might people comply with search requests from police when they don’t want to? Many are not aware they can refuse. Many others reasonably fear the consequences of refusal, including longer detention, speeding tickets, or even further escalation, including physical violence. Further, many savvy officers use their word choice and tone to dance on the line between commands—which require objective suspicion—and requests—which don’t.

“Consent Searches” Are Widespread

In October 2020, Upturn published a watershed study about police searches of our phones, called “Mass Extraction.” It found that more than 2,000 law enforcement agencies, located in all 50 states, have purchased surveillance technology that can conduct “forensic” searches of our mobile devices. Further, police have used this tech hundreds of thousands of times to extract data from our phones.

The Upturn study also found that police based many of these searches on “consent.” For example, consent searches account for 38% of all cell phone searches in Anoka County, Minnesota; about one-third in Seattle, Washington; and 18% in Broward County, Florida.

Far more common are “manual” searches, where officers themselves scrutinize the data in our phones, without assistance from external software. For example, there was a ten-to-one ratio of manual searches to forensic searches by U.S. Customs and Border Protection in fiscal year 2017. Manual searches are just as threatening to our privacy. Police access virtually the same data (except some forensic searches recover “deleted” data or bypass encryption). Also, it is increasingly easy for police to use a phone’s built-in search tools to locate pertinent data. As with forensic searches, it is likely that a large portion of manual searches are by “consent.”

“Consent Searches” Invade Privacy

Phone searches are extraordinarily invasive of privacy, as the U.S. Supreme Court explained in Riley v. California (2014). In that case, the Court held that an arrest alone does not absolve police of their ordinary duty to obtain a warrant before searching a phone. Quantitatively, our phones have “immense storage capacity,” including “millions of pages of text.” Qualitatively, they “collect in one place many distinct types of information – an address, a note, a prescription, a bank statement, a video – that reveal much more in combination than any isolated record.” Thus, phone searches “bear little resemblance” to searches of containers like bags that are “limited by physical realities.” Rather, phone searches reveal the “sum of an individual’s private life.”

“Consent Searches” Cause Racial Profiling

There is a greater risk of racial and other bias, intentional or implicit, when decision-makers have a high degree of subjective discretion, compared to when they are bounded by objective criteria. This occurs in all manner of contexts, including employment and law enforcement.

Whether to ask a person for “consent” to search is a high-discretion decision. The officer needs no suspicion at all and will almost always receive compliance.

Predictably, field data show racial profiling in “consent searches.” For example, the Illinois State Police (ISP) in 2019 were more than twice as likely to seek consent to search the cars of Latinx drivers compared to white drivers, yet more than 50% more likely to find contraband when searching the cars of white drivers compared to Latinx drivers. ISP data show similar racial disparities in other years.

When it comes to phone searches, it is highly likely that police likewise seek “consent” more often from people of color. Especially in this moment of growing national recognition that Black Lives Matter, we must end police practices like these that unfairly burden people of color.

“Consent Searches” Undermine Judicial Review of Policing

Judges often examine warrantless searches by police. This occurs in criminal cases, if the accused moves to suppress evidence, and in civil cases, if a searched person sues the officer. Either way, the judge may analyze whether the officer had the requisite level of suspicion. This incentivizes officers to turn square corners while out in the field. And it helps ensure enforcement of the Fourth Amendment’s ban on “unreasonable” searches.

Police routinely evade this judicial oversight through the simple expedient of obtaining “consent” to search. The judge loses their power to investigate whether the officer had the requisite level of suspicion. Instead, the judge may only inquire whether “consent” was genuine.

Given all the problems discussed above, what should be done about police “consent searches” of our phones?

Ban “Consent Searches” in High-Coercion Settings

Legislatures and judges should bar police from searching a person’s phone or similar electronic devices based on consent when the person is in a high-coercion setting. This rule should apply during traffic stops, sidewalk detentions, home searches, station house arrests, and any other encounters with police where a reasonable person would not feel free to leave. This rule should apply to both manual and forensic searches.

Upturn’s 2020 study called for a ban on “consent searches” of mobile devices. It reasons that “the power and information asymmetries of cellphone consent searches are egregious and unfixable,” so “consent” is “essentially a legal fiction.” Further, consent searches are “the handmaiden of racial profiling.”

Civil rights advocates, such as the ACLU, have long demanded bans on “consent searches” of vehicles or persons during traffic stops. In 2003, the ACLU of Northern California settled a lawsuit with the California Highway Patrol, placing a three-year moratorium on consent searches. In 2010, the ACLU of Illinois petitioned the U.S. Department of Justice to ban the use of consent searches by the Illinois State Police. In 2018, the ACLU of Maryland supported a bill to ban them. Of course, searches of phones are even more privacy-invasive than searches of cars.

Strictly Limit “Consent Searches” in Less-Coercive Settings

Outside of high-coercion settings, some people may have a genuine interest in letting police inspect some of the data in their phones. An accused person might wish to present geolocation data showing they were far from the crime scene. A date rape survivor might wish to present a text message from their assailant showing the assailant’s state of mind. In less-coercive settings, consent to present such data is more likely to be truly voluntary.

Thus, it is not necessary to ban “consent searches” in less-coercive settings. But even in such settings, legislatures and courts must impose strict limits.

First, police must have reasonable suspicion that crime is afoot before conducting a consent search of a phone. Nearly two decades ago, the Supreme Courts of New Jersey and Minnesota imposed this limit on consent searches during traffic stops. So does a Rhode Island statute. This rule limits the subjective discretion of officers, and thus the risk of racial profiling. Further, it ensures courts may evaluate whether the officer had a criminal predicate before invading a person’s privacy.

Second, police must collect and publish statistics about consent searches of electronic devices, to deter and detect racial profiling. This is a common practice for police searches of pedestrians, motorists, and their effects. Then-State Senator Barack Obama helped pass the Illinois statute that requires this during traffic stops. California and many other states have similar laws.

Third, police and reviewing courts must narrowly construe the scope of a person’s consent to search their device. For example, if a person consents to a search of their recent text messages, a police officer must be barred from searching their older text messages, as well as their photos or social media. Otherwise, every consent search of a phone will turn into a free-ranging inquisition into all aspects of a person’s life. For the same reasons, EFF advocates for a narrow scope of device searches even pursuant to a warrant.

Fourth, before an officer searches a person’s phone by consent, the officer must notify the person of their legal right to refuse. The Rhode Island statute requires this warning, though only for youths. This is analogous to the famous Miranda warning about the right to remain silent.

Other Kinds of “Consent Searches”

Of course, consent searches by police of our phones are not the only kind of consent searches that threaten our digital rights.

For example, many public K-12 schools search students’ phones by “consent.” Indeed, some schools use forensic technology to do so. Given the inherent power imbalance between minor students and their adult teachers and principals, limits on consent searches by schools must be at least as privacy-protective as those presented above.

Also, some companies have built home internet of things (IoT) devices that facilitate bulk consent requests from police to residents. For example, Amazon Ring has built an integrated system of home doorbell cameras that enables local police, with the click of a mouse, to send residents a message requesting footage of passersby, neighbors, and themselves. Once police have the footage, they can use and share it with few limits. Ring-based consent requests may be less coercive than those during a home search. Still, the strict limits above must apply: reasonable suspicion, publication of aggregate statistics, narrow construction of the scope of consent, and notice of the right to withhold consent.

Adam Schwartz

It’s Business As Usual At WhatsApp


WhatsApp users have recently started seeing a new pop-up screen requiring them to agree to its new terms and privacy policy in order to keep using the app. At first users were required to agree by February 8th, but after widespread controversy WhatsApp has announced it will delay that date to May 15.

The good news is that, overall, this update does not make any extreme changes to how WhatsApp shares data with its parent company Facebook. The bad news is that those extreme changes actually happened over four years ago, when WhatsApp updated its privacy policy in 2016 to allow for significantly more data sharing and ad targeting with Facebook. What's clear from the reaction to this most recent change is that WhatsApp shares much more information with Facebook than many users were aware, and has been doing so since 2016. And that’s not users’ fault: WhatsApp’s obfuscation and misdirection around what its various policies allow has put its users in a losing battle to understand what, exactly, is happening to their data.

These new terms of service and privacy policy are one more step in Facebook's long-standing effort to monetize its messaging properties, and are also in line with its plans to make WhatsApp, Facebook Messenger, and Instagram Direct less separate. This brings serious privacy and competition concerns, including but not limited to WhatsApp's ability to share new information with Facebook about users' interactions with new shopping and payment products.

To be clear: WhatsApp still uses strong end-to-end encryption, and there is no reason to doubt the security of the contents of your messages on WhatsApp. The issue here is other data about you, your messages, and your use of the app. We still offer guides for WhatsApp (for iOS and Android) in our Surveillance Self-Defense resources, as well as for Signal (for iOS and Android).

Then and Now

This story really starts in 2016, when WhatsApp changed its privacy policy for the first time since its 2014 acquisition to allow Facebook access to several kinds of WhatsApp user data, including phone numbers and usage metadata (e.g. information about how long and how often you use the app, as well as your operating system, IP address, mobile network, etc.). Then, as now, public statements about the policy highlighted how this sharing would help WhatsApp users communicate with businesses and receive more "relevant" ads on Facebook.

At the time, WhatsApp gave users a limited option to opt out of the change. Specifically, users had 30 days after first seeing the 2016 privacy policy notice to opt out of “shar[ing] my WhatsApp account information with Facebook to improve my Facebook ads and product experiences.” The emphasis is ours; it meant that WhatsApp users were able to opt out of seeing visible changes to Facebook ads or Facebook friend recommendations, but could not opt out of the data collection and sharing itself.

If you were a WhatsApp user in August 2016 and opted out within the 30-day grace period, that choice will still be in effect. You can check by going to the “Account” section of your settings and selecting “Request account info.” The more than one billion users who have joined since then, however, did not have the option to refuse this expanded sharing of their data, and have been subject to the 2016 policy this entire time.

Now, WhatsApp is changing the terms again. The new terms and privacy policy are mainly concerned with how businesses on WhatsApp can store and host their communications. This is happening as WhatsApp plans to roll out new commerce tools in the app like Facebook Shops. Taken together, this renders the borders between WhatsApp and Facebook (and Facebook-owned Instagram) even more permeable and ambiguous. Information about WhatsApp users’ interactions with Shops will be available to Facebook, and can be used to target the ads you see on Facebook and Instagram. On top of the WhatsApp user data Facebook already has access to, this is one more category of information that can now be shared and used for ad targeting. And there’s still no meaningful way to opt out.

So when WhatsApp says that its data sharing practices and policies haven’t changed, it is correct—and that’s exactly the problem. Those practices and policies have represented an erosion of Facebook’s and WhatsApp’s original promises to keep the apps separate for over four years now, and these new products mean the scope of data that WhatsApp has access to, and can share with Facebook, is only expanding. 

All of this looks different for users in the EU, who are protected by the EU’s General Data Protection Regulation, or GDPR. The GDPR prevents WhatsApp from simply passing on user data to Facebook without the permission of its users. As user consent must be freely given, voluntary, and unambiguous, the all-or-nothing consent framework that appeared to many WhatsApp users last week is not allowed. Tying consent for the performance of a service (in this case, private communication on WhatsApp) to additional data processing by Facebook (like shopping, payments, and data sharing for targeted advertising) violates the “coupling prohibition” under the GDPR.

The Problems with Messenger Monetization

Facebook has been looking to monetize its messaging properties for years. WhatsApp’s 2016 privacy policy change paved the way for Facebook to make money off it, and its recent announcements and changes point to a monetization strategy focused on commercial transactions that span WhatsApp, Facebook, and Instagram.

Offering a hub of services on top of core messaging functionality is not new—LINE and especially WeChat are two long-standing examples of “everything apps”—but it is a problem for privacy and competition, especially given WhatsApp's pledge to remain a “standalone” product from Facebook. Even more dangerously, this kind of mission creep might give those who would like to undermine secure communications another pretense to limit, or demand access to, those technologies.

With three major social media and messaging properties in its “family of companies”—WhatsApp, Facebook Messenger, and Instagram Direct—Facebook is positioned to blur the lines between various services with anticompetitive, user-unfriendly tactics. When WhatsApp bundles new Facebook commerce services around the core messaging function, it bundles the terms users must agree to as well. The message this sends to users is clear: regardless of what services you choose to interact with (and even regardless of whether or when those services are rolled out in your geography), you have to agree to all of it or you’re out of luck. We’ve addressed similar user choice issues around Instagram’s recent update.

After these new shopping and payment features, it wouldn’t be unreasonable to expect WhatsApp to drift toward even more data sharing for advertising and targeting purposes. After all, monetizing a messenger isn’t just about making it easier for you to find businesses; it's also about making it easier for businesses to find you.

Facebook is no stranger to building and then exploiting user trust. Part of WhatsApp’s immense value to Facebook was, and still is, its reputation for industry-leading privacy and security. We hope that doesn’t change any further.  

UPDATE 1/15/21: This post has been updated to reflect WhatsApp's announcement that it will delay the date by which users must agree to the new terms and privacy policy to May 15.

Gennie Gebhart

EFF Welcomes Fourth Amendment Defender Jumana Musa to Advisory Board


Our Fourth Amendment rights are under attack in the digital age, and EFF is proud to announce that human rights attorney and racial justice activist Jumana Musa has joined our advisory board, bringing great expertise to our fight defending users’ privacy rights.

Musa is Director of the Fourth Amendment Center at the National Association of Criminal Defense Lawyers (NACDL), where she oversees initiatives to challenge Fourth Amendment violations and outdated legal doctrines that have allowed the government and law enforcement to rummage, with little oversight or restrictions, through people’s private digital files.

The Fourth Amendment Center provides assistance and training for defense attorneys handling cases involving surveillance technologies like geofencing, Stingrays that track people’s cell phone locations, facial recognition, and more.

In a recent episode of EFF’s How to Fix the Internet podcast, Musa said an important goal in achieving privacy protections for users is to build case law to remove the “third party doctrine.” This is the judge-created legal tenet that metadata—names of people you called or who called you, websites you visited, or your location—held by third parties like Internet providers, phone companies, or email services, isn’t private and therefore isn’t protected by the Fourth Amendment. Police are increasingly using spying tools in criminal investigations to gather metadata in whole communities or during protests, Musa said, a practice that disproportionately affects Black and Indigenous people, and communities of color.

Prior to joining NACDL, Ms. Musa was a policy consultant for the Southern Border Communities Coalition, comprised of over 60 groups across the southwest organized to help immigrants facing brutality and abuse by border enforcement agencies and to support a humane immigration agenda.

Previously, as Deputy Director for the Rights Working Group, a national coalition of civil rights, civil liberties, human rights, and immigrant rights advocates, Musa coordinated the “Face the Truth” campaign against racial profiling. She was also the Advocacy Director for Domestic Human Rights and International Justice at Amnesty International USA, where she addressed the domestic and international impact of U.S. counterterrorism efforts on human rights. She was one of the first human rights attorneys allowed to travel to the naval base at Guantanamo Bay, Cuba, and served as Amnesty International's legal observer at military commission proceedings on the base. 

Welcome to EFF, Jumana!

Karen Gullo

Face Surveillance and the Capitol Attack


After last week’s violent attack on the Capitol, law enforcement is working overtime to identify the perpetrators. This is critical to accountability for the attempted insurrection. Law enforcement has many, many tools at their disposal to do this, especially given the very public nature of most of the organizing. But we object to one method reportedly being used to determine who was involved: law enforcement using facial recognition technologies to compare photos of unidentified individuals from the Capitol attack to databases of photos of known individuals. There are just too many risks and problems in this approach, both technically and legally, to justify its use. 

Government use of facial recognition crosses a bright red line, and we should not normalize its use, even during a national tragedy.

EFF Opposes Government Use of Face Recognition

Make no mistake: the attack on the Capitol can and should be investigated by law enforcement. The attackers’ use of public social media to both prepare and document their actions will make the job easier than it otherwise might be.  

But a ban on all government use of face recognition, including its use by law enforcement, remains a necessary precaution to protect us from this dangerous and easily misused technology. This includes a ban on government’s use of information obtained by other government actors and by third-party services through face recognition.

One such service is Clearview AI, which allows law enforcement officers to upload a photo of an unidentified person and, allegedly, get back publicly-posted photos of that person. Clearview has reportedly seen a huge increase in usage since the attack. Yet the faceprints in Clearview’s database were collected, without consent, from millions of unsuspecting users across the web, from places like Facebook, YouTube, and Venmo, along with links to where those photos were posted on the Internet. This means that police are comparing images of the rioters to those of many millions of individuals who were never involved—probably including yours. 

EFF opposes law enforcement use of Clearview, and has filed an amicus brief against it in a suit brought by the ACLU. The suit correctly alleges the company’s faceprinting without consent violates the Illinois Biometric Information Privacy Act (BIPA). 

Separately, police tracking down the Capitol attackers are likely using government-controlled databases, such as those maintained by state DMVs, for face recognition purposes. We also oppose this use of face recognition technology, which matches images collected during nearly universal practices like applying for a driver’s license. Most people need government-issued identification or a license, and they have no ability to opt out of such face surveillance.

Face Recognition Impacts Everyone, Not Only Those Charged With Crimes 

The number of people affected by government use of face recognition is staggering: from DMV databases alone, roughly two-thirds of the population of the U.S. is at risk of image surveillance and misidentification, with no choice to opt out. Further, Clearview has extracted faceprints from over 3 billion photos. This is not a question of “what happens if face recognition is used against you?” It is a question of how many times law enforcement has already done so.

For many of the same reasons, EFF also opposes government identification of those at the Capitol by means of dragnet searches of cell phone records of everyone present. Such searches have many problems, from the fact that users are often not actually where records indicate they are, to this tactic’s history of falsely implicating innocent people. The Fourth Amendment was written specifically to prevent these kinds of overbroad searches.

Government Use of Facial Recognition Would Chill Protected Protest Activity

Facial surveillance technology allows police to track people not only after the fact but also in real time, including at lawful political protests. Police repeatedly used this same technology to arrest people who participated in last year’s Black Lives Matter protests. Its normalization and widespread use by the government would fundamentally change the society in which we live. It would, for example, chill and deter people from exercising their First Amendment-protected rights to speak, peacefully assemble, and associate with others.

Countless studies have shown that when people think the government is watching them, they alter their behavior to try to avoid scrutiny. And this burden historically falls disproportionately on communities of color, immigrants, religious minorities, and other marginalized groups.

Face surveillance technology is also prone to error and has already implicated multiple people for crimes they did not commit.

In responding to this unprecedented event, we must thoughtfully consider not just the unexpected ramifications that any new legislation could have, but the hazards posed by surveillance techniques like facial recognition. This technology poses a profound threat to personal privacy, racial justice, political and religious expression, and the fundamental freedom to go about our lives without having our movements and associations covertly monitored and analyzed.


Jason Kelley

Beyond Platforms: Private Censorship, Parler, and the Stack


Last week, following riots that saw supporters of President Trump breach and sack parts of the Capitol building, Facebook and Twitter made the decision to give the president the boot. That was notable enough, given that both companies had previously treated the president, like other political leaders, as largely exempt from content moderation rules. Many of the president’s followers responded by moving to an alternative platform, Parler. This week, the response took a new turn: infrastructure companies much closer to the bottom of the technical “stack,” including Amazon Web Services (AWS) and Google’s Android and Apple’s iOS app stores, decided to cut off service to that alternative platform -- i.e., not just to an individual user but to an entire site. Parler has so far struggled to return online, partly through errors of its own making, but also because the lower down the technical stack you go, the harder it is to find alternatives or to re-implement capabilities that the rest of the Internet takes for granted.

Whatever you think of Parler, these decisions should give you pause. Private companies have strong legal rights under U.S. law to refuse to host or support speech they don’t like. But that refusal carries different risks when a group of companies comes together to ensure that forums for speech or speakers are effectively taken offline altogether.

The Free Speech Stack—aka “Free Speech Chokepoints”

To see the implications of censorship choices by deeper stack companies, let’s back up for a minute. As researcher Joan Donovan puts it, “At every level of the tech stack, corporations are placed in positions to make value judgments regarding the legitimacy of content, including who should have access, and when and how.” And the decisions made by companies at varying layers of the stack are bound to have different impacts on free expression.

At the top of the stack are services like Facebook, Reddit, or Twitter, platforms whose decisions about who to serve (or what to allow) are comparatively visible, though still far too opaque to most users. Their responses can be comparatively targeted to specific users and content and, most importantly, do not cut off as many alternatives. For instance, a discussion forum lies close to the top of the stack: if you are booted from such a platform, there are other venues in which you can exercise your speech. These are the sites and services that all users (both content creators and content consumers) interact with most directly. They are also the places people think of when they think of the content itself (i.e., “I saw it on Facebook”). Users are often required to have individual accounts or advantaged if they do. Users may also specifically seek out the sites for their content. The closer to the user end, the more likely it is that sites will have more developed and apparent curatorial and editorial policies and practices: their “signature styles.” Finally, users typically have an avenue, flawed as it may be, to communicate directly with the service.

At the other end of the stack are internet service providers (ISPs), like Comcast or AT&T. Decisions made by companies at this layer of the stack to remove content or users raise greater concerns for free expression, especially when there are few if any competitors. For example, it would be very concerning if the only broadband provider in your area cut you off because they didn’t like what you said online, or what someone else whose name is on the account said. The adage “if you don’t like the rules, go elsewhere” doesn’t work when there is nowhere else to go.

In between are a wide array of intermediaries, such as upstream hosts like AWS, domain name registrars, certificate authorities (such as Let’s Encrypt), content delivery networks (CDNs), payment processors, and email services. EFF has a handy chart of some of those key links between speakers and their audience here. These intermediaries provide the infrastructure for speech and commerce, but many have only the most tangential relationship to their users. Faced with a complaint, they will find takedown much easier and cheaper than a nuanced analysis of a given user’s speech, much less an analysis of the speech that might be hosted by a company that is a user of their services. So these services are more likely to simply cut a user or platform off than do a deeper review. At the same time, in many cases both speakers and audiences will not be aware of the identities of these support services and, even if they are, have no independent relationship with them. These services are thus not commonly associated with the speech that passes through them and have no “signature style” to enforce.

Infrastructure Takedowns Are Equally If Not More Likely to Silence Marginalized Voices

We saw a particularly egregious example of an infrastructure takedown just a few months ago, when Zoom made the decision to block a San Francisco State University online academic event featuring prominent activists from Black and South African liberation movements, the advocacy group Jewish Voice for Peace, and controversial figure Leila Khaled—inspiring Facebook and YouTube to follow suit. The decision, which Zoom justified on the basis of Khaled’s alleged ties to a U.S.-designated foreign terrorist organization, was apparently made following external pressure.

Although we have numerous concerns with the manner in which social media platforms like Facebook, YouTube, and Twitter make decisions about speech, we viewed Zoom’s decision differently. Companies like Facebook and YouTube, for good or ill, include content moderation as part of the service they provide. Since the beginning of the pandemic in particular, however, Zoom has been used around the world more like a phone company than a platform. And just as you don’t expect your phone company to start making decisions about who you can call, you don’t expect your conferencing service to start making decisions about who can join your meeting.


It is precisely for this reason that Amazon’s ad hoc decision to cut off hosting to social media alternative Parler, in the face of public pressure, should concern anyone worried about how decisions about speech are made in the long run. In some ways, the ejection of Parler is neither a novel nor a surprising development. First, it is by no means the first instance of moderation at this level of the stack. Prior examples include Amazon denying service to WikiLeaks and the entire nation of Iran. Second, the domestic pressure on companies like Amazon to disentangle themselves from Parler was intense. After all, in the days leading up to its removal by Amazon, Parler played host to outrageously violent threats against elected politicians from its verified users, including lawyer L. Lin Wood.

But infrastructure takedowns nonetheless represent a significant departure from the expectations of most users. For one thing, they are cumulative, since all speech on the Internet relies upon multiple infrastructure hosts.  If users have to worry about satisfying not only their host’s terms and conditions but also those of every service in the chain from speaker to audience—even though the actual speaker may not even be aware of all of those services or where they draw the line between hateful and non-hateful speech—many users will simply avoid sharing controversial opinions altogether. They are also less precise. In the past, we’ve seen entire large websites darkened by upstream hosts because of a complaint about a single document posted. More broadly, infrastructure level takedowns move us further toward a thoroughly locked-down, highly monitored web, from which a speaker can be effectively ejected at any time.

Going forward, we are likely to see more cases that look like Zoom’s censorship of an academic panel than like Amazon cutting off another Parler. Nevertheless, Amazon’s decision highlights core questions of our time: Who should decide what is acceptable speech, and to what degree should companies at the infrastructure layer play a role in censorship?

At EFF, we think the answer is both simple and challenging: wherever possible, users should decide for themselves, and companies at the infrastructure layer should stay well out of it. The firmest, most consistent approach infrastructure chokepoints can take is to refuse to be chokepoints at all. They should act to defend their role as a conduit, rather than a publisher. Just as law and custom developed a norm that we might sue a publisher for defamation, but not the owner of the building the publisher occupies, we are slowly developing norms about responsibility for content online. Companies like Zoom and Amazon have an opportunity to shape those norms—for the better or for the worse.

Internet Policy and Practice Should Be User-Driven, Not Crisis-Driven

It’s easy to say today, in a moment of crisis, that a service like Parler should be shunned. After all, people are using it to organize attacks on the U.S. Capitol and on Congressional leaders, with an expressed goal to undermine the democratic process. But when the crisis has passed, pressure on basic infrastructure, as a tactic, will inevitably be re-used against unjustly marginalized speakers and forums. This is not a slippery slope, nor a tentative prediction—we have already seen this happen to groups and communities that have far less power and resources than the President of the United States and the backers of his cause. And this facility for broad censorship will not be lost on foreign governments who wish to silence legitimate dissent either. Now that the world has been reminded that infrastructure can be commandeered to make decisions to control speech, calls for it will increase, and principled objections may fall by the wayside.

Over the coming weeks, we can expect to see more decisions like these from companies at all layers of the stack. Just today, Facebook removed the accounts of members of the Ugandan government in advance of Tuesday’s elections in the country, out of concerns about election manipulation. Some of the decisions that these companies make may be well-researched, while others will undoubtedly come as the result of external pressure and at the expense of marginalized groups.

The core problem remains: regardless of whether we agree with an individual decision, these decisions overall have not and will not be made democratically and in line with the requirements of transparency and due process. Instead they are made by a handful of individuals, in a handful of companies, those most distanced from and least visible to most Internet users. Whether you agree with those decisions or not, you will not be a part of them, nor be privy to their considerations. And unless we dismantle the increasingly centralized chokepoints in our global digital infrastructure, we can anticipate an escalating political battle between political factions and nation states to seize control of their powers.

Jillian C. York

The FCC and States Must Ban Digital Redlining

5 days 22 hours ago

The rollout of fiber broadband will never reach many communities in the US. That’s because large, national ISPs are currently laying fiber primarily for high-income users, to the detriment of the rest of their customers. The absence of regulatory oversight has created a situation where wealthy end users are getting fiber while predominantly low-income users are not being transitioned off legacy infrastructure. The result is “digital redlining” of broadband: wealthy broadband users get the benefits of cheaper and faster Internet access through fiber, while low-income broadband users are left behind with more expensive, slower access from that same carrier. We have seen this type of economic discrimination in the past in other venues, such as housing, and it is happening now with 21st-century broadband access.

It doesn’t have to be this way. Federal, state, and local governments have a clear role in promoting anti-discrimination deployment and historically have enforced rules to prevent unjust discrimination. States and local governments have power through franchise authority to prohibit unjust discrimination through build-out requirements. In fact, it is already illegal in California to discriminate based on income status, as EFF noted in its comments to the state’s regulator. And cities that hold direct authority over ISPs can require non-discrimination like New York City just did when it required Verizon to deploy 500,000 more fiber connections last year to low-income users.

That’s why dozens of organizations have asked the incoming Biden FCC to directly confront digital redlining after it reverses the Trump-era deregulation of broadband providers and restores their common carriage obligations. For the last three years, the FCC has abandoned its authority to address these systemic inequalities, causing it to sit out the pandemic at a time when dependence on broadband is sky-high. It is time to treat broadband as being as important as water and electricity and to ensure that, as a matter of law, everyone gets the access they deserve.

What the Data Is Showing Us on Fiber in Cities

A great number of people in cities that can be served fiber in a commercially feasible manner (that is, you can build it and make a profit without government subsidies) are still on copper DSL networks. Studies of major metropolitan areas such as Oakland and Los Angeles County are showing systemic discrimination against low-income users in fiber deployment despite high population density, and because income can often serve as a proxy for race, this falls particularly hard on neighborhoods of color.

Other studies conducted by the Communications Workers of America and the National Digital Inclusion Alliance have found that this digital redlining is systemic across AT&T’s footprint, with only one-third of AT&T wireline customers connected to its fiber. In fact, not only is AT&T failing to deploy fiber to all of its customers over time; it is now preparing to disconnect its copper DSL customers, leaving them no choice other than an unreliable mobile connection.

There are no good reasons for this discrimination to continue. For example, Oakland has an estimated 7,000+ people per square mile, far above the density sufficient to finance at least one city-wide fiber network. Tightly packed populations are ideal for broadband providers because they have to invest less in infrastructure to reach a large number of paying customers: 7,000 people per square mile is far more than a provider needs to pay for fiber. Chattanooga, which currently has fiber deployed by the local government, has a population density of only 1,222 people per square mile. Since the Chattanooga government ISP publicly reports its finances in great detail, we can see its extremely rosy numbers (chart below) with a fraction of the density of Oakland and many other underserved cities. In fact, rural cooperatives are doing gigabit fiber at 2.4 people per square mile.
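Why density drives fiber economics can be made concrete with a back-of-the-envelope sketch. Only the density figures below come from the data above; the road mileage, per-mile construction cost, take rate, and monthly margin are hypothetical placeholder assumptions, not EFF estimates:

```python
# Back-of-the-envelope fiber payback sketch. The density figures come from
# the article; every cost and revenue number below is a HYPOTHETICAL
# assumption chosen only to illustrate how density drives the economics.
def payback_years(people_per_sq_mi, road_miles_per_sq_mi=20,
                  build_cost_per_mile=75_000, people_per_home=2.5,
                  take_rate=0.4, monthly_margin_per_sub=40):
    """Years to recoup the cost of passing every street in one square mile."""
    build_cost = road_miles_per_sq_mi * build_cost_per_mile
    subscribers = (people_per_sq_mi / people_per_home) * take_rate
    annual_margin = subscribers * monthly_margin_per_sub * 12
    return build_cost / annual_margin

for city, density in [("Oakland", 7_000), ("Chattanooga", 1_222)]:
    print(f"{city}: ~{payback_years(density):.1f} years to recoup the build")
```

Under these illustrative assumptions, the same street-by-street build pays back several times faster at Oakland’s density than at Chattanooga’s; since Chattanooga’s network is demonstrably profitable, a far denser city should be even more attractive to build.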

EFF assembled this chart based on publicly reported data by EPB.

In other words, there are no good reasons for this discrimination. If governments require carriers to deploy fiber in a non-discriminatory way, they will still make a profit. The question really boils down to whether we are going to allow incrementally higher profits from discrimination to continue despite historically enforcing laws against such practices.

There Are Concrete Ramifications for Broadband Affordability If We Do Not Resolve Digital Redlining of Fiber in Major Cities

The pandemic has shown us that broadband is not equally accessible at affordable prices even in our major cities. It wasn’t a rural part of America where the little girls who caught media attention were photographed doing their homework in a fast-food parking lot (picture below); it was Salinas, California, with a population density of 6,490 people per square mile.

Photo taken at a Taco Bell in Salinas, California.


Those kids probably had some basic Internet access, but it was likely too expensive and too slow to handle remote education. And that should come as no surprise, given that a recent comprehensive study by the Open Technology Institute found that the United States has, on average, the most expensive and slowest Internet among modern economies.

The lack of ubiquitous fiber infrastructure limits the government’s ability to support efforts to deliver access to those of limited income. When you don’t have fiber in those neighborhoods, all you can do is what the city of Salinas did: pay for an expensive, slow mobile hotspot. Meanwhile, Chattanooga is able to give 100/100 Mbps broadband access to all of its low-income families for free for 10 years for around $8 million. Since the fiber is already built and connected throughout the city, and because it is very cheap to add people to the fiber network once built, it costs the city an average of only $2-$3 per month per child to give 28,000 kids free fast Internet at cost (that is, without making a profit). If we want to make free, fast Internet a reality, we need the infrastructure that can keep costs sufficiently low to realistically deliver.
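The per-child figure can be sanity-checked directly from the numbers cited above ($8 million, 28,000 children, 10 years):

```python
# Check Chattanooga's cost-per-child figure using the numbers cited above.
total_cost_usd = 8_000_000   # approximate program cost
children = 28_000            # kids covered
months = 10 * 12             # 10-year commitment

per_child_per_month = total_cost_usd / children / months
print(f"${per_child_per_month:.2f} per child per month")  # ≈ $2.38
```

That lands squarely in the $2-$3 range the city reports, which is only possible because the marginal cost of adding a subscriber to already-built fiber is so low.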

The massive discrepancy between fiber and non-fiber is due to the fact that the older networks are getting more expensive to run and can’t cheaply add a bunch of new users for higher speed needs. No amount of subsidy will change the physical limitations of those networks on top of the fact that they are getting more expensive to maintain due to their obsolescence. 

The future of all things wireless and wireline in broadband runs through fiber infrastructure. It is a universal medium that is unifying the 21st-century Internet because it can scale up capacity far ahead of expected growth in demand in a cost-effective way. It has orders of magnitude greater potential and capacity than any other wireline or wireless medium that transmits data. Our technical analysis concluded that the 21st-century Internet is one where all Americans are connected to fiber, and we are actively supporting efforts in D.C. to pass a universal fiber plan as well as efforts in states like California. But for major cities, the lack of ubiquitous fiber is not due to a lack of government spending; it stems from the lack of regulatory enforcement of non-discrimination.


Ernesto Falcon

The Government Has All of the Powers It Needs to Find and Prosecute Those Responsible for the Crimes on Capitol Hill This Week

6 days 4 hours ago

Perpetrators of the horrific events that took place at the Capitol on January 6 had a clear goal: to undermine the legitimate operations of government, to disrupt the peaceful transition of power, and to intimidate, hurt, and possibly kill those political leaders that disagree with their worldview. 

These are all crimes that can and should be investigated by law enforcement. Yet history provides the clear lesson that immediate legislative responses to an unprecedented national crime or a deeply traumatic incident can have profound, unforeseen, and often unconstitutional consequences for decades to come. Innocent people—international travelers, immigrants, asylum seekers, activists, journalists, attorneys, and everyday Internet users—have spent the last two decades contending with the loss of privacy, government harassment, and exaggerated sentencing that came along with the PATRIOT Act and other laws passed in the wake of national tragedies.

Law enforcement does not need additional powers, new laws, harsher sentencing mandates, or looser restrictions on the use of surveillance measures to investigate and prosecute those responsible for the dangerous crimes that occurred. Moreover, we know from experience that any such new powers will inevitably be used to target the most vulnerable members of society or be used indiscriminately to surveil the broader public. 

EFF has spent the last three decades pushing back against overbroad government powers—in courts, in Congress, and in state legislatures—to demand an end to unconstitutional surveillance and to warn against the dangers of technology being abused to invade people’s rights. To take just a few present examples: we continue to fight against the NSA’s mass surveillance programs, the exponential increase of surveillance power used by immigration enforcement agencies at the U.S. border and in the interior, and the inevitable creep of military surveillance into everyday law enforcement to this day. The fact that we are still fighting these battles shows just how hard it is to end unconstitutional overreactions.

Policymakers must learn from those mistakes. 

First, Congress and state lawmakers should not even consider any new laws or deploying any new surveillance technology without first explaining why law enforcement’s vast, existing powers are insufficient. This is particularly true given that it appears that the perpetrators of the January 6 violence planned, organized, and executed their acts in the open, and that much of the evidence of their crimes is publicly available.

Second, lawmakers must understand that any new laws or technology will likely be turned on vulnerable communities as well as those who vocally and peacefully call for social change. Elected leaders should not exacerbate ongoing injustice via new laws or surveillance technology. Instead, they must work to correct the government’s past abuse of powers to target dissenting and marginalized voices.   

As Representative Barbara Lee said in 2001, “Our country is in a state of mourning. Some of us must say, let’s step back for a moment . . . and think through the implications of our actions today so that this does not spiral out of control.” Surveillance, policing, and exaggerated sentencing—no matter their intention—always end up being wielded against the most vulnerable members of society. We urge President-elect Joe Biden to pause and to not let this moment contribute to that already startling inequity in our society. 

Matthew Guariglia

YouTube and TikTok Put Human Rights In Jeopardy in Turkey

1 week ago

Democracy in Turkey is in a deep crisis. Its ruling party, led by Recep Tayyip Erdoğan, systematically silences marginalized voices, shuts down dissident TV channels, sentences journalists, and disregards the European Court of Human Rights decisions. As we wrote in November, in this oppressive atmosphere, Turkey’s new Social Media Law has doubled down on previous online censorship measures by requiring sites to appoint a local representative who can be served with content removal demands and data localization mandates. This company representative would also be responsible for maintaining the fast response turnaround times to government requests required by the law.

The pushback against the requirements of the Social Media Law was initially strong. Facebook, Instagram, Twitter, Periscope, YouTube, and TikTok had not appointed representatives when the law was first introduced late last year. But now powerful platforms have begun to capitulate, despite the law’s explicit threat to users’ fundamental rights: YouTube announced its decision on December 16th, followed by TikTok last Friday and DailyMotion just today, January 9th. These decisions create a bad precedent that will make it harder for other companies to fight back.

YouTube and TikTok now plan to set up a “legal entity” in Turkey, providing a local government point of contact. Even though both announcements promise that the platforms will not change their content review or data handling or holding practices, it is not clear how YouTube or TikTok will challenge or stand against the Turkish government once they agree to set up legal shops on Turkish soil. The move by YouTube (and the lack of transparency around its decision) is particularly disappointing given the importance of the platform for political speech and over a decade of attempts to control YouTube content by the Turkish government. 

The Turkish administration and courts have long attempted to punish sites like YouTube and Twitter that do not comply with its takedown orders to their satisfaction. With a local legal presence, government officials would not merely throttle or block sites; they could force platforms to arbitrarily remove perfectly legal political speech, disclose political activists’ data, or become complicit in government-sanctioned human rights violations. Arbitrary political arrests and detentions are increasingly common inside the country, affecting everyone from information security professionals to journalists, doctors, and lawyers. A local employee of an Internet company in such a hostile environment could, quite literally, be a hostage to government interests.

Reacting to Friday’s TikTok news, Yaman Akdeniz, one of the founders of the Turkish Freedom of Expression Association, told EFF:

“TikTok is completely misguided about Turkey Internet-related restrictions and government demands. The company can become part of the problem and can become complicit in human rights violations in Turkey.”

Chilling Effects on Freedom of Expression 

Turkey’s government has been working to create ways to control foreign Internet sites and services for many years. Under the new Social Media Law, failure to appoint a representative leads to stiff fines, an advertisement ban, and throttling of the provider’s bandwidth. According to the law, the Turkish Information and Communication Technologies Authority (Bilgi Teknolojileri ve İletişim Kurumu or BTK) can issue a five-phase set of fines. BTK has already sanctioned social media platforms that did not appoint local representatives by imposing two initial sets of fines, on November 4 and December 11, of TRY10 million ($1.3 million) and TRY30 million ($4 million) respectively. Facing these fines, YouTube, and now TikTok, blinked. 

If platforms do not appoint a representative by January 17, 2021, BTK can prohibit Turkish taxpayers from placing ads on, and making payments to, a provider’s platform if it does not have a Turkey-based representative. If the provider still refuses to appoint a representative by April 2021, BTK can apply to a Criminal Judgeship of Peace to throttle the provider’s bandwidth by an initial 50%. If the provider still has not appointed a representative by May 2021, BTK can apply for a further bandwidth reduction; this time, the judgeship can decide to throttle the provider’s bandwidth anywhere between 50% and 90%.

The Turkish government should refrain from imposing disproportionate sanctions on platforms given their significant chilling effect on freedom of expression. Moreover, throttling, which means that locals in Turkey do not have access to social media sites, is effectively a technical ban to access such sites and services—an inherently disproportionate measure. 

Human Rights Groups Fight Back

EFF stands with the Turkish Freedom of Expression Association (Tr. İfade Özgürlüğü Derneği), Human Rights Watch, and Article 19 in their protest against YouTube. In a joint letter, they urge YouTube to reverse its decision and stand firm against the Turkish government’s pressure. The letter urgently asks YouTube to clarify how the company intends to respect its users’ rights to freedom of expression and privacy in Turkey, and to publish the Human Rights Impact Assessment that led to its decision to appoint a representative office, which can be served with content takedown notifications.

YouTube, a Google subsidiary, has a corporate responsibility to uphold freedom of expression as guided by the UN Guiding Principles on Business and Human Rights, a global standard of “expected conduct for all business enterprises wherever they operate.” The Principles exist independently of states’ willingness to fulfill their human rights obligations and do not diminish such commitments. And they exist over and above compliance with national laws and regulations protecting human rights.

YouTube’s Newly Precarious Position

According to YouTube’s Community Guidelines, legal content is not removed unless it violates the site rules. Moreover, content is removed within a country only if it violates the laws of that country, as determined by YouTube’s lawyers. The Transparency Report for Turkey shows that Google did not take any action concerning government takedown requests for 46.6% of Turkey’s cases.

Those declined orders demonstrate how overbroad and politicized Turkey’s takedown process has become, and how important YouTube’s freedom to challenge such orders has been. In one of the requests, the Turkish government requested a takedown of videos where officials attack Syrian refugees who try to cross the Turkey-Greece border. YouTube only removed 1 out of 34 videos because only one video violated its community rules. In another instance, YouTube received a BTK request, and then a court order, to remove 84 videos that criticized high-level government officials. YouTube blocked access to seven videos, 16 videos were erased by users, and 61 videos remained on site. Another example shows that YouTube did not take down 242 videos allegedly related to an individual affiliated with law enforcement. 

With a local representative in place, YouTube will find it much harder to resist arbitrary orders or to live up to its responsibilities as a member of the Global Network Initiative and under international human rights law.

Social media companies must uphold international human rights law when it conflicts with local laws. The UN Special Rapporteur on free expression has called upon companies to recognize human rights law as the authoritative global standard for freedom of expression on their platforms, not domestic laws. Likewise, the UN Guiding Principles on Business and Human Rights provide that companies respect human rights and avoid contributing to human rights violations. This becomes especially important in countries where democracy is most fragile. The Global Network Initiative’s implementation guidelines were written to cover cases where its corporate members operate in countries where local law stands in conflict with human rights. YouTube has given no public indication of how it would seek to match its GNI commitments given its changed relation with Turkey.

Tech companies have come under increasing criticism for decisions to flout and ignore local laws, or for treating non-U.S. countries with attitudes that betray a lack of understanding of the local context. In many cases, local compliance and representation by powerful multinational companies can be a positive step.

But Turkey is not one of those cases. Its ruling party has undermined democratic pluralism, an independent judiciary, and separation of powers in recent years. Lack of checks and balances creates an oppressive atmosphere and results in the total absence of due process. Compliance with local Turkish law can potentially mean becoming the arm of an increasingly totalitarian State and complicity with its human rights violations. 

Arbitrary Blocking and Content Removal

EngelliWeb (Eng. BlockedWeb), a comprehensive project initiated by the Turkish Freedom of Expression Association, keeps statistical records of censored content and reports on it. In one instance, EngelliWeb reported that one of its own stories (covering the blocking of another site) became subject to a court order requiring that the story itself be blocked and removed. The Association has recently announced it will object to the court decision. Another blocking decision is striking because the same judge who found for the defendant in a defamation lawsuit ordered access blocked to the news story about that very lawsuit on the grounds of “violation of personal rights.” These examples demonstrate there is no justified reason, much less a legal one, to censor such news in Turkey.

According to Yaman Akdeniz, an academic and one of the founders of the Turkish Freedom of Expression Association:

“Turkish judges issue approximately 12,000 blocking and removal decisions each year, and over 450,000 websites and 140,000 URL are currently blocked from Turkey according to our EngelliWeb research. In YouTube’s case, access to over 10,000 YouTube videos is currently blocked from Turkey. In the absence of due process and independent judiciary, almost all appeals involving such decisions are rejected by same level judges without proper legal scrutiny. In the absence of due process, YouTube and any other social media platform provider willing to come to Turkey, risk becoming the long arm of the Turkish judiciary. 

...the Constitutional Court has become part of the problems associated with the judiciary and does not swiftly decide individual applications involving Internet-related blocking and removal decisions. Even when the Constitutional Court finds a violation as in the cases of Wikipedia, Sendika.Org, and others, the lower courts constantly ignore the decisions of the Constitutional Court which then diminish substantially the impact of such decisions.”

Social media companies should not give in to this pressure. If social media companies comply with the law, then the Turkish authoritarian government wins without a fight. YouTube and TikTok should not lead the retreat.

Katitza Rodriguez

California City’s Effort to Punish Journalists For Publishing Documents Widely Available Online is Dangerous and Chilling, EFF Brief Argues

1 week 1 day ago

As part of their jobs, journalists routinely dig through government websites to find newsworthy documents and share them with the broader public. Journalists and Internet users understand that publicly available information on government websites is not secret and that, if government officials want to protect information from being disclosed online, they shouldn’t publicly post it on the Internet.

But  a California city is ignoring these norms and trying to punish several journalists for doing their jobs. The city of Fullerton claims that the journalists, who write for a digital publication called Friends for Fullerton’s Future, violated federal and state computer crime laws by accessing documents publicly available to any Internet user. Not only is the civil suit by the city a transparent attempt to cover up its own poor Internet security practices, it also threatens to chill valuable and important journalism. That’s why EFF, along with the ACLU and ACLU of Southern California, filed a friend-of-the-court brief in a California appellate court this week in support of the journalists.

The city sued two journalists and Friends for Fullerton’s Future based on several claims, including an allegation that they violated California’s Comprehensive Computer Data and Fraud Act when they obtained and published documents officials posted to a city file-sharing website that was available to anyone with an Internet connection. For months, the city made the file-sharing site available to the public without a password or any other access restrictions and used it to conduct city business, including providing records to members of the public who requested them under the California Public Records Act.

Even though officials took no steps to limit public access to the city’s file-sharing site, they nonetheless objected when the journalists published publicly available documents that officials believed should not have been public or the subject of news stories. And instead of taking steps to ensure the public did not have access to sensitive government documents, the city is trying to stretch the California computer crime law, known as Section 502, to punish the journalists.

EFF’s amicus brief argues that the city’s interpretation of California’s Section 502, which was intended to criminalize malicious computer intrusions and is similar to the federal Computer Fraud and Abuse Act, is wrong as a legal matter and that it threatens to chill the public’s constitutionally protected right to publish information about government affairs.

The City contends that journalists act “without permission,” and thus commit a crime under Section 502, by accessing a particular City-controlled URL and downloading documents stored there—notwithstanding the fact that the URL is in regular use in City business and has been disseminated to the general public. The City claims that an individual may access a publicly available URL, and download documents stored in a publicly accessible account, only if the City specifically provides that URL in an email addressed to that particular person. But that interpretation of “permission” produces absurd—and dangerous—results: the City could choose arbitrarily to make a criminal of many visitors to its website, simply by claiming that it had not provided the requisite permission-email to the visitor.

The city’s interpretation of Section 502 also directly conflicts with “the longstanding open-access norms of the Internet,” the brief argues. Because Internet users understand that they have permission to access information posted publicly on the Internet, the city must take affirmative steps to restrict access via technical barriers before it can claim a Section 502 violation.

The city’s broad interpretation of Section 502 is also dangerous because, if accepted, it would threaten a great deal of valuable journalism protected by the First Amendment.

The City’s interpretation would permit public officials to decide—after making records publicly available online (through their own fault or otherwise)—that accessing those records was illegal. Under the City’s theory, it can retroactively revoke generalized permission to access publicly available documents as to a single individual or group of users once it changes its mind or is simply embarrassed by the documents’ publication. The City could then leverage that revocation of permission into a violation of Section 502 and pursue both civil and criminal liability against the parties who accessed the materials.

Moreover, the “City’s broad reading of Section 502 would chill socially valuable research, journalism, and online security and anti-discrimination testing—activity squarely protected by the First Amendment,” the brief argues. The city’s interpretation of Section 502 would jeopardize important investigative reporting techniques that in the past have uncovered illegal employment and housing discrimination.

Finally, EFF’s brief argues that the city’s interpretation of Section 502 violates the U.S. Constitution’s due process protections because it would fail to give Internet users adequate notice that they were committing a crime while simultaneously giving government officials vast discretion to decide when to enforce the law against Internet users. 

The City proposes that journalists perusing a website used to disclose public records must guess whether particular documents are intended for them or not, intuit the City’s intentions in posting those documents, and then politely look the other way—or be criminally liable. This scheme results in unclear, subjective, and after-the-fact determinations based on the whims of public officials. Effectively, the public would have to engage in mind reading to know whether officials approve of their access or subsequent use of the documents from the City’s website.

The court should reject the city’s arguments and ensure that Section 502 is not abused to retaliate against journalists, particularly because the city is seeking to punish these reporters for its own computer security shortcomings. Publishing government records available to every Internet user is good journalism, not a crime, and using computer crime laws to punish journalists for obtaining documents available to every Internet user is dangerous—and unconstitutional.

Aaron Mackey

ACLU, EFF, and Tarver Law Offices Urge Supreme Court to Protect Against Forced Disclosure of Phone Passwords to Law Enforcement

1 week 2 days ago
Does the Fifth Amendment Protect You from Revealing Your Passwords to Police?

Washington, D.C. - The American Civil Liberties Union (ACLU) and the Electronic Frontier Foundation (EFF), along with New Jersey-based Tarver Law Offices, are urging the U.S. Supreme Court to ensure the Fifth Amendment protection against self-incrimination extends to the digital age by prohibiting law enforcement from forcing individuals to disclose their phone and computer passcodes.

“The Fifth Amendment protects us from being forced to give police a combination to a wall safe. That same protection should extend to our phone and computer passwords, which can give access to far more sensitive information than any wall safe could,” said Jennifer Granick, ACLU surveillance and cybersecurity counsel. “The Supreme Court should take this case to ensure our constitutional rights survive in the digital age.”

In a petition filed Thursday and first reported by The Wall Street Journal, the ACLU and EFF are asking the U.S. Supreme Court to hear Andrews v. New Jersey. In this case, a prosecutor obtained a court order requiring Mr. Robert Andrews to disclose passwords to two cell phones. Mr. Andrews fought the order, citing his Fifth Amendment privilege. Ultimately, the New Jersey Supreme Court held that the privilege did not apply to the disclosure or use of the passwords.

“There are few things in constitutional law more sacred than the Fifth Amendment privilege against self-incrimination,” said Mr. Andrews’ attorney, Robert L. Tarver, Jr. “Up to now, our thoughts and the content of our minds have been protected from government intrusion. The recent decision of the New Jersey Supreme Court highlights the need for the Supreme Court to solidify those protections.”

The U.S. Supreme Court has long held, consistent with the Fifth Amendment, that the government cannot compel a person to respond to a question when the answer could be incriminating. Lower courts, however, have disagreed on the scope of the right to remain silent when the government demands that a person disclose or enter phone and computer passwords. This confusing patchwork of rulings has resulted in Fifth Amendment rights depending on where one lives, and in some cases, whether state or federal authorities are the ones demanding the password.

“The Constitution is clear: no one ‘shall be compelled in any criminal case to be a witness against himself,’” said EFF Senior Staff Attorney Andrew Crocker. “When law enforcement requires you to reveal your passcodes, they force you to be a witness in your own criminal prosecution. The Supreme Court should take this case to settle this critical question about digital privacy and self-incrimination.”

For the full petition:

Contact: Andrew Crocker, Senior Staff Attorney; Mark Rumold, Senior Staff Attorney
Rebecca Jeschke

EFF's Response to Social Media Companies' Decisions to Block President Trump’s Accounts

1 week 2 days ago

Like most people in the United States and around the world, EFF is shocked and disgusted by Wednesday’s violent attack on the U.S. Capitol. We support all those who are working to defend the Constitution and the rule of law, and we are grateful for the service of policymakers, staffers, and other workers who endured many hours of lockdown and reconvened to fulfill their constitutional duties. 

The decisions by Twitter, Facebook, Instagram, Snapchat, and others to suspend and/or block President Trump’s communications via their platforms are a simple exercise of their rights, under the First Amendment and Section 230, to curate their sites. We support those rights. Nevertheless, we are always concerned when platforms take on the role of censors, which is why we continue to call on them to apply a human rights framework to those decisions. We also note that those same platforms have chosen, for years, to privilege some speakers—particularly governmental officials—over others, not just in the U.S., but in other countries as well. A platform should not apply one set of rules to most of its users, and then apply a more permissive set of rules to politicians and world leaders who are already immensely powerful. Instead, they should be precisely as judicious about removing the content of ordinary users as they have been to date regarding heads of state. Going forward, we call once again on the platforms to be more transparent and consistent in how they apply their rules—and we call on policymakers to find ways to foster competition so that users have numerous editorial options and policies from which to choose. 

Corynne McSherry

Police Robots Are Not a Selfie Opportunity, They’re a Privacy Disaster Waiting to Happen

1 week 2 days ago

The arrival of government-operated autonomous police robots does not look like predictions in science fiction movies. An army of robots with gun arms is not kicking down your door to arrest you. Instead, a robot snitch that looks like a rolling trash can is programmed to decide whether a person looks suspicious—and then call the human police on them. Police robots may not be able to hurt people like armed predator drones used in combat—yet—but as history shows, calling the police on someone can prove equally deadly. 

Long before the 1987 movie RoboCop, and even before Karel Čapek invented the word “robot” in 1920, police have sought ways to be everywhere at once. Widespread security cameras are one solution—but even a blanket of CCTV cameras couldn’t follow a suspect into every nook of public space. Thus, the vision of a police robot continued as a dream, until now. Whether they look like Boston Dynamics’ robodogs or Knightscope’s rolling pickles, robots are coming to a street, shopping mall, or grocery store near you. 

The Orwellian menace of snitch robots might not be immediately apparent. Robots are fun. They dance. You can take selfies with them. This is by design. Both police departments and the companies that sell these robots know that their greatest contribution isn’t just surveillance, but also goodwill. In one brochure Knightscope sent to University of California-Hastings, a law school in the center of San Francisco, the company advertises its robot’s activity in a Los Angeles shopping district called The Bloc. It’s unclear if the robot stopped any robberies, but it did garner over 100,000 social media impressions and 426 comments. Knightscope claims the robot’s 193 million overall media impressions were worth over $5.8 million. The Bloc held a naming contest for the robot, and said it has a “cool factor” missing from traditional beat cops and security guards.

The Bloc/Knightscope promotional material released via public records request by UC-Hastings

As of February 2020, Knightscope had around 100 robots deployed 24/7 throughout the United States. In how many of these communities did neighbors or community members get a say in whether these robots would be deployed?

But in this era of long-overdue conversations about the role of policing in our society—and in which city after city is reclaiming privacy by restricting police surveillance technologies—these robots are just a more playful way to normalize the panopticon of our lives.

Police Robots Are Surveillance

Knightscope’s robots need cameras to navigate and traverse the terrain, but that’s not all their sensors are doing. According to the proposal that the police department of Huntington Park, California, sent to the mayor and city council, these robots are equipped with many infrared cameras capable of reading license plates. They also have wireless technology “capable of identifying smartphones within its range down to the MAC and IP addresses.” 

The next time you’re at a protest and are relieved to see a robot rather than a baton-wielding officer, know that that robot may be using the IP address of your phone to identify your participation. This makes protesters vulnerable to reprisal from police and thus chills future exercise of constitutional rights. "When a device emitting a Wi-Fi signal passes within a nearly 500 foot radius of a robot,” the company explains on its blog, “actionable intelligence is captured from that device including information such as: where, when, distance between the robot and device, the duration the device was in the area and how many other times it was detected on site recently."
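The kind of “actionable intelligence” the company describes can be made concrete with a small sketch. The log below is hypothetical, and the aggregation is a simplified stand-in for whatever Knightscope’s software actually does, but it shows how passive Wi-Fi sightings become a profile of a device, and thus of its owner:

```python
from collections import Counter
from datetime import datetime

# Hypothetical sightings log: (device MAC address, timestamp, robot location).
# A phone with Wi-Fi enabled can be logged like this without its owner
# ever connecting to any network.
sightings = [
    ("aa:bb:cc:11:22:33", datetime(2021, 1, 10, 14, 0), "shopping plaza"),
    ("aa:bb:cc:11:22:33", datetime(2021, 1, 10, 14, 25), "shopping plaza"),
    ("aa:bb:cc:11:22:33", datetime(2021, 1, 12, 18, 5), "protest site"),
    ("dd:ee:ff:44:55:66", datetime(2021, 1, 12, 18, 6), "protest site"),
]

def device_profile(mac, log):
    """Aggregate every sighting of one device into a movement profile."""
    hits = [(t, place) for m, t, place in log if m == mac]
    return {
        "times_seen": len(hits),
        "first_seen": min(t for t, _ in hits),
        "last_seen": max(t for t, _ in hits),
        "places": Counter(place for _, place in hits),
    }

profile = device_profile("aa:bb:cc:11:22:33", sightings)
# One hardware identifier now links a shopping trip to attendance at a protest.
print(profile["times_seen"], dict(profile["places"]))
```

Nothing in this sketch requires the device owner’s cooperation; the identifier is emitted automatically, which is exactly why persistent collection of it chills assembly.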

In Spring 2019, the company also announced it was developing face recognition so that robots would be able to “detect, analyze and compare faces.” EFF has long proposed a complete ban on police use of face recognition technology. 

Who Gets Reprimanded When a Police Robot Makes a Bad Decision? 

Knightscope’s marketing materials and media reporting suggest the technology can effectively recognize “suspicious” packages, vehicles, and people. 

But when a robot is scanning a crowd for someone or something suspicious, what is it actually looking for? It’s unclear what the company means. The decision to characterize certain actions and attributes as “suspicious” has to be made by someone. If robots are designed to think people wearing hoods are suspicious, they may target youth of color. If robots are programmed to zero in on people moving quickly, they may harass a jogger or a pedestrian on a rainy day. If the machine has purportedly been taught to identify criminals by looking at pictures of mugshots, then you have an even bigger problem. Racism in the criminal justice system has all but assured that any machine learning program taught to see “criminals” based on crime data will inevitably see people of color as suspicious.

A robot’s machine learning and so-called suspicious behavior detection will lead to racial profiling and other unfounded harassment. This raises the question: Who gets reprimanded if a robot improperly harasses an innocent person, or calls the police on them? The robot? The people who train or maintain it? When state violence is unleashed on a person because a robot falsely flagged them as suspicious, “changing the programming” of the robot and then sending it back onto the street will be little solace for a victim hoping that it won’t happen again. And when programming errors cause harm, who will review changes to make sure they can address the real problem?

These are all important questions to ask yourselves, and your police and elected officials, before taking a selfie with a rolling surveillance robot. 

Matthew Guariglia

Oakland Privacy and the People of Vallejo Prevail in the Fight For Surveillance Accountability

1 week 4 days ago

Just as the 2020 holiday season was beginning in earnest, Solano Superior Court Judge Bradley Nelson upheld the gift of surveillance accountability that the California State Legislature had provided state residents when it passed 2015’s Senate Bill 741 (Cal. Govt. Code § 53166). Judge Nelson’s order brought positive closure to a battle that began last March, when Electronic Frontier Alliance member Oakland Privacy notified the Vallejo City Council and Mayor that their police department’s proposal to acquire a Cell Site Simulator (CSS) violated California state law.

Introduced by then state-senator Jerry Hill, SB 741 requires an open and transparent process before a local government agency in California may acquire CSS technology. EFF explained this in our own letter to the Vallejo Mayor and City Council days after the illegal purchase had been approved. Specifically, the law requires an agency to write, and publish online for public review, a policy that ensures "the collection, use, maintenance, sharing, and dissemination of information gathered through the use of cellular communications interception technology complies with all applicable law and is consistent with respect for an individual's privacy and civil liberties."

Despite notice from Oakland Privacy that the proposal violated SB 741, the Vallejo City Council on March 24, 2020, authorized their police department to purchase CSS technology from KeyW Corporation. Meanwhile, the City and the nation were adapting to shelter in place protocols intended to suppress the spread of COVID-19, which limited public participation in Vallejo’s CSS proposal.

In his ruling, Solano County Superior Court Judge Bradley Nelson reasoned:

“Respondent had a duty to obey [SB 741] by passing a resolution or ordinance specifically approving a particular policy governing the use of the [CSS] device it purchased. Respondent breached that duty by simply delegating creation of that privacy policy to its police department without an opportunity for public comment on the policy before it was adopted. Because any such policy's principal purpose is to safeguard, within acceptable limitations, the privacy and civil liberties of the members of the public whose cellular communications are intercepted, public comment on any proposed policy before it is adopted also has a constitutional dimension.”

In a statement released following the judge's ruling, Oakland Privacy's research director Mike Katz-Lacabe explained the group's motivation for bringing the lawsuit: "to protect the rights of residents to learn about the surveillance equipment used by their local police and to make sure their elected officials provide meaningful oversight over equipment use.” He continued: “Senator Hill's 2015 legislation had those goals, and citizens' groups like ours are taking the next step to make sure that municipalities comply with state law..." Oakland Privacy and two Vallejo residents (Solange Echeverria, a journalist, and Dan Rubins, CEO of Legal Robot) filed the suit on May 21, 2020, requesting the judicial mandate for a public process per state law.

The City of Vallejo initially contested the lawsuit, but after a tentative ruling at the end of September in favor of Oakland Privacy, the City brought the policy back for a public hearing on October 27. On November 17, the policy returned for a second public hearing to address objections to the policy from Oakland Privacy, the ACLU of Northern California, and EFF. Among the changes were prohibitions against surveilling First Amendment-related activities and sharing data with federal immigration authorities, enhanced public logs, and Council oversight of software or hardware upgrades.

This is a significant victory, and not just for Oakland Privacy and the people of Vallejo. The power to decide whether these tools are acquired and, if so, how they are utilized should not stand unilaterally with agency executives. States, counties, cities, and transit agencies from San Francisco to Cambridge have adopted laws to ensure surveillance technology can't be acquired or used before a policy is put in writing and approved by an elected body—after they've heard from the affected public. We applaud Oakland Privacy for taking a stand against law enforcement circumventing democratic control over surveillance technologies used in our communities. 

Nathan Sheard

COVID-19 and Surveillance Tech: Year in Review 2020

1 week 5 days ago

Location tracking apps. Spyware to enforce quarantine. Immunity passports. Throughout 2020, governments around the world deployed invasive surveillance technologies to contain the COVID-19 outbreak.

But heavy-handed tactics like these undercut public trust in government, precisely when trust is needed most. They also invade our privacy and chill our free speech. And all too often, surveillance technologies disparately burden people of color.

In the United States, EFF and other digital rights advocates turned back some of the worst proposals. But they’ll be back in 2021. Until the pandemic ends, we must hold the line against ill-considered surveillance technologies.

Automated contact tracing apps

Contact tracing is a common public health response to contagious disease. In its traditional form, officials interview an infected person to determine who they had contact with, and then interview those people, too. Many have sought to automate this process with new technologies. But an app will not save us.

Some proposals would be simultaneously privacy-invasive and ineffective. For example, tracking our location with GPS or cell-site location information (CSLI) would expose whether we attended a union meeting or a BLM rally. That’s why police need a warrant to seize it. But it is not sufficiently granular to show whether two people were close enough to transmit the virus: the CDC recommends six feet of social distance, but CSLI is only accurate to a half mile and GPS to 16 feet. So EFF opposes location tracking. Yet some countries are using it.

Another approach is tracking our proximity to others by measuring Bluetooth signal strength. If two people install compatible proximity apps, and come close enough together to transmit the virus, then their apps will exchange digital tokens. Later, if one becomes ill, the other can be notified.

Proximity tracking might or might not help at the margins. It will be over-inclusive: two people standing a few feet apart might be separated by a wall. It also will be under-inclusive: many people don’t have smartphones, and many more won’t use a proximity app. Moreover, no app can fill the as-yet unmet need for traditional public health measures, such as testing, contact tracing, support for patients, PPE for health workers, social distancing, and wearing a mask.

Proximity apps must be engineered for privacy. Unfortunately, many are not. In a “centralized” model, the government has access to all the proximity data and can match it to particular people. This excessively threatens digital rights.

A better approach is Google Apple Exposure Notification (GAEN). It collects only ephemeral, random identifiers that are harder to correlate to particular individuals. Also, GAEN stores these identifiers in the users’ phones, unless a user tests positive, in which case they can upload the identifiers to a publicly accessible database. Public health authorities in many U.S. states and foreign nations sponsor GAEN-compliant apps.
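A toy model can make the decentralized design concrete. This is an illustration, not the real GAEN protocol: actual implementations derive rotating identifiers cryptographically from daily keys, while here each phone simply broadcasts random tokens and matches them locally:

```python
import secrets

class Phone:
    """Simplified model of decentralized exposure notification (illustrative only)."""

    def __init__(self):
        self.sent_tokens = []      # ephemeral identifiers this phone broadcast
        self.heard_tokens = set()  # identifiers received from nearby phones

    def broadcast(self):
        # GAEN rotates cryptographically derived identifiers roughly every
        # 15 minutes; a random 128-bit token stands in for that here.
        token = secrets.token_hex(16)
        self.sent_tokens.append(token)
        return token

    def hear(self, token):
        # Stored only on this device; no central server sees raw contacts.
        self.heard_tokens.add(token)

    def report_positive(self):
        # A user who tests positive may upload their own identifiers to a
        # public database. That is the only data that leaves the phone.
        return list(self.sent_tokens)

    def exposed(self, published):
        # Matching happens locally, on the device.
        return any(t in self.heard_tokens for t in published)

alice, bob, carol = Phone(), Phone(), Phone()
bob.hear(alice.broadcast())          # Alice and Bob come within Bluetooth range
published = alice.report_positive()  # Alice later tests positive and uploads
print(bob.exposed(published))        # True: Bob is notified, on his own phone
print(carol.exposed(published))      # False: Carol never came near Alice
```

The privacy-relevant point is visible in the structure: the public database holds only random-looking tokens from people who chose to report, and the contact graph itself never leaves anyone’s device.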

Participation must be voluntary. Higher education, for example, must not require students, faculty, and staff to submit to automated contact tracing. We need laws that prohibit schools, workplaces, and restaurants from discriminating against people who do not use proximity tracking.


Surveillance to enforce quarantine

Some countries have used surveillance technologies to enforce home quarantine. These include compulsion to wear GPS-linked shackles, to download government spyware into personal phones, and to send the government selfies with time and place stamps.

EFF opposes such tactics. Compelled spyware unduly invades the right of individuals to autonomously control their smartphones. GPS shackles invade location privacy, cause pain, and trigger false alarms. Home selfies expose sensitive information, including grooming in private, presence of other people, and expressive effects such as books and posters.

Fortunately, governments in the United States largely have not used these tactics. The exception is a small number of cases involving people who tested positive and then allegedly broke stay-at-home instructions.

Immunity passports

Some have proposed “immunity passports” to screen people for entry to public places. The premise is that a person is not fit to enter a school, workplace, or restaurant until they can prove they have tested negative for infection or supposedly obtained immunity through past infection. Such systems may require a person to use their phone to display a digital credential at a doorway.

EFF opposes such systems. They would aggravate existing social inequities in access to smartphones, medical tests, and health treatment. Moreover, the display or transmission of credentials at doorways would create new infosec vulnerabilities. These systems also would be a significant step towards national digital identification that can be used to collect and store our personal information and track our movements. And inevitable system errors would needlessly block people from going to school or work.

Further, such systems would not advance public health. Tests of infectiousness have high rates of false negatives, and do not account for new infection after testing. Likewise, it remains unclear how much protection a past infection provides against a future infection.

Fortunately, California’s governor this fall vetoed a bill (A.B. 2004) that would have laid the groundwork for immunity passports. Specifically, it would have created a blockchain-based system of “verifiable health credentials” to report COVID-19 and other medical test results. EFF opposed it.

Processing our COVID-related data

While some of the worst ideas did not gain traction in 2020, the news is not all good. Governments and corporations are processing all manner of our COVID-related data, and existing laws do not adequately secure it.

States are conducting manual contact tracing, often contracting with businesses to build new data management systems. States also are partnering with businesses to create websites where we provide our health and other information to obtain screening for COVID-19 testing and treatment. Just as the U.S. Department of Health and Human Services expanded its processing of data about people who took COVID-19 tests, the federal government announced plans to share COVID-related data with its own corporate contractors, including TeleTracking Technologies and Palantir.

Businesses are also expanding their surveillance of workers. This occurs at job sites, in the name of tracking infection, and in socially distant home offices, in the name of tracking productivity.

There are many ways to misuse our COVID-related data. Companies might divert our COVID data to advertising. All this COVID data might be stolen by identity thieves, stalkers, and foreign nations. In New Zealand, a restaurant employee even used COVID data to send harassing messages to a customer.

Moreover, public health officials and their corporate contractors might share our COVID-related data with police and immigration officials. This would frustrate containment of the outbreak, because many people will share less of their personal information if they fear the government will use it against them. Yet in some communities, police are conducting contact tracing or obtaining public health data about the home addresses of patients. The outgoing administration even proposed deploying the National Guard to hospitals to process COVID-related personal data.

Existing data privacy laws do not adequately secure our COVID-related data. For example, HIPAA’s protections of health data apply only to narrowly defined healthcare providers and their business associates. This is one more illustration of why we need a comprehensive federal consumer data privacy law.

In the short run, we need COVID-specific data privacy legislation. But efforts to enact it have stalled in Congress and state legislatures.

Next steps

As pandemic fatigue sets in, the temptation will grow to try something—anything—even if it is unlikely to contain the virus and highly likely to invade our digital rights. So, we probably haven’t heard the last of location tracking apps, immunity passports, and spyware for patients. Other bad ideas may gain momentum, like dragnet COVID-19 surveillance with face recognition, thermal imaging, or drones. And we still need new privacy laws to lock down all of our COVID-related personal data.

Looking to 2021, we must remain vigilant.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2020.

Adam Schwartz

EFF to FinCEN: Stop Pushing For More Financial Surveillance

1 week 5 days ago

Today, EFF submitted comments to the Department of the Treasury’s Financial Crimes Enforcement Network (FinCEN) opposing the agency’s proposal for new regulations of cryptocurrency transactions. As we explain in our comments, financial records can be deeply personal and revealing, containing a trove of sensitive information about people’s personal lives, beliefs, and affiliations. Regulations regarding such records must be constructed with careful consideration regarding their effect on privacy, speech and innovation. 

Even in an increasingly digital world, people have a right to engage in private financial transactions.

FinCEN’s proposed rule is neither deliberative nor thoughtful. As we’ve written before, this rule—which would require regulated businesses to keep records of cryptocurrency transactions over $3,000 USD and to report cryptocurrency transactions over $10,000 to the government—would force cryptocurrency exchanges and other money services businesses to expand their collection of identity data far beyond what they must currently do. In fact, it wouldn’t only require these businesses to collect information about their own customers, but also the information of anyone who transacts with those customers using their own cryptocurrency wallets.  

In addition to the concerns we’ve already raised, EFF believes the proposed regulation as written would undermine the civil liberties of cryptocurrency users, give the government access to troves of sensitive financial data beyond what is contemplated by the regulation, and have unintended consequences for certain blockchain technology—such as smart contracts and decentralized exchanges—that could chill innovation. 

The agency has not provided nearly enough time to consider all of these risks properly. And, by announcing this proposal with a short comment period over the winter holiday, FinCEN’s process did not allow many members of the public and experts the necessary opportunity to provide feedback on the potentially enormous consequences of this regulation. 

That’s why EFF is urging the agency not to implement this proposal. We are instead asking that FinCEN meet directly with those affected by this regulation, including innovators, technology users, and civil liberties advocates to understand the effect it will have. And we’re calling on the agency to significantly extend the comment period to a minimum of 60 days, and offer additional time for comments after any adjustments are made to the proposed regulation. 

This Rushed Proposal Threatens Financial Privacy, Speech, and Innovation

Even in an increasingly digital world, people have a right to engage in private financial transactions. These protections are crucial. We’ve seen protestors and dissidents in Hong Kong, Belarus, and Nigeria make deliberate choices to use cash or cryptocurrencies to protect themselves against surveillance. The ability to transact anonymously allows people to engage in political activities, protected in the U.S. by the First Amendment, which may be sensitive or controversial. Anonymous transactions should be protected whether those transactions occur in the physical world with cash or online. 

The proposal would require businesses to collect far more information than is necessary to achieve the agency’s policy goals. The proposed regulation purports to require cryptocurrency transaction data to be provided to the government only when the amount of the transactions exceeds a particular threshold. However, because of the nature of public blockchains, the regulation would actually result in the government gaining troves of data about cryptocurrency users far beyond what the regulation contemplates. 

Bitcoin addresses are pseudonymous, not anonymous—and the Bitcoin blockchain is a publicly viewable ledger of all transactions between these addresses. That means that if you know the name of the user associated with a particular Bitcoin address, you can glean information about all of their Bitcoin transactions that use that address. In other words, the proposed regulation would provide the government with access to a massive amount of data beyond just what the regulation purports to cover.
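
The linkage described above can be illustrated with a toy ledger. Everything in this sketch is invented (the addresses, amounts, and the idea of a four-entry ledger); a real blockchain contains hundreds of millions of entries, but the principle is the same: once one address is tied to a name, every transaction that address ever touched becomes attributable.

```python
# Toy model of a public, pseudonymous ledger: (sender, receiver, amount)
# triples visible to anyone, as on the Bitcoin blockchain. All addresses
# and amounts here are invented for illustration.
LEDGER = [
    ("addr_A", "addr_B", 0.5),
    ("addr_B", "addr_C", 0.2),
    ("addr_A", "addr_D", 1.3),
    ("addr_E", "addr_A", 2.0),
]

def transactions_involving(address):
    """Every ledger entry in which the address appears, as sender or receiver."""
    return [tx for tx in LEDGER if address in (tx[0], tx[1])]

# Suppose a single reported transaction reveals that "addr_A" belongs to
# one identified person. The government then learns not just that one
# transaction, but the person's entire history at that address:
history = transactions_involving("addr_A")
```

One above-threshold report thus yields three attributable transactions in this toy ledger, not one, which is the mechanism by which the regulation would sweep in data "beyond just what the regulation purports to cover."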

The scale of such collection introduces considerable risk. Databases of this size can become a honeypot of information that tempts bad actors, or those who might misuse it beyond its original intended use. Thousands of FinCEN’s own files have already been exposed to the public, making it clear that FinCEN’s security protocols are not adequate to prevent even large-scale leakage. This is, of course, not the first time that a sensitive government database has been leaked, mishandled, or otherwise breached. Over the past several weeks, the SolarWinds hack of U.S. government agencies has made headlines, and details are still emerging—and this is hardly the only example of a large-scale government hack. 

There are also significant Fourth Amendment concerns. As we argue in our comments:

    The proposed regulation violates the Fourth Amendment’s protections for individual privacy. Our society’s understanding of individual privacy and the legal doctrines surrounding that privacy are evolving. While 1970s-era court opinions held that consumers lose their privacy rights in the data they entrust with third parties, modern courts have become skeptical of these pre-digital decisions and have begun to draw different boundaries around our expectations of privacy. Acknowledging that our world is increasingly digital and that surveillance has become cheaper and more ubiquitous, the Supreme Court has begun to chip away at the third-party doctrine—the idea that an individual does not have a right to privacy in data shared with a third party. Some Supreme Court Justices have written that “it may be necessary to reconsider the premise that an individual has no reasonable expectation of privacy in information voluntarily disclosed to third parties.” In 1976, the Supreme Court pointed to the third-party doctrine in holding in U.S. v. Miller that the then-existing Bank Secrecy Act reporting requirements did not violate the Fourth Amendment. 

Two developments make continued reliance on the third-party doctrine suspect, including as the source for regulations such as those contemplated here. 

First, since the Miller decision, the government has greatly expanded the Bank Secrecy Act’s reach and its intrusiveness on individual financial privacy. Although the Supreme Court upheld the 1970s regulations in an as-applied challenge, Justice Powell, who authored Miller, was skeptical that more intrusive rules would pass constitutional muster. In California Bankers Association v. Shultz, Justice Powell wrote, “Financial transactions can reveal much about a person's activities, associations, and beliefs. At some point, governmental intrusion upon these areas would implicate legitimate expectations of privacy.” Government intrusion into financial privacy has dramatically increased since Miller and Shultz, likely intruding on society’s legitimate expectations of privacy and more directly conflicting with the Fourth Amendment.

Second, since Miller, we have seen strong pro-privacy opinions issued from the U.S. Supreme Court in multiple cases involving digital technology that reject the government’s misplaced reliance on the third-party doctrine. This includes: U.S. v. Jones (2012), in which the Court found that law enforcement use of a GPS location device to continuously track a vehicle over time was a search under the Fourth Amendment; Riley v. California (2014), in which the Court held that warrantless search and seizure of the data on a cell phone upon arrest was unconstitutional; and Carpenter v. U.S. (2018), in which the Court held that police must obtain a warrant before accessing cell site location information from a cell phone company. EFF is heartened to see these steps by the courts to better recognize that Americans do not sacrifice their privacy rights when interacting in our modern society, which is increasingly intermediated by corporations holding sensitive data. We believe this understanding of privacy can and should extend to our financial data. We urge FinCEN to heed the more nuanced understanding of privacy rights seen in modern court opinions, rather than anchoring its privacy thinking in precedents from a more analog time in America’s history. 

Finally, we urge FinCEN to consider the potential chilling effects its regulation could have on developing technologies. FinCEN should be extremely cautious about crafting regulation that could interfere with the growing ecosystem of smart contract technology, including decentralized exchanges. We are in the very earliest days of the exploration of smart contract technology and decentralized exchanges. Just as it would have been an error to see the early Internet as merely an extension of the existing postal service, it is important not to view the risks and opportunities of these new technologies solely through the lens of financial services. The proposed regulation would not only chill experimentation in a field that could have many potential benefits for consumers, but would also prevent American users and companies from participating when those systems are deployed in other jurisdictions.

Because of the proposed regulation’s potential impact on the civil liberties interests of technology users and potential chilling effect on innovation across a broad range of technology sectors, we urge FinCEN not to implement this proposal as it stands. Instead, we ask that it do its due diligence to ensure that civil liberties experts, innovators, technology users, and the public have an opportunity to voice their concerns about the potential impact of the proposal.

Read EFF’s full comments

Related Cases: Riley v. California and United States v. Wurie; Carpenter v. United States
Hayley Tsukayama

EFF Statement on British Court’s Rejection of Trump Administration’s Extradition Request for Wikileaks’ Julian Assange

1 week 6 days ago

Today, a British judge denied the Trump Administration’s extradition request for Wikileaks Editor Julian Assange, who is facing charges in the United States under the Espionage Act and the Computer Fraud and Abuse Act. The judge largely accepted the U.S. government’s theory of the charges against him, but ultimately determined that the extreme conditions of confinement that would be applied to Mr. Assange in the United States would create a serious risk of suicide.

EFF’s Executive Director Cindy Cohn said in a statement today:

“We are relieved that District Judge Vanessa Baraitser made the right decision to reject extradition of Mr. Assange and, despite the U.S. government’s initial statement, we hope that the U.S. does not appeal that decision. The UK court decision means that Assange will not face charges in the United States, which could have set a dangerous precedent in two ways. First, it could call into question many of the journalistic practices that writers at the New York Times, the Washington Post, Fox News, and other publications engage in every day to ensure that the American people stay informed about the operations of their government. Investigative journalism—including seeking, analyzing and publishing leaked government documents, especially those revealing abuses—has a vital role in holding the U.S. government to account. It is, and must remain, strongly protected by the First Amendment. Second, the prosecution, and the judge’s decision, embrace a theory of computer crime that is overly broad -- essentially criminalizing a journalist for discussing and offering help with basic computer activities, like the use of rainbow tables and scripts based on wget, that are regularly used in computer security and elsewhere.

While we applaud this decision, it does not erase the many years Assange has been dogged by prosecution, detainment, and intimidation for his journalistic work. It also does not erase the government’s theory, which, as in so many other cases, attempts to cast a criminal pall over routine actions because they were done with a computer. We are still reviewing the judge’s opinion and expect to have additional thoughts once we’ve completed our analysis.”

Read the judge’s full statement.

Related Cases: Bank Julius Baer & Co v. Wikileaks
rainey Reitman

Video Hearing Tuesday: ACLU, EFF Urge Court to Require Warrants for Border Searches of Digital Devices

1 week 6 days ago
Appeals Court Should Uphold Fourth Amendment Rights for International Travelers

Boston – The American Civil Liberties Union (ACLU), the Electronic Frontier Foundation (EFF), and the ACLU of Massachusetts will urge an appeals court on Tuesday to require warrants for the government to search electronic devices at U.S. airports and other ports of entry—ensuring that the Fourth Amendment protects travelers as they enter the country. The hearing is at 9:30 a.m. ET/6:30 a.m. PT on January 5, and is available to watch by livestream.

In 2017, ten U.S. citizens and one lawful permanent resident who regularly travel outside of the country with cell phones, laptops, and other electronic devices sued the Department of Homeland Security for illegal searches of their devices when they reentered the country. The suit, Alasaad v. Wolf, challenged the government’s practice of searching travelers’ electronic equipment without a warrant and usually without any suspicion that the traveler is guilty of wrongdoing.

In a historic win for digital privacy, a federal district court judge ruled in Alasaad that suspicionless electronic device searches at U.S. ports of entry violate the Fourth Amendment. The court required that border agents have reasonable suspicion that a device contains digital contraband before searching or seizing it. At Tuesday’s hearing at the U.S. Court of Appeals for the First Circuit, ACLU attorney Esha Bhandari will argue that the Constitution requires a warrant based on probable cause to search our electronic devices at the border—just as is required everywhere else in the United States.

Hearing in Alasaad v. Wolf

Tuesday, January 5
9:30 a.m. ET/6:30 a.m PT


Contact: Rebecca Jeschke, Media Relations Director
Rebecca Jeschke

A Smorgasbord of Bad Takedowns: 2020 Year in Review

2 weeks ago

Here at EFF, we take particular notice of the way that intellectual property law leads to expression being removed from the Internet. We document the worst examples in our Takedown Hall of Shame. Some, we use to explain more complex ideas. And in other cases, we offer our help.

In terms of takedowns, a January story from New York University School of Law prefigured the year to come. The law school posted a video of a panel titled “Proving Similarity,” where experts explained how song similarity is analyzed in copyright cases. Unsurprisingly, that involved playing parts of songs during the panel. And so, the video meant to explain how copyright infringement is determined was flagged by Content ID, YouTube’s automated copyright filter.

While the legal experts at, let’s check our notes, NYU Law were confident this was fair use, they were less confident that they understood how YouTube’s private appeals system worked. And, more specifically, whether challenging Content ID would lead to NYU losing its YouTube channel. They reached out privately to ask questions about the system, but got no answers. Instead, YouTube just quietly restored the video.

And with that, a year of takedowns was off. There was Dr. Drew Pinsky’s incorrect assessment that copyright law let him remove a video showing him downplaying COVID-19. A self-described Twitter troll using the DMCA to remove from Twitter an interview he did about his tactics and then using the DMCA to remove a photo of his previous takedown. And, when San Diego Comic Con went virtual, CBS ended up taking down its own Star Trek panel.

On our end, we helped Internet users push back on attempts to use IP claims as a tool to silence critics. In one case, EFF helped a Redditor win a fight to stay anonymous when Watchtower Bible and Tract Society, a group that publishes doctrines for Jehovah’s Witnesses, tried to learn their identity using copyright infringement allegations.

We also called out some truly ridiculous copyright takedowns. One culprit, the ironically named No Evil Foods, went after journalists and podcasters who reported on accusations of union-busting, claiming copyright in a union organizer’s recordings of anti-union presentations by management. We sent a letter telling them to knock it off: if the recorded speeches were even copyrightable, which is doubtful, this was an obvious fair use, and they were setting themselves up for a lawsuit under DMCA section 512(f), the provision that provides penalties for bad-faith takedowns. The takedowns stopped after that.

Another case saw a university jumping on the DMCA abuse train. Nebraska’s Doane University used a DMCA notice to take down a faculty-built website created to protest deep academic program cuts, claiming copyright in a photo of the university. One problem: that photo was actually taken by an opponent of the cuts, specifically for the website. The professor who made the website submitted a counternotice, but the university’s board was scheduled to vote on the cuts before the DMCA’s putback waiting period would expire. EFF stepped in and demanded that Doane withdraw its claim, and it worked—the website was back up before the board vote.

Copyright takedowns aren’t the only legal tool we see weaponized against online speech—brands are just as happy to use trademarks this way. Sometimes that can take the form of a DMCA-like takedown request, like the NFL used to shut down sales of “Same Old Jets” parody merchandise for long-suffering New York Jets fans. In other cases, a company might use a tool called the Uniform Domain-Name Dispute-Resolution Policy (UDRP) to take over an entire website. The UDRP lets a trademark holder take control of a domain name if it can convince a private arbitrator that Internet users would think it belonged to the brand and that the website owner registered the name in “bad faith,” without a legitimate interest in using it.

This year, we helped the owner of a domain name stand up to a UDRP action and hold on to it. Daryl Bentillo was frustrated by her experience as an Instacart shopper and registered that domain name intending to build a site that would help organize shoppers to advocate for better pay practices. But before she even had a chance to get started, Ms. Bentillo got an email saying that Instacart was trying to take her domain name away using this process she’d never heard of. That didn’t sit right with us, so we offered our help. We talked to Instacart’s attorneys about how Ms. Bentillo had every right to use the company’s name this way to refer to it (called a nominative fair use in trademark-speak)—and about how it sure looked like they were just using the UDRP process to shut down organizing efforts. Instacart was ultimately persuaded to withdraw its complaint.

Back in copyright land, we also dissected the problem of the RIAA’s takedown of youtube-dl, a popular tool for downloading videos from Internet platforms. Youtube-dl didn’t infringe on any RIAA copyright. Instead, the RIAA claimed that because DMCA 1201 makes it illegal to bypass a digital lock in order to access or modify a copyrighted work, and because youtube-dl could be used to download RIAA-member music, the tool should be removed.

RIAA and other copyright holders have argued that it’s a violation of DMCA 1201 to bypass DRM even if you’re doing it for completely lawful purposes; for example, if you’re downloading a video on YouTube for the purpose of using it in a way that’s protected by fair use.

Trying to use the notice-and-takedown process against a tool that does not infringe on any music label’s copyright and has lawful uses was an egregious abuse of the system, and we said so.

And to bring us full circle: we end with a case where discussing copyright infringement brought a takedown. Lindsay Ellis, a video creator, author, and critic, created a video called “Into the Omegaverse: How a Fanfic Trope Landed in Federal Court,” dissecting a story in which one author, Addison Cain, had sent numerous takedowns to platforms based on dubious copyright claims. Eventually, one of the targets sued, and the question of who owns what in a genre that developed entirely online ended up in court. It did not take long for Cain to send a series of takedowns against this video about her history of takedowns.

That’s when EFF stepped in. The video is a classic fair use. It uses a relatively small amount of a copyrighted work for purposes of criticism and parody in an hour-long video that consists overwhelmingly of Ellis’ original content. In short, the copyright claims (and the other, non-copyright claims) were deficient. We were happy to explain this to Cain and her lawyer.

It’s been an interesting year for takedowns. Some of these takedowns involved automated filters, a problem we dived deep into with our whitepaper Unfiltered: How YouTube’s Content ID Discourages Fair Use and Dictates What We See Online. Filters like Content ID not only remove lots of lawful expression; they also sharply restrict what we do see. Remember: if you encounter problems with bogus legal threats, DMCA takedowns, or filters, you can contact EFF at

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2020.

Cara Gagliano

Banning Government Use of Face Recognition Technology: 2020 Year in Review

2 weeks ago

If there was any question about the gravity of the problems with police use of face surveillance technology, 2020 wasted no time in proving those problems dangerously real. Thankfully, from Oregon to Massachusetts, local lawmakers responded by banning their local governments’ use.

The Alarm 

On January 9, after first calling and threatening to arrest him at work, Detroit police officers traveled to nearby Farmington Hills to arrest Robert Williams in front of his wife, children, and neighbors—for a crime he did not commit. He was erroneously connected to the crime by face recognition technology that matched an image of Mr. Williams to video from a December 2018 shoplifting incident. Later this year, Detroit police erroneously arrested a second man because of another misidentification by face recognition technology.

For Robert Williams, his family, and millions of Black and brown people throughout the country, the research left the realm of the theoretical and became all too real. Experts at MIT Media Lab, the National Institute of Standards and Technology, and Georgetown's Center on Privacy and Technology have shown that face recognition technology is riddled with error, especially for people of color. It is one more of a long line of police tools and practices that exacerbate historical bias in the criminal system.

The Response 

2020 will undoubtedly come to be known as the year of the pandemic. It will also be remembered for unprecedented Black-led protest against police violence and concerns that surveillance of political activity will chill our First Amendment rights. Four cities joined the still-growing list of communities that have stood up for their residents’ rights by banning local government use of face recognition. Just days after Mr. Williams’ arrest, Cambridge, MA—an East Coast research and technology hub—became the largest East Coast city to ban government use of face recognition technology. It turned out to be a distinction it wouldn’t retain long.

In February and March, Chicago and New York City residents and organizers called on local lawmakers to pass their own bans. However, few could have predicted that a month later, organizing, civic engagement, and life as we knew it would change dramatically. As states and municipalities began implementing shelter-in-place orders to suppress an escalating global pandemic, City Councils and other lawmaking bodies adapted to social distancing and remote meetings.

As those of us privileged enough to work from home adjusted to Zoom meetings, protests in the name of Breonna Taylor and George Floyd spread throughout the country.

Calls to end police use of face recognition technology were joined by calls for greater transparency and accountability. Those calls have not yet been answered with a local ban on face recognition in New York City.

As New Yorkers continue to push for a ban, one enacted bill will shine a light on NYPD use of all manner of surveillance technology. That light of transparency will inform lawmakers and the public of the breadth and dangers of NYPD’s use of face recognition and other privacy-invasive technology. After three years of resistance from the police department and the mayor, New York’s City Council passed the POST Act with a veto-proof majority. While lacking the community control measures in stronger surveillance equipment ordinances, the POST Act requires the NYPD to publish surveillance impact and use policies for each of its surveillance technologies. This will end decades of the department’s refusal to disclose information and policies about its surveillance arsenal.


End Face Surveillance in your community

Building on the momentum of change driven by political unrest and protest, and through the tireless work of local organizers including the ACLU of Massachusetts, Boston’s City Council took strong action just days after New York’s City Council passed the POST Act. It voted unanimously to join neighboring Cambridge in protecting their respective residents from police use of face recognition. In the preceding weeks, EFF advocated for, and council members accepted, improvements to the ordinance. One closed a loophole that might have allowed police to ask third parties to collect face recognition evidence for them. Another change provides attorney fees to a person who brings a successful suit against the City for violating the ban.

Not to be outdone by their peers in California and Massachusetts, municipal lawmakers in Oregon and Maine also banned their own agencies from using the technology in 2020. In Portland, Maine, the City Council voted unanimously to ban the technology in August. Then in November, the City’s voters passed the first ballot measure prohibiting government use of face recognition.

Across the country, the Portland, Oregon, City Council voted unanimously in September to pass their government ban (as well as a ban on private use of face recognition in places of public accommodation). In the days leading up to the vote, a coalition organized by PDX Privacy, an Electronic Frontier Alliance member, presented local lawmakers with a petition calling for an end to government use of face surveillance, signed by over 150 local business owners, technologists, workers, and residents.



Complementing the work of local lawmakers, federal lawmakers are stepping forward. Senators Jeff Merkley and Ed Markey, and Representatives Ayanna Pressley, Pramila Jayapal, Rashida Tlaib, and Yvette Clarke introduced the Facial Recognition and Biometric Technology Moratorium Act of 2020 (S.4084/H.R.7356). If passed, it would ban federal agencies like Immigration and Customs Enforcement, the Drug Enforcement Administration, the Federal Bureau of Investigation, and Customs and Border Protection from using face recognition to track and identify (and misidentify) millions of U.S. residents and travelers. The act would also withhold certain federal funding from local and state governments that use face recognition.

What's next? 

While some high-profile vendors this year committed to pressing pause on the sale of face recognition technology to law enforcement, 2020 was also a year where the public became much more familiar with how predatory the industry can be. Thus, through our About Face campaign and work of local allies, EFF will continue to support the movement to ban all government use of face recognition technology.

With a new class of recently elected lawmakers poised to take office in the coming weeks, now is the time to reach out to your local city council, board of supervisors, and state and federal representatives. Tell them to stand with you in ending government use of face recognition, a dangerous technology with a proven ability to chill essential freedoms and amplify systemic bias. 

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2020.

Nathan Sheard

DNS, DoH, and ODoH, Oh My: Year-in-Review 2020

2 weeks 1 day ago

Government knowledge of what sites activists have visited can put them at risk of serious injury, arrest, or even death. This makes it a vitally important priority to secure DNS. DNS over HTTPS (DoH) is a protocol that encrypts the Domain Name System (DNS) by performing lookups over the secure HTTPS protocol. DNS translates human-readable domain names into machine-routable IP addresses, but it has traditionally done this via cleartext queries over UDP port 53 (Do53). This allows anyone who can snoop on your connection—whether it’s your government, your ISP, or the hacker next to you on the same coffee shop WiFi—to see what domain you’re accessing and when.
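
To make the contrast concrete, here is a minimal sketch of what a DoH client does under RFC 8484: it serializes an ordinary DNS question into its binary wire format, then base64url-encodes it (unpadded) into the `dns` parameter of an HTTPS GET request. No network traffic is sent in this sketch, and the Cloudflare resolver URL is only an example of a public DoH endpoint.

```python
import base64
import struct

def build_dns_query(hostname: str, qtype: int = 1) -> bytes:
    """Serialize a DNS query for `hostname` (qtype 1 = an A record)."""
    # Header: ID=0 (RFC 8484 recommends 0 to aid HTTP caching), the
    # recursion-desired flag set, one question, no other records.
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # Question name: each label length-prefixed, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    question = qname + struct.pack("!HH", qtype, 1)  # QCLASS 1 = IN
    return header + question

def doh_get_url(resolver: str, hostname: str) -> str:
    """Base64url-encode the query, unpadded per RFC 8484, into a GET URL."""
    wire = build_dns_query(hostname)
    encoded = base64.urlsafe_b64encode(wire).rstrip(b"=").decode("ascii")
    return f"{resolver}?dns={encoded}"

url = doh_get_url("https://cloudflare-dns.com/dns-query", "example.com")
```

Because the resulting request rides inside an ordinary HTTPS connection, an on-path observer sees only encrypted traffic to the resolver, rather than the cleartext hostname that a Do53 query would expose.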

In 2019, the effort to secure DNS through DoH made tremendous progress both in terms of the deployment of DoH infrastructure and in the Internet Engineering Task Force (IETF), an Internet governance body tasked with standardizing the protocols we all rely on. This progress was made despite large pushback by the Internet Service Providers’ Association in the UK, citing difficulties DoH would present to British ISPs, which are mandated by law to filter adult content.

2020 has also seen great strides in the deployment of DNS over HTTPS (DoH). In February, Firefox began the rollout of DoH to its users in the US, using Cloudflare’s DoH infrastructure to provide lookups by default. Google’s Chrome browser followed suit in May by switching users to DoH if their DNS provider supports it. Meanwhile, the list of publicly available DoH resolvers has expanded to the dozens, many of which implement strong privacy policies, such as not keeping connection logs.

This year’s expansion of DoH deployments has alleviated some of the problems critics have cited, such as the centralization of DoH infrastructure. Previously, only a few large Internet technology companies like Cloudflare and Google had deployed DoH servers at scale. This facilitated these companies’ access to large troves of DNS query data, which could theoretically be exploited to mine sensitive data on DoH users. Mozilla has sought to protect their Firefox users from this danger by requiring the browser’s DoH resolvers to observe strict privacy practices, outlined in their Trusted Recursive Resolver (TRR) policy document. Comcast joined Mozilla’s TRR partners Cloudflare and NextDNS in June.

In addition to policy and deployment strategies to alleviate the privacy concerns of DoH infrastructure centralization, a group of University of Washington academics and Cloudflare technologists published a paper late last month proposing a new protocol called Oblivious DNS over HTTPS (ODoH). The protocol introduces a proxy node to the DoH network layout. Instead of directly requesting records via DoH, a client creates a request for the DNS record, along with a symmetric key of their choice. The client then encrypts the request and symmetric key to the public key of the DoH server they wish to act as a resolver. The client sends this request to the proxy, along with the identity of the DoH resolver they wish to use. The proxy removes all identifying pieces of information from the request, such as the requester's IP address, and forwards the request to the resolver. The resolver decrypts the request and symmetric key, recursively resolves the request, encrypts the response to the symmetric key provided, and sends it back to the ODoH proxy. The proxy forwards the encrypted response to the client, which is then able to decrypt it using the symmetric key it has retained in memory, and retrieve the DNS response. At no point does the proxy see the unencrypted request, nor does the resolver ever see the identity of the client.
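
The flow above can be sketched as a toy model. To keep the sketch self-contained, the "encryption" below is a stand-in (XOR against a hash-derived keystream) used purely to show who can read what at each hop; real ODoH uses proper public-key encryption (HPKE), and the one-record DNS database here is invented.

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: XOR data against a SHA-256-derived keystream.
    NOT real cryptography -- a placeholder so the message flow is runnable."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(x ^ y for x, y in zip(data, stream))

class Resolver:
    """Decrypts requests and resolves them; never learns who the client is."""
    def __init__(self):
        self.key = secrets.token_bytes(32)       # stands in for an HPKE keypair
        self.records = {"example.com": ""}  # toy DNS database

    def handle(self, sealed_request: bytes) -> bytes:
        # Open the (query, symmetric key) pair sealed to our key.
        plaintext = keystream_xor(self.key, sealed_request)
        hostname, sym_key = plaintext[:-32].decode(), plaintext[-32:]
        answer = self.records.get(hostname, "NXDOMAIN")
        # Encrypt the answer to the client's chosen symmetric key.
        return keystream_xor(sym_key, answer.encode())

class Proxy:
    """Forwards sealed requests, stripping identity; sees only ciphertext."""
    def forward(self, client_ip: str, sealed: bytes, resolver: Resolver) -> bytes:
        # client_ip is deliberately dropped: the resolver never receives it.
        return resolver.handle(sealed)

def client_lookup(hostname: str, proxy: Proxy, resolver: Resolver) -> str:
    sym_key = secrets.token_bytes(32)            # fresh per-query key
    plaintext = hostname.encode() + sym_key
    # Seal query + key so only the resolver can open it (in real ODoH this
    # uses the resolver's public key, which the proxy cannot invert).
    sealed = keystream_xor(resolver.key, plaintext)
    sealed_answer = proxy.forward("", sealed, resolver)
    return keystream_xor(sym_key, sealed_answer).decode()
```

The point of the structure is visible in the code: the proxy handles only ciphertext plus the client's network identity, while the resolver handles only plaintext queries with the identity already stripped, so neither party alone can pair a person with a lookup.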

ODoH guarantees that, in the absence of collusion between the proxy and the resolver, no one entity is able to determine both the identity of the requester and the content of the request. This is important because if powerful entities (whether it be your government, ISP, or even DNS resolver) know which people accessed what domain (and when), it gives that entity enormous power over those people. ODoH gives users a technological way to ensure that their domain lookups are secure and private so long as they trust that the proxy and the resolver do not join forces. This is a much lower level of trust than trusting that a single entity does not misuse the DNS queries you send them.

Looking ahead, one possibility worries us: using ODoH gives software developers an easy way to comply with the demands of a censorship regime in order to distribute their software without telling the regime the identity of users they’re censoring. If a software developer wished to gain distribution rights in Saudi Arabia or China, for example, they could choose a reputable ODoH proxy to connect to a resolver that refuses to resolve censored domains. A version of their software would be allowed for distribution in these countries, so long as it had a censorious resolver baked in. This would remove any potential culpability that software developers have for revealing the identity of a user to a government that can put them in danger, but it also facilitates the act of censorship. In traditional DoH, this is not possible. Giving developers an easy-out by facilitating “anonymous” censorship is a worrying prospect.

Nevertheless, the expansion of DoH infrastructure and conceptualization of ODoH is a net win for the Internet. Going into 2021, these developments give us hope for a future where our domain lookups will universally be both secure and private. It’s about time.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2020.

Bill Budington