Forced Arbitration Thwarts Legal Challenge to AT&T’s Disclosure of Customer Location Data

Location data generated from our cell phones paint an incredibly detailed picture of our movements and private lives. Despite the sensitive nature of this data and a federal law prohibiting cellphone carriers from disclosing it, repeated unauthorized disclosures over the last several years show that carriers will sell this sensitive information to almost any willing buyer.

With cellphone carriers brazenly violating their customers’ privacy and the Federal Communications Commission moving slowly to investigate, it fell to consumers to protect themselves. That’s why in June 2019 EFF filed a lawsuit representing customers challenging AT&T’s unlawful disclosure of their location data. Our co-counsel are lawyers at Hagens Berman Sobol Shapiro LLP. The case, Scott v. AT&T, alleged that AT&T had violated a federal privacy law protecting cellphone customers’ location data, among other claims.

How AT&T Compelled Arbitration

That legal challenge, however, quickly ran into an all-too-familiar roadblock: the arbitration agreements AT&T forces its customers to sign every time they buy a cellphone or new service from the company. AT&T claimed that these agreements prevented the Scott case from proceeding.

The court ended up dismissing the plaintiffs’ lawsuit earlier this year. The way it did so demonstrates why Congress needs to change federal law so that the public can meaningfully protect themselves from companies’ abusive practices.

In response to the lawsuit, AT&T first moved to compel the plaintiffs to arbitration, arguing that because they had signed arbitration agreements—buried deep within an ocean of contract terms—they had no right to sue.

But under California law, AT&T cannot enforce contracts, like those at issue here, that prevent people from seeking court orders, called “public injunctions,” to prevent future harm to the public. California law also recognizes that these one-sided “contracts of adhesion” can sometimes be so unfair that they cannot be enforced. We argued that both of these principles voided AT&T’s contracts. Our clients sought to protect the broader public’s privacy by preventing AT&T from disclosing any customer’s location data without notice and consent, and to stop AT&T from publicly misrepresenting its practices going forward.

AT&T responded by moving to dismiss the public injunction claims, asserting that because the company stopped disclosing customer location data to certain third parties identified in media reports, plaintiffs had no legal basis, known as standing, to seek a public injunction. AT&T’s strategy was clear: rather than admit that it had done anything wrong in the past, the company argued that because it had stopped disclosing customer location data, there was no future public harm that the court needed to prohibit via a public injunction. Because no public injunction was necessary, AT&T argued, California’s rule against the arbitration agreements did not apply and the plaintiffs remained subject to them.

We did not trust AT&T’s representations that it stopped disclosing customer location data, particularly because the company had previously promised to stop disclosing the same data, only for media reports to later show that the disclosures were ongoing. Additionally, AT&T was not clear about whether it had other programs or services that disclosed the same location data without customers’ knowledge and consent.

The plaintiffs spent months trying to learn more about AT&T’s location data sharing practices in the face of AT&T’s stonewalling. What we found was concerning: AT&T continued to disclose customer location data, including to enable commercial call routing by third-party services, without customers’ notice and consent. We asked the court to let the case proceed, arguing that this information undercut AT&T’s claims that it had stopped its harmful practices.

The court sided with AT&T. It ruled that the evidence did not establish that there was an ongoing risk that AT&T would disclose customer location data in the future and thus plaintiffs lacked standing to seek a public injunction. Next, the court upheld the legality of AT&T’s one-sided contracts and ruled that plaintiffs could be forced into arbitration.

We disagree with the court’s ruling in multiple respects. The court largely ignored evidence in the record showing that AT&T continues to disclose customer location data, putting all of its customers’ privacy at risk. It also mischaracterized plaintiffs’ allegations, allowing the court to avoid having to wrestle with AT&T’s ongoing privacy failures. Finally, the court failed to protect consumers subject to AT&T’s one-sided arbitration agreements—these contracts are fundamentally unfair and their continued enforcement is unjust.

Importantly, the court did not rule on the merits: it did not decide whether AT&T’s disclosure of customer location data was lawful. Instead, it sidestepped that question by deciding that the plaintiffs’ case didn’t belong in federal court.

Next Steps: Legislative Reform of Arbitration Agreements

The court’s decision to enforce AT&T’s arbitration agreement is problematic because it prevents consumers from vindicating their rights under a longstanding federal privacy law written to protect them. Unlike other areas of consumer privacy where comprehensive federal legislation is sorely needed, Congress has already prohibited phone services like AT&T from disclosing customer location data without notice and consent.

The legislative problem this case highlights is different: rather than writing a new law, Congress needs to amend an existing one—the Federal Arbitration Act. Arbitration was originally intended to allow large, sophisticated entities like corporations to avoid expensive legal fights. Today, however, it is used to prevent consumers, employees, and anyone with less bargaining power from having any meaningful redress in court. Congress can easily fix this injustice by prohibiting forced arbitration in one-sided contracts of adhesion, and it’s past time it did so.

Likewise, when Congress enacts a comprehensive consumer data privacy law, it must bar enforcement of arbitration agreements that unfairly limit user enforcement of their legal rights in court. The better proposed bills do so.

Despite the federal court’s dismissal of the case against AT&T, we remain hopeful that the FCC will take action against the company for its disclosure of location data. The agency began an enforcement proceeding last year, and we hope that once President Biden appoints new FCC leadership, the agency will move quickly to hold AT&T accountable.

Related Cases: Geolocation Privacy
Aaron Mackey

California: Demand Broadband for All

From the pandemic to the Frontier bankruptcy to the ongoing failures in remote learning, we’ve seen now more than ever how current broadband infrastructure fails to meet the needs of the people. This pain is particularly felt in already under-served communities, urban and rural alike, where poverty and lack of choice leave millions at the mercy of monopolistic Internet Service Providers (ISPs) who have functionally abandoned them.

 Take Action

Tell your Senators to Support S.B. 4

This is why EFF is part of a coalition of nonprofits, private-sector companies, and local governments in support of S.B. 4. Authored by California State Senator Lena Gonzalez, the bill would promote construction of the 21st century infrastructure necessary to finally put a dent in, and eventually close, the digital divide in California.

S.B. 4 passed out of the California Senate Energy, Utilities, and Communications Committee by a vote of 11-1 on April 12. This shows that lawmakers who understand these issues recognize how vital it is for communities suffering at the hands of ISP monopolies to have greater opportunities to get the Internet access they need.

If the monopolistic ISPs didn’t deliver adequate service during a time when many Californians’ entire lives depended on the quality of their broadband, they aren’t coming to deliver it now. It is high time local communities were allowed to take the future into their own hands and build out what they need. S.B. 4 is California’s path to doing so.



Chao Liu

Why EFF Supports Repeal of Qualified Immunity

Our digital rights are only as strong as our power to enforce them. But when we sue government officials for violating our digital rights, they often get away with it because of a dangerous legal doctrine called “qualified immunity.”

Do you think you have a First Amendment right to use your cell phone to record on-duty police officers, or to use your social media account to criticize politicians? Do you think you have a Fourth Amendment right to privacy in the content of your personal emails? Courts often protect these rights. But some judges invoke qualified immunity to avoid affirmatively recognizing them, or if they do recognize them, to avoid holding government officials accountable for violating them.

Because of these evasions of judicial responsibility to enforce the Constitution, some government officials continue to invade our digital rights. The time is now for legislatures to repeal this doctrine.

What is Qualified Immunity?

In 1871, at the height of Reconstruction following the Civil War, Congress enacted a landmark law empowering people to sue state and local officials who violated their constitutional rights. This was a direct response to state-sanctioned violence against Black people that continued despite the formal end of slavery. The law is codified today at 42 U.S.C. § 1983.

In 1967, the U.S. Supreme Court first created a “good faith” defense against claims for damages (i.e., monetary compensation) under this law. In 1982, the Court broadened this defense, to create immunity from damages if the legal right at issue was not “clearly established” at the time the official violated it. Thus, even if a judge holds that a constitutional right exists, and finds that a government official violated this right, the official nonetheless is immune from paying damages—if that right was not “clearly established” at the time.

Qualified immunity directly harms people in two ways. First, many victims of constitutional violations are not compensated for their injury. Second, many more people suffer constitutional violations, because the doctrine removes an incentive to government officials to follow the Constitution.

The consequences are shocking. For example, though a judge held that these abusive acts violated the Constitution, the perpetrators evaded responsibility through qualified immunity when:

  • Jail officials subjected a detainee to seven months of solitary confinement because he asked to visit the commissary.
  • A police officer pointed a gun at a man’s head, though he had already been searched, was calmly seated, and was being guarded by a second officer.

It gets worse. Judges had been required to engage in a two-step qualified immunity analysis. First, they determined whether the government official violated a constitutional right—that is, whether the right in fact exists. Second, they determined whether that right was clearly established at the time of the incident in question. But in 2009, the U.S. Supreme Court held that a federal judge may skip the first step, grant an official qualified immunity, and never rule on what the law is going forward.

As a result, many judges shirk their responsibility to interpret the Constitution and protect individual rights. This creates a vicious cycle, in which legal rights are not determined, allowing government officials to continue harming the public because the law is never “clearly established.” For example, judges declined to decide whether these abuses were unconstitutional:

  • A police officer attempted to shoot a nonthreatening pet dog while it was surrounded by children, and in doing so shot a child.
  • Police tear gassed a home, rendering it uninhabitable for several months, after a resident consented to police entry to arrest her ex-boyfriend.

In the words of one frustrated judge:

The inexorable result is “constitutional stagnation”—fewer courts establishing law at all, much less clearly doing so. Section 1983 meets Catch-22. Plaintiffs must produce precedent even as fewer courts are producing precedent. Important constitutional questions go unanswered precisely because no one’s answered them before. Courts then rely on that judicial silence to conclude there’s no equivalent case on the books. No precedent = no clearly established law = no liability. An Escherian Stairwell. Heads government wins, tails plaintiff loses.

Qualified Immunity Harms Digital Rights

Over and over, qualified immunity has undermined judicial protection of digital rights. This is not surprising. Many police departments and other government agencies use high-tech devices in ways that invade our privacy or censor our speech. Likewise, when members of the public use novel technologies in ways government officials dislike, they often retaliate. Precisely because these abuses concern cutting-edge tools, there might not be clearly established law. This invites qualified immunity defenses against claims of digital rights violations.

Consider the First Amendment right to use our cell phones to record on-duty police officers. Federal appellate courts in the First, Third, Fifth, Seventh, Ninth, and Eleventh Circuits have directly upheld this right. (EFF has advocated for this right in many amicus briefs.)

Yet last month, in a case called Frasier v. Evans, the Tenth Circuit held that this digital right was not clearly established. Frasier had used his tablet to record Denver police officers punching a suspect in the face as his head bounced off the pavement. Officers then retaliated against Frasier by detaining him, searching his tablet, and attempting to delete the video. The court granted the officers qualified immunity, rejecting Frasier’s claim that the officers violated the First Amendment.

Even worse, the Tenth Circuit refused to rule on whether, going forward, the First Amendment protects the right to record on-duty police officers. The court wrote: “we see no reason to risk the possibility of glibly announcing new constitutional rights … that will have no effect whatsoever on the case.” But a key function of judicial precedent is to protect the public from further governmental abuses. Thus, when the Third Circuit reached this issue in 2017, while it erroneously held that this right was not clearly established, it properly recognized this right going forward.

Qualified immunity has harmed other EFF advocacy for digital rights. To cite just two examples:

  • In Rehberg v. Paulk, we represented a whistleblower subjected to a bogus subpoena for his personal emails. The court erroneously held it was not clearly established that the Fourth Amendment protects email content, and declined to decide this question going forward.
  • In Hunt v. Regents, we filed an amicus brief arguing that a public university violated the First Amendment by disciplining a student for their political speech on social media. The court erroneously held that the student’s rights were not clearly established, and declined to decide the issue going forward.
The Movement to Repeal Qualified Immunity

A growing chorus of diverse stakeholders, ranging from the Cato Institute and the Institute for Justice to the ACLU, is demanding legislation to repeal this destructive legal doctrine. A recent “cross-ideological” amicus brief brought together the NAACP and the Alliance Defending Freedom. Activists against police violence also demand repeal.

This movement is buoyed by legal scholars who show the doctrine has no support in the 1871 law’s text and history. Likewise, judges required to follow the doctrine have forcefully condemned it.

Congress is beginning to heed the call. Last month, the U.S. House of Representatives passed the George Floyd Justice in Policing Act (H.R. 1280), which would repeal qualified immunity as to police. Even better, the Ending Qualified Immunity Act (S. 492) would repeal it as to all government officials. It was originally introduced by Rep. Ayanna Pressley (D-Mass.) and Rep. Justin Amash (L-Mich.).

States and cities are doing their part, too. Colorado, New Mexico, and New York City recently enacted laws to allow lawsuits against police misconduct, with no qualified immunity defense. A similar bill is pending in Illinois.

Next Steps

EFF supports legislation to repeal qualified immunity—a necessary measure to ensure that when government officials violate our digital rights, we can turn to the courts for justice. We urge you to do the same.

Adam Schwartz

After Cookies, Ad Tech Wants to Use Your Email to Track You Everywhere

Cookies are dying, and the tracking industry is scrambling to replace them. Google has proposed Federated Learning of Cohorts (FLoC), TURTLEDOVE, and other bird-themed tech that would have browsers do some of the behavioral profiling that third-party trackers do today. But a coalition of independent surveillance advertisers has a different plan. Instead of stuffing more tracking tech into the browser (which they don’t control), they’d like to use more stable identifiers, like email addresses, to identify and track users across their devices.

There are several proposals from ad tech providers to preserve “addressable media” (read: individualized surveillance advertising) after cookies die off. We’ll focus on just one: Unified Identifier 2.0, or UID2 for short, developed by independent ad tech company The Trade Desk. UID2 is a successor to The Trade Desk’s cookie-based “unified ID.” Much like FLoC, UID2 is not a drop-in replacement for cookies, but aims to replace some of their functionality. It won’t replicate all of the privacy problems of third-party cookies, but it will create new ones. 

There are key differences between UID2 and Google’s proposals. FLoC will not allow third-party trackers to identify specific people on its own. There are still big problems with FLoC: it continues to enable auxiliary harms of targeted ads, like discrimination, and it bolsters other methods of tracking, like fingerprinting. But FLoC’s designers intend to move towards a world with less individualized third-party tracking. FLoC is a misguided effort with some laudable goals.

In contrast, UID2 is supposed to make it easier for trackers to identify people. It doubles down on the track-profile-target business model. If UID2 succeeds, faceless ad tech companies and data brokers will still track you around the web—and they’ll have an easier time tying your web browsing to your activity on other devices. UID2’s proponents want advertisers to have access to long-term behavioral profiles that capture nearly everything you do on any Internet-connected device, and they want to make it easier for trackers to share your data with each other. Despite its designers’ ill-founded claims about “privacy” and “transparency,” UID2 is a step backward for user privacy.

How Does UID2 Work?

In a nutshell, UID2 is a series of protocols for collecting, processing, and passing around users’ personally-identifying information (“PII”). Unlike cookies or FLoC, UID2 doesn’t aim to change how browsers work; rather, its designers want to standardize how advertisers share information. The UID2 authors have published a draft technical standard on GitHub. Information moves through the system like this:

  1. A publisher (like a website or app) asks a user for their PII, such as an email address or a phone number.
  2. The publisher shares that PII with a UID2 “operator” (an ad tech firm).
  3. The operator hashes the PII to generate a “Unified Identifier” (the UID2). This is the number that identifies the user in the system.
  4. A centralized administrator (perhaps The Trade Desk itself) distributes encryption keys to the operator, who encrypts the UID2 to generate a “token.” The operator sends this encrypted token back to the publisher.
  5. The publisher shares the token with advertisers.
  6. Advertisers who receive the token can freely share it throughout the advertising supply chain.
  7. Any ad tech firm who is a “compliant member” of the ecosystem can receive decryption keys from the administrator. These firms can decrypt the token into a raw identifier (a UID2). 
  8. The UID2 serves as the basis for a user profile, and allows trackers to link different pieces of data about a person together. Raw UID2s can be shared with data brokers and other actors within the system to facilitate the merging of user data.
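A rough sketch of steps 1–4 in Python may help make the flow concrete. The salted hash of a normalized email mirrors the general approach described in the draft spec, but every name here is an illustrative assumption, and a standard-library HMAC stands in for the real system’s reversible encryption with administrator-distributed keys:

```python
import base64
import hashlib
import hmac
import secrets

def make_uid2(email: str, salt: bytes) -> str:
    """Steps 1-3: normalize the email and hash it into a raw identifier."""
    normalized = email.strip().lower()
    digest = hashlib.sha256(salt + normalized.encode("utf-8")).digest()
    return base64.b64encode(digest).decode("ascii")

def make_token(uid2: str, key: bytes) -> str:
    """Step 4 (stand-in): the operator turns the UID2 into a token.

    The real system uses reversible encryption so that "compliant
    members" can decrypt tokens back into raw UID2s; an HMAC is used
    here only to keep the sketch dependency-free.
    """
    mac = hmac.new(key, uid2.encode("ascii"), hashlib.sha256).digest()
    return base64.b64encode(mac).decode("ascii")

key = secrets.token_bytes(32)    # in the real system, distributed by the administrator
salt = b"example-rotating-salt"  # operators would rotate salts in practice

# The same person yields the same UID2 regardless of device, site,
# or formatting quirks -- which is what enables cross-device linkage.
uid_a = make_uid2("Alice@example.com", salt)
uid_b = make_uid2("alice@example.com ", salt)
assert uid_a == uid_b

token = make_token(uid_a, key)  # what the publisher passes to advertisers
```

The determinism is the point: any two parties who learn your email derive the same UID2, which is exactly what lets separately collected profiles be merged.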

The description of the system raises several questions. For example:

  • Who will act as an “administrator” in the system? Will there be one or many, and how will this impact competition on the Internet? 
  • Who will act as an “operator?” Outside of operators, who will the “members” of the system be? What responsibilities towards user data will these actors have?
  • Who will have access to raw UID2 identifiers? The draft specification implies that publishers will only see encrypted tokens, but most advertisers and data brokers will see raw, stable identifiers.

What we do know is that a new identifier, the UID2, will be generated from your email. This UID2 will be shared among advertisers and data brokers, and it will anchor their behavioral profiles about you. And your UID2 will be the same across all your devices.

How Does UID2 Compare With Cookies?

Cookies are associated with a single browser. This makes it easy for trackers to gather browsing history. But they still need to link cookie IDs to other information—often by working with a third-party data broker—in order to connect that browsing history to activity on phones, TVs, or in the real world. 

UID2s will be connected to people, not devices. That means an advertiser who collects a UID2 from a website can link it to the UID2s it collects through apps, connected TVs, and connected vehicles belonging to the same person. That’s where the “unified” part of UID2 comes in: it’s supposed to make cross-device tracking as easy as cross-site tracking used to be.

UID2 is not a drop-in replacement for cookies. One of the most dangerous features of cookies is that they allow trackers to stalk users “anonymously.” A tracker can set a cookie in your browser the first time you open a new window; it can then use that cookie to start profiling your behavior before it knows who you are. This “anonymous” profile can then be used to target ads on its own (“we don’t know who this person is, but we know how they behave”) or it can be stored and joined with personally-identifying information later on.

In contrast, the UID2 system will not be able to function without some kind of input from the user. In some ways, this is good: it means if you refuse to share your personal information on the Web, you can’t be profiled with UID2. But this will also create new incentives for sites, apps, and connected devices to ask users for their email addresses. The UID2 documents indicate that this is part of the plan: 

Addressable advertising enables publishers and developers to provide the content and services consumers have come to enjoy, whether through mobile apps, streaming TV, or web experiences. … [UID2] empowers content creators to have the value exchange conversations with consumers while giving them more control and transparency over their data.

The standard authors take for granted that “addressable advertising” (and tracking and profiling) is necessary to keep publishers in business (it’s not). They also make it clear that under the UID2 framework, publishers are expected to demand PII in exchange for content.

How UID2 will work on websites, according to the documentation.

This creates bad new incentives for publishers. Some sites already require log-ins to view content. If UID2 takes off, expect many more ad-driven websites to ask for your email before letting you in. With UID2, advertisers are signaling that publishers will need to acquire, and share, users’ PII before they can serve the most lucrative ads. 

Where Does Google Fit In?

In March, Google announced that it “will not build alternate identifiers to track individuals as they browse across the web, nor... use them in [its] products.” Google has clarified that it won’t join the UID2 coalition, and won’t support similar efforts to enable third-party web tracking. This is good news—it presumably means that advertisers won’t be able to target users with UID2 in Google’s ad products, the most popular in the world. But UID2 could succeed despite Google’s opposition.

Unified ID 2.0 is designed to work without the browser’s help. It relies on users sharing personal information, like email addresses, with the sites they visit, and then uses that information as the basis for a cross-context identifier. Even if Chrome, Firefox, Safari, and other browsers want to rein in cross-site tracking, they will have a hard time preventing websites from asking for a user’s email address.

Google’s commitment to eschew third-party identifiers doesn’t mean said identifiers are going away. And it doesn’t justify creating new targeting tech like FLoC. Google may try to present these technologies as alternatives, and force us to choose: see, FLoC doesn’t look so bad when compared with Unified ID 2.0. But this is a false dichotomy. It’s more likely that, if Google chooses to deploy FLoC, it will complement—not replace—a new generation of identifiers like UID2.

UID2 focuses on identity, while FLoC and other “privacy sandbox” proposals from Google focus on revealing trends in your behavior. UID2 will help trackers capture detailed information about your activity on the apps and websites to which you reveal your identity. FLoC will summarize how you interact with the rest of the sites on the web. Deployed together, they could be a potent surveillance cocktail: specific, cross-context identifiers connected to comprehensive behavioral labels.

What Happens Next?

UID2 is not a revolutionary technology. It’s another step in the direction that the industry has been headed for some time. Using real-world identifiers has always been more convenient for trackers than using pseudonymous cookies. Ever since the introduction of the smartphone, advertisers have wanted to link your activity on the Web to what you do on your other devices. Over the years, a cottage industry has developed among data brokers, selling web-based tracking services that link cookie IDs to mobile ad identifiers and real-world info. 

The UID2 proposal is the culmination of that trend. UID2 is more of a policy change than a technical one: the ad industry is moving away from the anonymous profiling that cookies enabled, and is planning to demand email addresses and other PII instead. 

The demise of cookies is good. But if tracking tech based on real-world identity replaces them, it will be a step backward for users in important ways. First, it will make it harder for users in dangerous situations—for whom web activity could be held against them—to access content safely. Browsing the web anonymously may become more difficult or outright impossible. UID2 and its ilk will likely make it easier for law enforcement, intelligence agencies, militaries, and private actors to buy or demand sensitive data about real people.

Second, UID2 will incentivize ad-driven websites to erect “trackerwalls,” refusing entry to users who’d prefer not to share their personal information. Though its designers tout “consent” as a guiding principle, UID2 is more likely to force users to hand over sensitive data in exchange for content. For many, this will not be a choice at all. UID2 could normalize “pay-for-privacy,” widening the gap between those who are forced to give up their privacy for first-class access to the Internet, and those who can afford not to.

Bennett Cyphers

Deceptive Checkboxes Should Not Open Our Checkbooks

Last week, the New York Times highlighted the Trump 2020 campaign’s use of deceptive web design to trick supporters into donating far more money than they had intended. The campaign’s digital donation portal hid an unassuming but unfair method for siphoning funds: a pre-checked box to “make a monthly recurring donation.” This caused weekly withdrawals from supporters’ bank accounts, draining some of them entirely.

The checkbox in question, from the New York Times April 3rd piece.

A pre-checked box to donate more than you intended is just one example of a “dark pattern”—a term coined by user experience (UX) designer Harry Brignull to describe tricks used in websites and apps that make you do things you didn’t mean to do, such as buying a service. Unfortunately, dark patterns are widespread. Moreover, the pre-checked box is a particularly common way to subvert our right to consent to serious decisions, or to withhold our consent. This ruse dupes us into “agreeing” to be signed up for a mailing list, having our data shared with third-party advertisers, or paying recurring donations. Some examples are below.

A screenshot of the November 3rd, 2020 donation form from WinRed, which shows two pre-checked boxes: one for monthly donations, and one for an additional automatic donation of the same amount on a later date.

The National Republican Congressional Committee, which uses the same WinRed donation flow as the Trump campaign, displays two of these pre-checked boxes.

A screenshot of the National Republican Congressional Committee donation site, from the Wayback Machine’s crawl on November 3rd, 2020.

The Democratic Congressional Campaign Committee’s donation site, using ActBlue software, shows a pre-selected option for monthly donations. The option is more prominent, and the language is much clearer about what users should expect from monthly contributions. Even so, users who intend to donate only once must still read carefully.

A screenshot from August 31, 2020 of a pre-selected option for monthly contributions on the Democratic Congressional Campaign Committee donation site.

What’s Wrong with a Dark Pattern Using Pre-Selected Recurring Options? 

Pre-selected options, such as pre-checked boxes, are common and not limited to the political realm. Organizations understandably seek financial stability by asking their donors for regular, recurring contributions. However, pre-selecting a recurring contribution can deprive donors of choice and undermine their trust. At best, this stratagem manipulates a user’s emotions by suggesting they are supposed to give more than once. More maliciously, it preys on the likely chance that a user passively skimming won’t notice a selected option. By contrast, requiring a user to click an option to consent to a recurring contribution puts the user in an active position of decision-making. Defaults matter: it makes a real difference whether monthly giving starts as “yes, count me in” or as “no, donate once.”

So, does a pre-selected option indicate consent? A variety of laws across the globe have aimed to minimize the use of these pre-selected checkboxes, but at present, most U.S. users are protected by no such law. Unfortunately, some U.S. courts have even ruled that pre-selected boxes (or “opt-out” models) do represent express consent. By contrast, Canadian spam laws require a separate box, not pre-checked, for email opt-ins. Likewise, the European Union’s GDPR has banned the use of pre-selected checkboxes for allowing cookies on web pages. But for now, much of the world’s users are at the whims of deceptive product teams when it comes to the use of pre-selected checkboxes like these. 

Are there instances in which it’s okay to use a pre-selected option as a design element? For options that don’t carry much weight beyond what the user expects (that is, options consistent with their expectations of the interaction), a pre-selected option may be appropriate. One example might be a user who clicks a link with language like “become a monthly donor” and lands on a page with a pre-selected monthly contribution option. It also might be appropriate to use a pre-selected option to send a confirmation email of the donation. This is very different from, for example, adding unexpected items to a user’s cart before processing a donation that unexpectedly shows up on their credit card bill later.

How Do We Better Protect Users and Financial Contributors?

Dark patterns are ubiquitous in websites and apps, and aren’t limited to financial contributions or email signups. We must build a new landscape for users.

UX designers, web developers, and product teams must ensure genuine user consent when designing interfaces. A few practices for avoiding dark patterns include:

  • Present opt-in, rather than opt-out, flows for significant decisions, such as whether to share data or to donate monthly (e.g., no pre-selected options for recurring contributions).
  • Avoid manipulative language. Options should tell the user what the interaction will do, without editorializing (e.g. avoid “if you UNCHECK this box, we will have to tell __ you are a DEFECTOR”).
  • Provide explicit notice for how user data will be used.
  • Strive to meet web accessibility practices, such as aiming for plain, readable language (for example, avoiding the use of double-negatives).
  • Only use a pre-selected option for a choice that doesn’t obligate users to do more than they are comfortable with. For example, EFF doesn’t assume all of our donors want to become EFF members: users are given the option to uncheck the “Make me a member” box. Offering this choice allows us to add a donor to our ranks as a member, but doesn’t obligate them to anything. 
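
The opt-in default described above can be sketched in code. This is a minimal, hypothetical illustration (the function and field names are ours, not any real donation platform’s API): the recurring flag starts out false and can only become true through an explicit user action, which is also recorded so the consent is auditable.

```javascript
// Illustrative sketch of an opt-in donation form state.
// All names here are hypothetical.

// A new form always defaults to a one-time donation: no pre-selection.
function createDonationForm(amount) {
  return { amount, recurring: false, consentTimestamp: null };
}

// The recurring flag flips only through this explicit user action,
// and the moment of consent is recorded.
function optInToRecurring(form, timestamp = Date.now()) {
  return { ...form, recurring: true, consentTimestamp: timestamp };
}
```

The key design choice is that there is no code path that produces `recurring: true` without a user-initiated call, mirroring the “yes, count me in” action a checkbox click should represent.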

We also need policy reform. As we’ve written, we support user-empowering laws to protect against deceptive practices by companies. For example, EFF supported regulations to protect users against dark patterns, issued under the California Consumer Privacy Act. 

Shirin Mori

EFF Challenges Surreptitious Collection of DNA at Iowa Supreme Court

2 months ago

Last week, EFF, along with the ACLU and the ACLU of Iowa, filed an amicus brief in the Iowa Supreme Court challenging the surreptitious collection of DNA without a warrant. We argued this practice violates the Fourth Amendment and Article I, Section 8 of the Iowa state constitution. This is the first case to reach a state supreme court involving such a challenge after results of a genetic genealogy database search linked the defendant to a crime.

The case, State v. Burns, involves charges from a murder that occurred in 1979. The police had no leads in the case for years, even after modern technology allowed them to extract DNA from blood left at the crime scene and test it against DNA collected in government-run arrestee and offender DNA databases like CODIS.

In 2018, the police began working with a company called Parabon Nanolabs, which used the forensic DNA profile to predict the physical appearance of the alleged perpetrator and to generate an image that the Cedar Rapids Police Department released to the public. That image did not produce any new leads, so the police worked with Parabon to upload the DNA profile to a consumer genetic genealogy database called GEDMatch, which we’ve written about in the past. Through GEDMatch, the police linked the crime scene DNA to three brothers, including the defendant in this case, Jerry Burns. Police then surveilled Mr. Burns until they could collect something containing his DNA. The police found a straw he used and left behind at a restaurant, extracted a profile from DNA left on the straw, matched it to DNA found at the crime scene, and arrested Mr. Burns.

The State claims that the Fourth Amendment doesn’t apply in this context because Mr. Burns abandoned his privacy interest in his DNA when he left it behind on the straw. However, we argue the Fourth Amendment creates a high bar against collecting DNA from free people, even if it’s found on items the person has voluntarily discarded. In 1978, the Supreme Court ruled that the Fourth Amendment does not protect the contents of people’s trash left for pickup because they have “abandoned” an expectation of privacy in the trash. But unlike a gum wrapper or a cigarette butt or the straw in this case, our DNA contains so much private information that the data contained in a DNA sample can never be “abandoned.” Even if police don’t need a warrant to rummage through your trash (and many states disagree on this point), police should need a warrant to rummage through your DNA.

A DNA sample—whether taken directly from a person or extracted from items that person leaves behind—contains a person’s entire genetic makeup. It can reveal intensely sensitive information about us, including our propensities for certain medical conditions, our ancestry, and our biological familial relationships. Some researchers have also claimed that human behaviors such as aggression and addiction can be explained, at least in part, by genetics. And private companies have claimed they can use our DNA for everything from identifying our eye, hair, and skin colors and the shapes of our faces; to determining whether we are lactose intolerant, prefer sweet or salty foods, and can sleep deeply; to discovering the likely migration patterns of our ancestors and the identities of family members we never even knew we had.

Despite the uniquely revealing nature of DNA, we cannot avoid leaving behind the whole of our genetic code wherever we go. Humans are constantly shedding genetic material; in less time than it takes to order a coffee, most humans lose nearly enough skin cells to cover an entire football field. The only way to avoid depositing our DNA on nearly every item we touch out in the world would be to never leave one’s home. For these reasons, as we argue in our brief, we can never abandon a privacy interest in our DNA.

The Burns case also raises thorny Fourth Amendment issues related to law enforcement use of consumer genetic genealogy databases. We’ve written about these issues before, and, unfortunately, the process of searching genetic genealogy databases in criminal investigations has become quite common. Estimates are that genetic genealogy sites were used in around 200 cases in 2018 alone. This is because more than 26 million people have uploaded their genetic data to sites like GEDmatch to try to identify biological relatives, build a family tree, and learn about their health. These sites are available to anyone and are relatively easy to use. And many sites, including GEDMatch, lack any technical restrictions that would keep the police out. As a result, law enforcement officers have been capitalizing on all this freely available data in criminal investigations across the country. And in none of the cases we’ve reviewed, including Burns, have officers ever sought a warrant or any legal process at all before searching the private database.

Police access to this data creates immeasurable threats to our privacy. It also puts us at much greater risk of being accused of crimes we didn’t commit. For example, in 2015, a similar forensic genetic genealogy search led police to suspect an innocent man. Even without genetic genealogy searches, DNA matches may lead officers to suspect—and jail—the wrong person, as happened in a California case in 2012. That can happen because our DNA may be transferred from one location to another, possibly ending up at the scene of a crime, even if we were never there.

Even if you yourself never upload your genetic data to a genetic genealogy website, your privacy could be impacted by a distant family member’s choice to do so. Although GEDmatch’s 1.3 million users only encompass about 0.5% of the U.S. adult population, research shows that their data alone could be used to identify 60% of white Americans. And once GEDmatch’s users encompass just 2% of the U.S. population, 90% of white Americans will be identifiable. Other research has shown that adversaries may be able to compromise these databases to put many users at risk of having their genotypes revealed, either at key positions or at many sites genome-wide. 

This is why this case and others like it are so important—and why we need strong rules against police access to genetic genealogy databases. Our DNA can reveal so much about us that our genetic privacy must be protected at all costs. 

We hope the Iowa Supreme Court and other courts addressing this issue will recognize that the Fourth Amendment protects us from surreptitious collection and searches of our DNA.

Related Cases: People v. Buza
Jennifer Lynch

Am I FLoCed? A New Site to Test Google's Invasive Experiment

2 months ago

Today we’re launching Am I FLoCed, a new site that will tell you whether your Chrome browser has been turned into a guinea pig for Federated Learning of Cohorts, or FLoC, Google’s latest targeted advertising experiment. If you are a subject, we will tell you how your browser is describing you to every website you visit. Am I FLoCed is part of an effort to bring to light the invasive practices of the adtech industry—Google included—in the hope that we can create a better internet for all, one where our privacy rights are respected regardless of how profitable violating them may be to tech companies.

FLoC is a terrible idea that should not be implemented. Google’s experimentation with FLoC is also deeply flawed. We hope that this site raises awareness about where the future of Chrome seems to be heading, and why it shouldn’t go there.

FLoC takes most of your browsing history in Chrome and analyzes it to assign you to a category, or “cohort.” This cohort ID is then sent to any website you visit that requests it, in essence telling them what kind of person Google thinks you are. For the time being, this ID changes every week, leaking new information about you as your browsing habits change. You can read a more detailed explanation here.

Because this ID changes, you will want to visit often to see those changes.
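
To make concrete what “sent to any website that requests it” means: during the origin trial, Chrome exposed the cohort through a `document.interestCohort()` method that any page’s script could call. The sketch below is illustrative, not a definitive implementation; the helper name is ours, and the example values in the comment are made up.

```javascript
// Hypothetical sketch of how a site could read a visitor's FLoC cohort
// during Chrome's origin trial. document.interestCohort() was the
// trial API; the helper name and return shape handling here are ours.
async function getFlocCohort(doc) {
  // Feature-detect: only trial builds of Chrome expose the method.
  if (typeof doc.interestCohort !== "function") {
    return null; // browser is not in the experiment
  }
  try {
    const { id, version } = await doc.interestCohort();
    return { id, version }; // e.g. an id string plus a version label
  } catch {
    return null; // the promise rejects when FLoC is blocked or disabled
  }
}
```

A page would call `getFlocCohort(document)`; note that no permission prompt is involved, which is why a site like Am I FLoCed can show you what your browser is broadcasting.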

Why is this happening?

Users have been demanding more and more respect from big business for their online privacy, realizing that the false claim “privacy is dead” was nothing but a marketing campaign. The biggest players that stand to profit from privacy invasion are those in the behavioral targeting industry.

Some companies and organizations have listened to users’ requests and improved some of their practices, giving more security and privacy assurances to their users. But most have not. This entire industry sells its intricate knowledge about people in order to target them for advertisement, most notably Google and Facebook, but also many other data brokers with names you’ve probably never heard before.

The most common way these companies identify you is by using “cookies” to track every movement you make on the internet. This relies on a tracking company convincing as many sites as possible to install their tracking cookie. But with tracking protections being deployed via browser extensions like Privacy Badger, or in browsers like Firefox and Safari, this has become more difficult. Moreover, stronger privacy laws are coming. Thus, many in the adtech industry have realized that the end is near for third-party tracking cookies.

While some cling to the old ways, others are trying to find new ways to keep tracking users, monetizing their personal information, without third-party cookies. These companies will use the word “privacy” in their marketing, and try to convince users, policy makers, and regulators that their solutions are better for users and the market. Or they will claim the other solutions are worse, creating a false impression that users have to choose between “bad” and ”worse.”

But our digital future should not be one where an industry keeps profiting from privacy violations, but one where our rights are respected.

The Google Proposal

Google announced the launch of its FLoC test with a recent blogpost. It contains lots of mental gymnastics to twist this terrible idea into the semblance of a privacy-friendly endeavour.

Perhaps most disturbing is the notion that FLoC’s cohorts are not based on who you are as an individual. The reality is FLoC uses your detailed and unique browsing history to assign you to a cohort. The number of people in a cohort is tuned to remain useful to advertisers, and according to some of Google’s own research, FLoC-based targeting is about 95% as effective as cookie-based tracking—meaning cohorts are at best a marginal privacy improvement over cookies.

FLoC might not share your detailed browsing history. But we reject the notion that “because it’s on your device, it’s private.” If data is used to infer something about you—who you are and how you can be targeted—and that inference is then shared with other sites and advertisers, then it’s not private at all.

And let’s not forget that Google Sync already shares your detailed Chrome browsing history with Google when it is enabled.

The sole intent of FLoC is to preserve the status quo of surveillance capitalism under a vague appearance of user choice. It further cements the internet’s dependence on “Google’s benevolence”: a misguided belief that Google is our friendly corporate overlord, that it knows better, and that we should sign away our rights in exchange for the crumbs the internet needs to survive.

Google has also made unsubstantiated statements like “FLoC allows you to remain anonymous as you browse across websites and also improves privacy by allowing publishers to present relevant ads to large groups (called cohorts),” but as far as we can tell, FLoC does not make you anonymous in any way. Only a few browsers, like Tor Browser, can credibly make such a difficult claim. With FLoC, your browser is still telling sites something about your behavior. Google cannot equate grouping users into advertising cohorts with “anonymity.”

This experiment is irresponsible and antagonistic to users. FLoC, with marginal improvements on privacy, is riddled with issues, and yet is planned to be rolled out to millions of users around the world with no proper notification, opt-in consent, or meaningful individual opt-out at launch.

This is not just one more Chrome experiment. This is a fundamental change to the browser and how people are exploited for their data. After all the pushback, concerns, and issues, the fact that Google has chosen to ignore the warnings is telling of where the company stands with regard to our privacy.

Try it!

Andrés Arrieta

What Movie Studios Refuse to Understand About Streaming

2 months ago

The longer we live in the new digital world, the more we are seeing it replicate systemic issues we’ve been fighting for decades. In the case of movie studios, what we’ve seen in the last few years in streaming mirrors what happened in the 1930s and ‘40s, when a small group of movie studios also controlled the theaters that showed their films. And by 1948, the actions of the studios were deemed violations of antitrust law, resulting in a consent decree. The Justice Department ended that decree in 2019 under the theory that the remaining studios could not “reinstate their cartel.” Maybe not in physical movie theaters. But online is another story.

Back in the ‘30s and ‘40s, the problem was that the major film studios—including Warner Bros. and Universal which exist to this day—owned everything related to the movies they made. They had everyone involved on staff under exclusive and restrictive contracts. They owned the intellectual property. They even owned the places that processed the physical film. And, of course, they owned the movie theaters.

In 1948, the studios were forced to sell off their stakes in movie theaters and chains, having lost in the Supreme Court.

The benefits for audiences were pretty clear. The old system had theaters scheduling showings so that they wouldn’t overlap with each other, so that you could not see a movie at the most convenient theater and most convenient time for you. Studios were also forcing theaters to buy their entire slates of movies without seeing them (called “blind buying”), instead of picking, say, the ones of highest quality or interest—the ones that would bring in audiences. And, of course, the larger chains and the theaters owned by the studios would get preferential treatment.

There is a reason the courts stopped this practice. For audiences, separating theaters from studios meant that their local theaters now had a variety of films, were more likely to have the ones they wanted to see, and would be showing them at the most convenient times. So they didn’t have to search listings for some arcane combination of time, location, and movie.

And now it is 2021. If you consume digital media, you may have noticed something… familiar.

The first wave of streaming services—Hulu, Netflix, iTunes, etc.—had a diversity of content from a number of different studios. And for the back catalog, the things that had already aired, services had all of the episodes available at once. Binge-watching was ascendant.

The value of these services to the audience was, like your local theater, convenience. You pay a set price and can pick from a diverse catalog to watch what you wanted, when you wanted, from the comfort of your home. As they did almost 100 years ago, studios suddenly realized the business opportunity presented in owning every step of the process of making entertainment. It’s just that those steps look different today than they did back then.

Instead of owning the film processing labs, they now own the infrastructure in the form of internet service providers (ISPs). AT&T owns Warner Brothers and HBO. Comcast owns Universal and NBC. And so on.

Instead of having creative people on restrictive contracts they… well, that they still do. Netflix regularly makes headlines for signing big names to exclusive deals. And studios buy up other studios and properties to lock down the rights to popular media. Disney in particular has bought up Star Wars and Marvel in a bid to put as much “intellectual property” under its exclusive control as possible, owning not just movie rights but every revenue stream a story can generate. As the saying goes, no one does to Disney what Disney did to the Brothers Grimm.

Instead of owning theater chains, studios have all launched their own streaming services. And as with theaters, a lot of the convenience has been stripped. Studios saw streaming making money and did not want to let others reap the rewards, so they’ve been pulling their works and putting them under the exclusive umbrella of their own streaming services.

Rather than having a huge catalog of diverse studio material, which is what made Netflix popular to begin with, convenience has been replaced with exclusivity. Of course, much like in the old days, the problem is that people don’t want everything a single studio offers. They want certain things. But a subscription fee isn’t for just what you want, it’s for everything. Much like the old theater chains, we are now blind buying the entire slate of whatever Disney, HBO, Amazon, etc. are offering.

And, of course, they can’t take the chance that we’ll pay the monthly fee once, watch what we’re looking for, and cancel. So a lot of these exclusives are no longer released in binge-watching mode, but staggered to keep us paying every month for new installments. Which is how the new world of streaming is becoming a hybrid of the old world of cable TV and movie theaters.

To watch something online legally these days requires a frustrating search across many, many services. The hope is that the thing you want is on one of the services you already pay for, not on a new one. Sometimes, you’re looking for something that was on a service you paid for but has since been locked into another one. Instead of building services that provide the convenience audiences want—with competition driving services to make better and better products for audiences—the value is now in making something rare. Something that can only be found on your service. And even if it is not good, it is at least tied to a popular franchise or some other thing people do not want to be left out of.

Instead of building better services—faster internet access, better interfaces, better content—the model is all based on exclusive control. Many Americans don’t have a choice in their broadband provider, a monopoly ISPs jealously guard rather than building a service so good we’d pick it on purpose. Instead of choosing the streaming service with the best price or library or interface, we have to pay all of them. Our old favorites are locked down, so we can’t access everything in one place anymore. New things set in our favorite worlds are likewise locked down to certain services, and sometimes even to certain devices. And creators we like? Also locked into exclusive contracts at certain services.

And the thing is, we know from history that this isn’t what consumers want. We know from the ‘30s and ‘40s that this kind of vertical integration is not good for creativity or for audiences. We know from the recent past that convenient, reasonably priced, and legal internet services are what users want and will use. So we very much know that this system is untenable and anticompetitive, that it encourages copyright infringement, and that it drives the growth of reactionary, draconian copyright laws that hurt innovators and independent creators. We also know what works.

Antitrust enforcers back in the ‘30s and ‘40s recognized that a system like this should not exist and put a stop to it. Breaking the studios’ cartel in the ‘40s led to more independent movie theaters, more independent studios, and more creativity in movies in general. So why have we let this system regrow itself online?

Katharine Trendacosta

Organizations Call on President Biden to Rescind President Trump’s Executive Order that Punished Online Social Media for Fact-Checking

2 months ago

President Joe Biden should rescind a dangerous and unconstitutional Executive Order issued by President Trump that continues to threaten internet users’ ability to obtain accurate and truthful information online, six organizations wrote in a letter sent to the president on Wednesday.

The organizations, Rock The Vote, Voto Latino, Common Cause, Free Press, Decoding Democracy, and the Center for Democracy & Technology, pressed Biden to remove his predecessor’s “Executive Order on Preventing Online Censorship” because “it is a drastic assault on free speech designed to punish online platforms that fact-checked President Trump.”

The organizations filed lawsuits to strike down the Executive Order last year, with Rock The Vote, Voto Latino, Common Cause, Free Press, and Decoding Democracy’s challenge currently on appeal in the U.S. Court of Appeals for the Ninth Circuit. The Center for Democracy & Technology’s appeal is currently pending in the U.S. Court of Appeals for the D.C. Circuit. (Cooley LLP, Protect Democracy, and EFF represent the plaintiffs in Rock The Vote v. Trump.)

As the letter explains, Trump issued the unconstitutional Executive Order in retaliation for Twitter fact-checking May 2020 tweets spreading false information about mail-in voting. The Executive Order, issued two days later, sought to undermine a key law protecting internet users’ speech, 47 U.S.C. § 230 (“Section 230”), and to punish online platforms, including by directing federal agencies to review and potentially stop advertising on social media and by kickstarting a federal rulemaking to re-interpret Section 230. From the letter:

His actions made clear that the full force of the federal government would be brought down on those whose speech he did not agree with, in an effort to coerce adoption of his own views and to prevent the dissemination of accurate information about voting.

As the letter notes, despite President Biden eliminating other Executive Orders issued by Trump, the order targeting online services remains active. Biden’s inaction is troubling because the Executive Order “threatens democracy and the voting rights of people who have been underrepresented for generations,” the letter states.

“Thus, your Administration is in the untenable position of defending an unconstitutional order that was issued with the clear purpose of chilling accurate information about the 2020 election and undermining public trust in the results,” the letter continues.

The letter concludes:

Eliminating this egregiously unconstitutional hold-over from the prior Administration vindicates the Constitution’s protections for both online services and the users who rely on them for accurate, truthful information about voting rights.

Related Cases: Rock the Vote v. Trump
Aaron Mackey

India’s Strict Rules For Online Intermediaries Undermine Freedom of Expression

2 months ago

India has introduced draconian changes to its rules for online intermediaries, tightening government control over the information ecosystem and what can be said online. It has created rules that seek to restrict social media companies and other content hosts from coming up with their own moderation policies, including those framed to comply with international human rights obligations. The new “Intermediary Guidelines and Digital Media Ethics Code” (2021 Rules) have already been used in an attempt to censor speech about the government. Within days of being published, the rules were used by a state in which the ruling Bharatiya Janata Party is in power to issue a legal notice to an online news platform that has been critical of the government. The legal notice was withdrawn almost immediately after public outcry, but served as a warning of how the rules can be used.

The 2021 Rules, ostensibly created to combat misinformation and illegal content, substantially revise India’s intermediary liability scheme. They were notified as rules under the Information Technology Act 2000, replacing the 2011 Intermediary Rules.

New Categories of Intermediaries

The 2021 Rules create two new subsets of intermediaries: “social media intermediaries” and “significant social media intermediaries,” the latter of which are subject to more onerous regulations. The due diligence requirements for these companies include proactive speech monitoring, compliance personnel who reside in India, and the ability to trace and identify the originator of a post or message.

“Social media intermediaries” are defined broadly, as entities which primarily or solely “enable online interaction between two or more users and allow them to create, upload, share, disseminate, modify or access information using its services.” Obvious examples include Facebook, Twitter, and YouTube, but the definition could also include search engines and cloud service providers, which are not social media in a strict sense.

“Significant social media intermediaries” are those with registered users in India above a 5 million threshold. But the 2021 Rules also allow the government to deem any “intermediary” - including telecom and internet service providers, web-hosting services, and payment gateways - a ‘significant’ social media intermediary if it creates a “material risk of harm” to the sovereignty, integrity, and security of the state, friendly relations with Foreign States, or public order. For example, a private messaging app can be deemed “significant” if the government decides that the app allows the “transmission of information” in a way that could create a “material risk of harm.” The power to deem ordinary intermediaries as significant also encompasses ‘parts’ of services, which are “in the nature of an intermediary” - like Microsoft Teams and other messaging applications.

New  ‘Due Diligence’ Obligations

The 2021 Rules, like their predecessor 2011 Rules, enact a conditional immunity standard. They lay out an expanded list of due diligence obligations that intermediaries must comply with in order to avoid being held liable for content hosted on their platforms.

Intermediaries are required to incorporate content rules—designed by the Indian government itself—into their policies, terms of service, and user agreements. The 2011 Rules contained eight categories of speech that intermediaries had to notify their users not to “host, display, upload, modify, publish, transmit, store, update or share.” These include content that violates Indian law, but also many vague categories that could lead to censorship of legitimate user speech. By complying with government-imposed restrictions, companies cannot live up to their responsibility to respect international human rights, in particular freedom of expression, in their daily business conduct.

Strict Turnaround for Content Removal

The 2021 Rules require all intermediaries to remove restricted content within 36 hours of obtaining actual knowledge of its existence, taken to mean a court order or notification from a government agency. The law gives non-judicial government bodies great authority to compel intermediaries to take down restricted content. Platforms that disagree with or challenge government orders face penal consequences under the Information Technology Act and criminal law if they fail to comply.

The Rules impose strict turnaround timelines for responding to government orders and requests for data. Intermediaries must provide information within their control or possession, or ‘assistance,’ within 72 hours to government agencies for a broad range of purposes: verification of identity, or the prevention, detection, investigation, or prosecution of offenses or for cybersecurity incidents. In addition, intermediaries are required to remove or disable, within 24 hours of receiving a complaint, non-consensual sexually explicit material or material in the “nature of impersonation in an electronic form, including artificially morphed images of such individuals.” The deadlines do not provide sufficient time to assess complaints or government orders. To meet them, platforms will be compelled to use automated filter systems to identify and remove content. These error-prone systems can filter out legitimate speech and are a threat to users' rights to free speech and expression.

Failure to comply with these rules could lead to severe penalties, such as a jail term of up to seven years. In the past, the Indian government has threatened company executives with prosecution - as, for instance, when they served a legal notice on Twitter, asking the company to explain why recent territorial changes in the state of Kashmir were not reflected accurately on the platform’s services. The notice threatened to block Twitter or imprison its executives if a “satisfactory” explanation was not furnished. Similarly, the government threatened Twitter executives with imprisonment when they reinstated content about farmer protests that the government had ordered them to take down.

Additional Obligations for Significant Social Media Intermediaries

On a positive note, the Rules require significant social media intermediaries to have transparency and due process rules in place for content takedowns. Companies must notify users when their content is removed, explain why it was taken down, and provide an appeals process.

On the other hand, the 2021 Rules compel providers to appoint an Indian resident “Chief Compliance Officer,” who will be held personally liable in any proceedings relating to non-compliance with the rules, and a “Resident Grievance Officer” responsible for responding to users’ complaints and government and court orders. Companies must also appoint a resident employee to serve as a contact person for coordination with law enforcement agencies. With more executives residing in India, where they could face prosecution, intermediaries may find it difficult to challenge or resist arbitrary and disproportionate government orders.

Proactive Monitoring

Significant social media intermediaries are called on to “endeavour to deploy technology-based measures,” including automated tools or other mechanisms, to proactively identify certain types of content. This includes information depicting rape or child sexual abuse and content that has previously been removed for violating rules. The stringent provisions of the 2021 Rules already encourage over-removal of content; requiring intermediaries to deploy automated filters will likely result in even more takedowns.

Encryption and Traceability Requirements

The Indian government has been wrangling with messaging app companies—most famously WhatsApp—for several years now, demanding “traceability” of the originators of forwarded messages. The demand first emerged in the context of a series of mob lynchings in India, triggered by rumors that went viral on WhatsApp. Subsequently, petitions were filed in Indian courts seeking to link social networking accounts with users’ biometric identity (Aadhaar) numbers. Although the court ruled against the proposal, expert opinions supplied by a member of the Prime Minister’s scientific advisory committee suggested technical measures to enable traceability on end-to-end encrypted platforms.

Because of their privacy and security features, some messaging systems don’t learn or record the history of who first created particular content that was then forwarded by others, a state of affairs that the Indian government and others have found objectionable. The 2021 Rules represent a further escalation of this conflict, requiring private messaging intermediaries to “enable the identification of the first originator of the information” upon a court order or a decryption request issued under the 2009 Decryption Rules. (The Decryption Rules allow authorities to request the interception, monitoring, or decryption of any information generated, transmitted, received, or stored in any computer resource.) If the first originator of a message is located outside the territory of India, the private messaging app will be compelled to identify the first originator of that information within India.

The 2021 Rules place various limitations on these court orders; namely, they can only be issued for serious crimes. However, limitations will not solve the core problem with this proposal: a technical mandate for companies to re-engineer or redesign their messaging services to comply with the government's demand to identify the originator of a message.


The 2021 Rules were fast-tracked without a pre-legislative public consultation, in which the government seeks recommendations from stakeholders in a transparent process. They will have profound implications for the privacy and freedom of expression of Indian users. They restrict companies’ discretion in moderating their own platforms and create new possibilities for government surveillance of citizens. These rules threaten the idea of a free and open internet built on a bedrock of international human rights standards.

Katitza Rodriguez

The EU Online Terrorism Regulation: a Bad Deal

2 months ago

On 12 September 2018, the European Commission presented a proposal for a regulation on preventing the dissemination of terrorist content online—dubbed the Terrorism Regulation, or TERREG for short—that contained some alarming ideas. In particular, the proposal included an obligation for platforms to remove potentially terrorist content within one hour, following an order from national competent authorities.

Ideas such as this one have been around for some time already. In 2016, we first wrote about the European Commission’s attempt to create a voluntary agreement for companies to remove certain content (including terrorist expression) within 24 hours, and Germany’s Network Enforcement Act (NetzDG) requires the same. NetzDG has spawned dozens of copycats throughout the world, including in countries like Turkey with far fewer protections for speech and human rights more generally.

Beyond the one-hour removal requirement, the TERREG also defined terrorist content broadly, as “material that incites or advocates committing terrorist offences, promotes the activities of a terrorist group or provides instructions and techniques for committing terrorist offences”.

Furthermore, it introduced a duty of care for all platforms to avoid being misused for the dissemination of terrorist content, including a requirement to take proactive measures to prevent the dissemination of such content. These rules were accompanied by a framework of cooperation and enforcement.

These aspects of the TERREG are particularly concerning, as research we’ve conducted in collaboration with other groups demonstrates that companies routinely make content moderation errors that remove speech that parodies or pushes back against terrorism, or documents human rights violations in countries like Syria that are experiencing war.

TERREG and human rights

TERREG was created without real consultation of free expression and human rights groups and has serious repercussions for online expression. Even worse, the proposal was adopted based on political spin rather than evidence.

Notably, in 2019, the EU Fundamental Rights Agency (FRA)—asked by the European Parliament for an opinion—expressed concern about the regulation. In particular, the FRA noted that the definition of terrorist content had to be modified, as it was too wide and would interfere with freedom of expression rights. Also, “According to the FRA, the proposal does not guarantee the involvement by the judiciary and the Member States' obligation to protect fundamental rights online has to be strengthened.”

Together with many other civil society groups, we voiced our deep concern over the proposed legislation and stressed that the new rules would pose serious threats to the fundamental rights of privacy and freedom of expression.

The message to EU policymakers was clear:

  • Abolish the one-hour time frame for content removal, which is too tight for platforms and will lead to over-removal of content;
  • Respect the principles of territoriality and ensure access to justice in cases of cross-border takedowns by ensuring that only the Member State in which the hosting service provider has its legal establishment can issue removal orders;
  • Ensure due process and clarify that the legality of content be determined by a court or independent administrative authority;
  • Don’t impose the use of upload or re-upload filters (automated content recognition technologies) to services under the scope of the Regulation;
  • Exempt certain protected forms of expression, such as educational, artistic, journalistic, and research materials.

However, while responsible committees of the EU Parliament showed willingness to take the concerns of civil society groups into account, things looked grimmer in the Council, where government ministers from each EU country meet to discuss and adopt laws. During the closed-door negotiations between the EU institutions to strike a deal, different versions of TERREG were discussed, culminating in further letters from civil society groups urging lawmakers to ensure key safeguards for freedom of expression and the rule of law.

Fortunately, civil society groups and fundamental rights-friendly MEPs in the Parliament were able to achieve some of their goals. For example, the agreement reached by the EU institutions includes exceptions for journalistic, artistic, and educational purposes. Another major improvement concerns the definition of terrorist content (now matching the narrower definition of the EU Directive on combating terrorism) and the option for host providers to invoke technical and operational reasons for not complying with the strict one-hour removal obligation. And most importantly, the deal states that authorities cannot impose upload filters on platforms.

The Deal Is Still Not Good Enough

While civil society intervention has resulted in a series of significant improvements to the law, there is more work to be done. The proposed regulation still gives broad powers to national authorities, without judicial oversight, to censor online content that they deem to be “terrorism” anywhere in the EU, within a one-hour timeframe, and to incentivize companies to delete more content of their own volition. It further encourages the use of automated tools, without any guarantee of human oversight.

Now, a broad coalition of civil society organizations is voicing its concerns to the Parliament, which must agree to the deal for it to become law. EFF and others suggest that Members of the European Parliament should vote against the adoption of the proposal. We encourage our followers to raise awareness about the implications of TERREG and reach out to their national members of the EU Parliament.

Jillian C. York

Victory for Fair Use: The Supreme Court Reverses the Federal Circuit in Oracle v. Google

2 months 1 week ago

In a win for innovation, the U.S. Supreme Court has held that Google’s use of certain Java Application Programming Interfaces (APIs) is a lawful fair use. In doing so, the Court reversed the previous rulings by the Federal Circuit and recognized that copyright only promotes innovation and creativity when it provides breathing room for those who are building on what has come before.

This decision gives more legal certainty to software developers’ common practice of using, re-using, and re-implementing software interfaces written by others, a custom that underlies most of the internet and personal computing technologies we use every day.

To briefly summarize over ten years of litigation: Oracle claims a copyright on the Java APIs—essentially names and formats for calling computer functions—and claims that Google infringed that copyright by using (reimplementing) certain Java APIs in the Android OS. When it created Android, Google wrote its own set of basic functions similar to Java (its own implementing code). But in order to allow developers to write their own programs for Android, Google used certain specifications of the Java APIs (sometimes called the “declaring code”).
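To make the declaring/implementing distinction concrete, here is a minimal, purely illustrative Java sketch. The method name `max` echoes the `java.lang.Math.max` example discussed in the litigation; the class name `MathReimpl` and the body are our own invention, not code from either party:

```java
// A hypothetical re-implementation in the style at issue in the case.
public final class MathReimpl {
    // "Declaring code": the method's name, parameter list, and place in the
    // API's organization match the familiar Java API, so programmers'
    // existing knowledge (and code written against that API shape) still fits.
    public static int max(int a, int b) {
        // "Implementing code": an independently written body that performs
        // the task the declaration promises.
        return (a >= b) ? a : b;
    }

    public static void main(String[] args) {
        System.out.println(max(7, 3)); // prints 7
    }
}
```

In the litigation's terms, Google wrote its own method bodies (implementing code) but reused declarations like the one above so that developers could carry their Java knowledge over to Android.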

APIs provide a common language that lets programs talk to each other. They also let programmers operate with a familiar interface, even on a competitive platform. It would strike at the heart of innovation and collaboration to declare them copyrightable.

EFF filed numerous amicus briefs in this case explaining why the APIs should not be copyrightable and why, in any event, it is not infringement to use them in the way Google did. As we’ve explained before, the two Federal Circuit opinions are a disaster for innovation in computer software. Its first decision—that APIs are entitled to copyright protection—ran contrary to the views of most other courts and the long-held expectations of computer scientists. Indeed, excluding APIs from copyright protection was essential to the development of modern computers and the internet.

Then the second decision made things worse. The Federal Circuit's first opinion had at least held that a jury should decide whether Google’s use of the Java APIs was fair, and in fact a jury did just that. But Oracle appealed again, and in 2018 the same three Federal Circuit judges reversed the jury's verdict and held that Google had not engaged in fair use as a matter of law.

Fortunately, the Supreme Court agreed to review the case. In a 6-2 decision, Justice Breyer explained why Google’s use of the Java APIs was a fair use as a matter of law. First, the Court discussed some basic principles of the fair use doctrine, writing that fair use “permits courts to avoid rigid application of the copyright statute when, on occasion, it would stifle the very creativity which that law is designed to foster.”

Furthermore, the Court stated:

Fair use “can play an important role in determining the lawful scope of a computer program copyright . . . It can help to distinguish among technologies. It can distinguish between expressive and functional features of computer code where those features are mixed. It can focus on the legitimate need to provide incentives to produce copyrighted material while examining the extent to which yet further protection creates unrelated or illegitimate harms in other markets or to the development of other products.”

In doing so, the decision underlined the real purpose of copyright: to incentivize innovation and creativity. When copyright does the opposite, fair use provides an important safety valve.

Justice Breyer then turned to the specific fair use statutory factors. Appropriately for a functional software copyright case, he first discussed the nature of the copyrighted work. The Java APIs are a “user interface” that allow users (here the developers of Android applications) to “manipulate and control” task-performing computer programs. The Court observed that the declaring code of the Java APIs differs from other kinds of copyrightable computer code—it’s “inextricably bound together” with uncopyrightable features, such as a system of computer tasks and their organization and the use of specific programming commands (the Java “method calls”). As the Court noted:

Unlike many other programs, its value in significant part derives from the value that those who do not hold copyrights, namely, computer programmers, invest of their own time and effort to learn the API’s system. And unlike many other programs, its value lies in its efforts to encourage programmers to learn and to use that system so that they will use (and continue to use) Sun-related implementing programs that Google did not copy.

Thus, since the declaring code is “further than are most computer programs (such as the implementing code) from the core of copyright,” this factor favored fair use.

Justice Breyer then discussed the purpose and character of the use. Here, the opinion shed some important light on when a use is “transformative” in the context of functional aspects of computer software, creating something new rather than simply taking the place of the original. Although Google copied parts of the Java API “precisely,” Google did so to create products fulfilling new purposes and to offer programmers “a highly creative and innovative tool” for smartphone development. Such use “was consistent with that creative ‘progress’ that is the basic constitutional objective of copyright itself.”

The Court discussed “the numerous ways in which reimplementing an interface can further the development of computer programs,” such as allowing different programs to speak to each other and letting programmers continue to use their acquired skills. The jury also heard that reuse of APIs is common industry practice. Thus, the opinion concluded that the “purpose and character” of Google’s copying was transformative, so the first factor favored fair use.

Next, the Court considered the third fair use factor, the amount and substantiality of the portion used. As a factual matter in this case, the 11,500 lines of declaring code that Google used were less than one percent of the total Java SE program. And even the declaring code that Google used was to permit programmers to utilize their knowledge and experience working with the Java APIs to write new programs for Android smartphones. Since the amount of copying was “tethered” to a valid and transformative purpose, the “substantiality” factor favored fair use.

Finally, several reasons led Justice Breyer to conclude that the fourth factor, market effects, favored Google. Independent of Android’s introduction in the marketplace, Sun didn’t have the ability to build a viable smartphone. And any sources of Sun’s lost revenue were a result of the investment by third parties (programmers) in learning and using Java. Thus, “given programmers’ investment in learning the Sun Java API, to allow enforcement of Oracle’s copyright here would risk harm to the public. Given the costs and difficulties of producing alternative APIs with similar appeal to programmers, allowing enforcement here would make of the Sun Java API’s declaring code a lock limiting the future creativity of new programs.” This “lock” would interfere with copyright’s basic objectives.

The Court concluded that “where Google reimplemented a user interface, taking only what was needed to allow users to put their accrued talents to work in a new and transformative program, Google’s copying of the Sun Java API was a fair use of that material as a matter of law.”

The Supreme Court left for another day the issue of whether functional aspects of computer software are copyrightable in the first place. Nevertheless, we are pleased that the Court recognized the overall importance of fair use in software cases, and the public interest in allowing programmers, developers, and other users to continue to use their acquired knowledge and experience with software interfaces in subsequent platforms.

Related Cases: Oracle v. Google
Michael Barclay

553,000,000 Reasons Not to Let Facebook Make Decisions About Your Privacy

2 months 1 week ago

Another day, another horrific Facebook privacy scandal. We know what comes next: Facebook will argue that losing a lot of our data means bad third-party actors are the real problem, and that we should trust Facebook to make more decisions about our data in order to protect against them. If history is any indication, that’ll work. But if we finally wise up, we’ll respond to this latest crisis with serious action: passing America’s long-overdue federal privacy law (with a private right of action) and forcing interoperability on Facebook so that its user-hostages can escape its walled garden.

Facebook created this problem, but that doesn’t make the company qualified to fix it, nor does it mean we should trust them to do so. 

In January 2021, Motherboard reported on a bot that was selling records from a 500 million-plus person trove of Facebook data, offering phone numbers and other personal information. Facebook said the data had been scraped by exploiting a bug that was present as early as 2016, and which the company claimed to have patched in 2019. Last week, a dataset containing 553 million Facebook users’ data—including phone numbers, full names, locations, email addresses, and biographical information—was published for free online. (It appears this is the same dataset Motherboard reported on in January.) More than half a billion current and former Facebook users are now at high risk of various kinds of fraud.

While this breach is especially ghastly, it’s also just another scandal for Facebook, a company that has spent well over a decade pursuing deceptive and anticompetitive tactics to amass largely nonconsensual dossiers on its 2.6 billion users, as well as on billions of people who have never had a Facebook, Instagram, or WhatsApp account at all.

Based on past experience, Facebook’s next move is all but inevitable: after expressing regret over this irreversible data breach, the company will double down on the tactics that lock users into its walled garden, all in the name of defending their privacy. That’s exactly what the company did during the Cambridge Analytica fiasco, when it used the pretense of protecting users from dangerous third parties to lock out competitors, including those who use Facebook’s APIs to help users part ways with the service without losing touch with their friends, families, communities, and professional networks.

According to Facebook, the data in this half-billion-person breach was harvested thanks to a bug in its code. We get that. Bugs happen. That’s why we’re totally unapologetic about defending the rights of security researchers and other bug-hunters who help discover and fix those bugs. The problem isn’t that a Facebook programmer made a mistake: the problem is that this mistake was so consequential.

Facebook doesn’t need all this data to offer its users a social networking experience: it needs that data so it can market itself to advertisers, who paid the company $84.1 billion in 2020. It warehoused that data for its own benefit, in full knowledge that bugs happen, and that a bug could expose all of that data, permanently. 

Given all that, why do users stay on Facebook? For many, it’s a hostage situation: their friends, families, communities, and professional networks are on Facebook, so that’s where they have to be. Meanwhile, those friends, family members, communities, and professional networks are stuck on Facebook because their friends are there, too. Deleting Facebook comes at a very high cost.

It doesn’t have to be this way. Historically, new online services—including, at one time, Facebook—have smashed big companies’ walled gardens, allowing those former user-hostages to escape from dominant services but still exchange messages with the communities they left behind, using techniques like scraping, bots, and other honorable tools of reverse-engineering freedom fighters. 

Facebook has gone to extreme lengths to keep this from ever happening to its services. Not only has it sued rivals who gave its users the ability to communicate with their Facebook friends without subjecting themselves to Facebook’s surveillance, the company also bought out successful upstart rivals specifically because it knew it was losing users to them. It’s a winning combination: use the law to prevent rivals from giving users more control over their privacy, use the monopoly rents those locked-in users generate to buy out anyone who tries to compete with you.

Those 553,000,000 users whose lives are now an eternal open book to the whole internet never had a chance. Facebook took them hostage. It harvested their data. It bought out the services they preferred over Facebook. 

And now that 553,000,000 people should be very, very angry at Facebook, we need to watch carefully to make sure that the company doesn’t capitalize on their anger by further increasing its advantage. As governments from the EU to the U.S. to the UK consider proposals to force Facebook to open up to rivals so that users can leave Facebook without shattering their social connections, Facebook will doubtless argue that such a move will make it impossible for Facebook to prevent the next breach of this type.

Facebook is also likely to weaponize this breach in its ongoing war against accountability: namely, against a scrappy group of academics and Facebook users. Ad Observer and Ad Observatory are a pair of projects from NYU’s Online Transparency Project that scrape the ads their volunteers are served by Facebook and place them in a public repository, where scholars, researchers, and journalists can track how badly Facebook is living up to its promise to halt paid political disinformation.

Facebook argues that any scraping—even highly targeted, careful, publicly auditable scraping that holds the company to account—is an invitation to indiscriminate mass-scraping of the sort that compromised the half-billion-plus users in the current breach. Instead of scraping its ads, the company says that its critics should rely on a repository that Facebook itself provides, and trust that the company will voluntarily reveal any breaches of its own policies.

From Facebook’s point of view, a half-billion person breach is a half-billion excuses not to open its walled garden or permit accountability research into its policies. In fact, the worse the breach, the more latitude Facebook will argue it should get: “If this is what happens when we’re not being forced to allow competitors and critics to interoperate with our system, imagine what will happen if these digital trustbusters get their way!”

Don’t be fooled. Privacy does not come from monopoly. No one came down off a mountain with two stone tablets, intoning “Thou must gather and retain as much user data as is technologically feasible!” The decision to gobble up all this data and keep it around forever has very little to do with making Facebook a nice place to chat with your friends and everything to do with maximizing the company’s profits. 

Facebook’s data breach problems are the inevitable result of monopoly, in particular the knowledge that it can heap endless abuses on its users and retain them. Even if they quit Facebook, they’re going to end up on acquired Facebook subsidiaries like Instagram or WhatsApp, and even if they don’t, Facebook will still get to maintain its dossiers on their digital lives.

Facebook’s breaches are proof that we shouldn’t trust Facebook—not that we should trust it more. Creating a problem in no way qualifies you to solve that problem. As we argued in our January white paper, Privacy Without Monopoly: Data Protection and Interoperability, the right way to protect users is with a federal privacy law with a private right of action.

Right now, Facebook’s users have to rely on Facebook to safeguard their interests. That doesn’t just mean crossing their fingers and hoping Facebook won’t make another half-billion-user blunder—it also means hoping that Facebook won’t intentionally disclose their information to a third party as part of its normal advertising activities. 

Facebook is not qualified to decide what the limits on its own data-processing should be. Those limits should come from democratically accountable legislatures, not autocratic billionaire CEOs. America is sorely lacking a federal privacy law, particularly one that empowers internet users to sue companies that violate their privacy. A privacy law with a private right of action would mean that you wouldn’t be hostage to the self-interested privacy decisions of vast corporations, and it would mean that when they did you dirty, you could get justice on your own, without having to convince a District Attorney or Attorney General to go to bat for you.

A federal privacy law with a private right of action would open a vast possible universe of new interoperable services that plugged into companies like Facebook, allowing users to leave without cancelling their lives; these new services would have to play by the federal privacy rules, too.

That’s not what we’re going to hear from Facebook, though: in Facebookland, the answer to their abuse of our trust is to give them more of our trust; the answer to the existential crisis of their massive scale is to make them even bigger. Facebook created this problem, and they are absolutely incapable of solving it.

Cory Doctorow

First Circuit Upholds First Amendment Right to Secretly Audio Record the Police

2 months 1 week ago

EFF applauds the U.S. Court of Appeals for the First Circuit for holding that the First Amendment protects individuals when they secretly audio record on-duty police officers. EFF filed an amicus brief in the case, Martin v. Rollins, which was brought by the ACLU of Massachusetts on behalf of two civil rights activists. This is a victory for people within the jurisdiction of the First Circuit (Massachusetts, Maine, New Hampshire, Puerto Rico and Rhode Island) who want to record an interaction with police officers without exposing themselves to possible reprisals for visibly recording.

The First Circuit struck down as unconstitutional the Massachusetts anti-eavesdropping (or wiretapping) statute to the extent it prohibits the secret audio recording of police officers performing their official duties in public. The law generally makes it a crime to secretly audio record any conversation without consent, even where participants have no reasonable expectation of privacy, a prohibition that makes the Massachusetts statute unique among the states.

The First Circuit had previously held in Glik v. Cunniffe (2011) that the plaintiff had a First Amendment right to record police officers arresting another man in Boston Common. Glik had used his cell phone to openly record both audio and video of the incident. The court had held that the audio recording did not violate the Massachusetts anti-eavesdropping statute’s prohibition on secret recording because Glik’s cell phone was visible to officers.

Thus, following Glik, the question remained open as to whether individuals have a First Amendment right to secretly audio record police officers, or if instead they could be punished under the Massachusetts statute for doing so. (A few years after Glik, in Gericke v. Begin (2014), the First Circuit held that the plaintiff had a First Amendment right to openly record the police during someone else’s traffic stop to the extent she wasn’t interfering with them.)

The First Circuit in Martin held that recording on-duty police officers, even secretly, is protected newsgathering activity similar to that of professional reporters that “serve[s] the very same interest in promoting public awareness of the conduct of law enforcement—with all the accountability that the provision of such information promotes.” The court further explained that recording “play[s] a critical role in informing the public about how the police are conducting themselves, whether by documenting their heroism, dispelling claims of their misconduct, or facilitating the public’s ability to hold them to account for their wrongdoing.”

The ability to secretly audio record on-duty police officers is especially important given that many officers retaliate against civilians who openly record them, as happened in a recent Tenth Circuit case. The First Circuit agreed with the Martin plaintiffs that secret recording can be a “better tool” to gather information about police officers, because officers are less likely to be disrupted and, more importantly, secret recording may be the only way to ensure that recording “occurs at all.” The court stated that “the undisputed record supports the Martin Plaintiffs’ concern that open recording puts them at risk of physical harm and retaliation.”

Finally, the court was not persuaded that the privacy interests of civilians who speak with or near police officers are burdened by secretly audio recording on-duty police officers. The court reasoned that “an individual’s privacy interests are hardly at their zenith in speaking audibly in a public space within earshot of a police officer.”

Given the critical importance of recordings for police accountability, the First Amendment right to record police officers exercising their official duties has been recognized by a growing number of federal jurisdictions. In addition to the First Circuit, federal appellate courts in the Third, Fifth, Seventh, Ninth, and Eleventh Circuits have directly upheld this right.

Disappointingly, the Tenth Circuit recently dodged the question. For all the reasons in the First Circuit’s Martin decision, the Tenth Circuit erred, and the remaining circuits must recognize the First Amendment right to record on-duty police officers as the law of the land.

Sophia Cope

Maine Should Take this Chance to Defund the Local Intelligence Fusion Center

2 months 1 week ago

Maine state representative Charlotte Warren has introduced LD1278 (HP938), or An Act To End the Maine Information and Analysis Center Program, a bill that would defund the Maine Information and Analysis Center (MIAC), also known as Maine’s only fusion center. EFF is happy to support this bill in hopes of defunding an unnecessary, intrusive, and often-harmful piece of the U.S. surveillance regime. You can read the full text of the bill here.

Fusion centers are yet another unnecessary cog in the surveillance state—and one that serves the intrusive function of coordinating surveillance activities and sharing information between federal law enforcement, the national security surveillance apparatus, and local and state police. Across the United States, there are at least 78 fusion centers that were formed by the Department of Homeland Security in the wake of the war on terror and the rise of post-9/11 mass surveillance. Since their creation, fusion centers have been hammered by politicians, academics, and civil society groups for their ineffectiveness, dysfunction, mission creep, and unregulated tendency to veer into political policing. As scholar Brendan McQuade wrote in his book Pacifying the Homeland: Intelligence Fusion and Mass Supervision,

“On paper, fusion centers have the potential to organize dramatic surveillance powers. In practice however, what happens at fusion centers is circumscribed by the politics of law enforcement. The tremendous resources being invested in counterterrorism and the formation of interagency intelligence centers are complicated by organization complexity and jurisdictional rivalries. The result is not a revolutionary shift in policing but the creation of uneven, conflictive, and often dysfunctional intelligence-sharing systems.”

But in recent months, the dysfunction of fusion centers and the ease with which they sink into policing First Amendment-protected activities have been on full display. After a series of leaks that revealed communications from inside police departments, fusion centers, and law enforcement agencies across the country, MIAC came under particular scrutiny for sharing dubious intelligence generated by far-right wing social media accounts with local law enforcement. Specifically, the Maine fusion center helped perpetuate disinformation that stacks of bricks and stones had been strategically placed throughout a Black Lives Matter protest as part of a larger plan for destruction, and caused police to plan and act accordingly. This was, to put it plainly, a government intelligence agency spreading fake news that could have gotten people hurt while they exercised their First Amendment rights. This is in addition to a whistleblower lawsuit from a state trooper that alleged the fusion center routinely violated civil rights.

The first decade of the twenty-first century was characterized by a blank check to grow and expand the infrastructure that props up mass surveillance. Fusion centers are at the very heart of that excess. They have proven themselves to be unreliable and even harmful to the people the national security apparatus claims to want to protect. Why do states continue to fund intelligence fusion when, at its best, it enacts political policing that poses an existential threat to immigrants, activists, and protestors—and at worst, it actively disseminates false information to police?

We echo the sentiments of Representative Charlotte Warren and other dedicated Maine residents who say it's time to shift MIAC's nearly million-dollar per year budget towards more useful programs. Maine, pass LD1278 and defund the Maine Information and Analysis Center. 

Matthew Guariglia

Ethos Capital Is Grabbing Power Over Domain Names Again, Risking Censorship-For-Profit. Will ICANN Intervene?

2 months 1 week ago

Ethos Capital is at it again. In 2019, this secretive private equity firm that includes insiders from the domain name industry tried to buy the nonprofit that runs the .ORG domain. A huge coalition of nonprofits and users spoke out. Governments expressed alarm, and ICANN (the entity in charge of the internet’s domain name system) scuttled the sale. Now Ethos is buying a controlling stake in Donuts, the largest operator of “new generic top-level domains.” Donuts controls a large swathe of the domain name space. And through a recent acquisition, it also runs the technical operations of the .ORG domain. This acquisition raises the threat of increased censorship-for-profit: suspending or transferring domain names against the wishes of the user at the request of powerful corporations or governments. That’s why we’re asking the ICANN Board to demand changes to Donuts’ registry contracts to protect its users’ speech rights.

Donuts is big. It operates about 240 top-level domains, including .charity, .community, .fund, .healthcare, .news, .republican, and .university. And last year it bought Afilias, another registry company that also runs the technical operations of the .ORG domain. Donuts already has questionable practices when it comes to safeguarding its users’ speech rights. Its contracts with ICANN contain unusual provisions that give Donuts an unreviewable and effectively unlimited right to suspend domain names—causing websites and other internet services to disappear.

Relying on those contracts, Donuts has cozied up to powerful corporate interests at the expense of its users. In 2016, Donuts made an agreement with the Motion Picture Association to suspend domain names of websites that MPA accused of copyright infringement, without any court process or right of appeal. These suspensions happen without transparency: Donuts and MPA haven’t even disclosed the number of domains that have been suspended through their agreement since 2017.

Donuts also gives trademark holders the ability to pay to block the registration of domain names across all of Donuts’ top-level domains. In effect, this lets trademark holders “own” words and prevent others from using them as domain names, even in top-level domains that have nothing to do with the products or services for which a trademark is used. It’s a legal entitlement that isn’t part of any country’s trademark law, and it was considered and rejected by ICANN’s multistakeholder policy-making community.

These practices could accelerate and expand with Ethos Capital at the helm. As we learned last year during the fight for .ORG, Ethos expects to deliver high returns to its investors while preserving its ability to change the rules for domain name registrants, potentially in harmful ways. Ethos refused meaningful dialogue with domain name users, instead proposing an illusion of public oversight and promoting it with a slick public relations campaign. And private equity investors have a sordid record of buying up vital institutions like hospitals, burdening them with debt, and leaving them financially shaky or even insolvent.

Although Ethos’s purchase of Donuts appears to have been approved by regulators, ICANN should still intervene. Like all registry operators, Donuts has contracts with ICANN that allow it to run the registry databases for its domains. ICANN should give this acquisition as much scrutiny as it gave Ethos’s attempt to buy .ORG. And to prevent Ethos and Donuts from selling censorship as a service at the expense of domain name users, ICANN should insist on removing the broad grants of censorship power from Donuts’ registry contracts. ICANN did the right thing last year when confronted with the takeover of .ORG. We hope it does the right thing again by reining in Ethos and Donuts.


Mitch Stoltz

Content Moderation Is A Losing Battle. Infrastructure Companies Should Refuse to Join the Fight

2 months 1 week ago

It seems like every week there’s another Big Tech hearing accompanied by a flurry of mostly bad ideas for reform. Two events set last week’s hubbub apart, both involving Facebook. First, Mark Zuckerberg took a new step in his blatant effort to use 230 reform to entrench Facebook’s dominance. Second, new reports are demonstrating, if further demonstration were needed, how badly Facebook is failing at policing the content on its platform with any consistency whatsoever. The overall message is clear: if content moderation doesn’t work even with the kind of resources Facebook has, then it won’t work anywhere.

Inconsistent Policies Harm Speech in Ways That Are Exacerbated the Further Along the Stack You Go

Facebook has been swearing for many months that it will do a better job of rooting out “dangerous content.” But a new report from the Tech Transparency Project demonstrates that it is failing miserably. Last August, Facebook banned some militant groups and other extremist movements tied to violence in the U.S. Now, Facebook is still helping expand the groups’ reach by automatically creating new pages for them and directing people who “like” certain militia pages to check out others, effectively helping these movements recruit and radicalize new members. 

These groups often share images of guns and violence, misinformation about the pandemic, and racist memes targeting Black Lives Matter activists. QAnon pages also remain live despite Facebook’s claim to have taken them down last fall. Meanwhile, a new leak of Facebook’s internal guidelines shows how much it struggles to come up with consistent rules for users living under repressive governments. For example, the company forbids “dangerous organizations”—including, but not limited to, designated terrorist organizations—but allows users in certain countries to praise mass murderers and “violent non-state actors” (designated militant groups that do not target civilians) unless their posts contain an explicit reference to violence.

A Facebook spokesperson told the Guardian: “We recognise that in conflict zones some violent non-state actors provide key services and negotiate with governments – so we enable praise around those non-violent activities but do not allow praise for violence by these groups.”

The problem is not that Facebook is trying to create space for some speech – they should probably do more of that. But the current approach is just incoherent. Like other platforms, Facebook does not base its guidelines on international human rights frameworks, nor do the guidelines necessarily adhere to local laws and regulations. Instead, they seem to be based upon what Facebook policymakers think is best.

The capricious nature of the guidelines is especially clear with respect to LGBTQ+ content. For example, Facebook has limited use of the rainbow “like” button in certain regions, including the Middle East, ostensibly to keep users there safe. But in reality, this denies members of the LGBTQ+ community there the same range of expression as other users and is hypocritical given the fact that Facebook refuses to bend its "authentic names" policy to protect the same users.

Whatever Facebook’s intent, in practice, it is taking sides in a region that it doesn’t seem to understand. Or as Lebanese researcher Azza El Masri put it on Twitter: “The directive to let pro-violent/terrorist content up in Myanmar, MENA, and other regions while critical content gets routinely taken down shows the extent to which [Facebook] is willing to go to appease our oppressors.”

This is not the only example of a social media company making inconsistent decisions about what expression to allow. Twitter, for instance, bans alcohol advertising from every Arab country, including several (such as Lebanon and Egypt) where the practice is perfectly legal. Microsoft Bing once limited sexual search terms from the entire region, despite not being asked by governments to do so.

Now imagine the same kinds of policies being applied to internet access. Or website hosting. Or cloud storage.

All the Resources in the World Can’t Make Content Moderation Work at Scale

Facebook’s lopsided policies are deserving of critique and point to a larger problem that too much focus on specific policies misses: if Facebook, with the money to hire thousands of moderators, implement filters, and fund an Oversight Board, can’t manage to develop and implement a consistent, coherent, and transparent moderation policy, maybe we should finally admit that we can’t look to social media platforms to solve deep-seated political problems—and we should stop trying.

Even more importantly, we should call a halt to any effort to extend this mess beyond platforms. If two decades of experience with social media have taught us anything, it is that these companies are bad at creating and implementing consistent, coherent policies. At least when a social media company makes an error in judgment, its impact is relatively limited. At the infrastructure level, however, those decisions necessarily hit harder and wider. If an internet service provider (ISP) shut down access for LGBTQ+ individuals on the same capricious whims as Facebook, it would be a disaster.

What Infrastructure Companies Can Learn

The full infrastructure of the internet, or the “full stack,” is made up of a range of companies and intermediaries, from consumer-facing platforms like Facebook or Pinterest to ISPs like Comcast or AT&T. Somewhere in the middle are a wide array of intermediaries, such as upstream hosts like Amazon Web Services (AWS), domain name registrars, certificate authorities (such as Let’s Encrypt), content delivery networks (CDNs), payment processors, and email services.

For most of us, most of the stack is invisible. We send email, tweet, post, upload photos, and read blog posts without thinking about all the services that have to function to get the content from the original creator onto the internet and in front of users’ eyeballs all over the world. We may think about our ISP when it gets slow or breaks, but day-to-day, most of us don’t think about AWS at all. We are more aware of the content moderation decisions (and mistakes) made by the consumer-facing platforms.

We have detailed many times the chilling effect and other problems with opaque, bad, or inconsistent content moderation decisions from companies like Facebook. But when ISPs or intermediaries decide to wade into the content moderation game and start blocking certain users and sites, it’s far worse. For one thing, many of these services have few, if any, competitors. For example, too many people in the United States and overseas only have one choice for an ISP. If the only broadband provider in your area cuts you off because they (or your government) didn’t like what you said online (or what someone else whose name is on the account said), how can you get back online? Further, at the infrastructure level, services usually cannot target their response narrowly. Twitter can shut down individual accounts; when those users migrate to Parler and continue to engage in offensive speech, AWS can only deny service to the entire site, including speech that is entirely unobjectionable. And that is exactly why ISPs and intermediaries need to stay away from this fight entirely. The risks from getting it wrong at the infrastructure level are far too great.

It is easy to understand why repressive governments (and some advocates) want to pressure ISPs and intermediaries in the stack to moderate content: it is a broad, blunt and effective way to silence certain voices. Some intermediaries might also feel compelled to moderate aggressively in the hopes of staving off criticism down the line.  As last week’s hearing showed, this tactic will not work. The only way to avoid the pressure is to stake out an entirely different approach.

To be clear, in the United States, businesses have a constitutional right to decide what content they want to host. That’s why laws punishing intermediaries deeper in the stack for their content moderation decisions would face the same kind of First Amendment problems as any other bill attempting to meddle with speech rights.

But, just because something is legally permissible does not mean it is the right thing to do, especially when implementation will vary depending on who is asking for it, when. Content moderation is empirically impossible to do well at scale; given the impact of the inevitable mistakes, ISPs and infrastructure intermediaries should not try. Instead, they should reject pressure to moderate like platforms, and clarify that they are much more like the local power company. If you wouldn’t want the power company shutting off service to a house just because someone doesn’t like what’s going on inside, you shouldn’t want a domain name registrar freezing a domain name because someone doesn’t like a site, or an ISP shutting down an account. And if you would hold the power company responsible for the behavior you don’t like just because that behavior relied on electricity, you shouldn’t hold an ISP or a domain name registrar or CDN, etc, responsible for behavior or speech that relies on their services either.  

If more than two decades of social media content moderation has taught us anything, it is that we cannot tech our way out of a fundamentally political problem. Social media companies have tried and failed to do so; beyond the platform, companies should refuse to replicate those failures.

Corynne McSherry

The FCC Wants Your Broadband Horror Stories: You Know What to Do

2 months 1 week ago

At long last, the Federal Communications Commission (FCC) is asking for your broadband experiences. By submitting your experiences here, you will let the FCC know whether you have been adequately served by your internet service provider (ISP). The feedback you provide will inform future broadband funding, standards, and federal policy, as well as assessments of where broadband is actually available.

Traditionally, the FCC credulously relied on monopolistic ISPs to self-report coverage and service, which allowed these giant businesses to paint a deceptive, deeply flawed portrait of broadband service where everything was generally just fine. It was not fine. It is not fine. The pandemic demonstrated how millions are left behind or stuck with second-rate service, in a digital age where every aspect of a thriving, prosperous life turns on the quality of your broadband. Just look at the filings from Frontier’s recent bankruptcy and see how mismanagement, misconduct, and poor service are standard industry practice. It's not just Frontier, either: recurring horror stories of ISPs failing to deliver on their basic promise of service, throttling customers’ uploads, or even harassing customers who try to cancel service demonstrate that ISPs don't think of us as customers, but rather as captives of their monopolies.

Last Wednesday, the White House announced a plan to invest $100 billion in building and improving high-speed broadband infrastructure. It's overdue. Last February, Consumer Reports released a survey which found that 75% of Americans say they rely on the internet to carry out their daily activities seven days a week. EFF has long advocated for broadband for all, and today we are part of a mass movement demanding universal and affordable access for all people so that they may be full participants in twenty-first century society.

Trump's FCC, under the chairmanship of former Verizon executive Ajit Pai, threw away citizen comments opposing the 2017 net neutrality repeal. It's taken years to learn what was in those comments.

Now that the FCC is finally seriously asking for citizen comments, this is your chance to let them know just how badly you’ve been treated, and to demand an end to the long, miserable decades when the monopolistic ISPs got away with charging sky-high rates for some of the worst service among advanced broadband markets. We all deserve better.

Submit Your Comments

Chao Liu

Tenth Circuit Misses Opportunity to Affirm the First Amendment Right to Record the Police

2 months 1 week ago

We are disappointed that the U.S. Court of Appeals for the Tenth Circuit this week dodged a critical constitutional question: whether individuals have a First Amendment right to record on-duty police officers.

EFF had filed an amicus brief in the case, Frasier v. Evans, asking the court to affirm the existence of the right to record the police in the states under the court’s jurisdiction (Colorado, Oklahoma, Kansas, New Mexico, Wyoming, and Utah, and those portions of the Yellowstone National Park extending into Montana and Idaho).

Frasier had used his tablet to record Denver police officers engaging in what he believed to be excessive force: the officers repeatedly punched a suspect in the face to get drugs out of his mouth as his head bounced off the pavement, and they tripped his pregnant girlfriend. Frasier filed a First Amendment retaliation claim against the officers for detaining and questioning him, searching his tablet, and attempting to delete the video.

Qualified Immunity Strikes Again

In addition to refusing to affirmatively recognize the First Amendment right to record the police, the Tenth Circuit held that even if such a right did exist today, the police officers who sought to intimidate Frasier could not be held liable for violating his constitutional right because they had “qualified immunity”—that is, because the right to record the police wasn’t clearly established in the Tenth Circuit at the time of the incident in August 2014.

The court held not only that the right had not been objectively established in federal case law, but also that it was irrelevant that the officers subjectively knew the right existed based on trainings they received from their own police department. Qualified immunity is a pernicious legal doctrine that often allows culpable government actors to avoid accountability for violations of constitutional rights.

Thus, the police officers who clearly retaliated against Frasier are off the hook, even though “the Denver Police Department had been training its officers since February 2007” that individuals have a First Amendment right to record them, and that “each of the officers in this case had testified unequivocally that, as of August 2014, they were aware that members of the public had the right to record them.”

Recordings of Police Officers Are Critical for Accountability

As we wrote last year in our guide to recording police officers, “[r]ecordings of police officers, whether by witnesses to an incident with officers, individuals who are themselves interacting with officers, or by members of the press, are an invaluable tool in the fight for police accountability. Often, it’s the video alone that leads to disciplinary action, firing, or prosecution of an officer.”

This is particularly true of the murder of George Floyd by former Minneapolis police officer Derek Chauvin. Chauvin’s criminal trial began this week, and that Chauvin is being prosecuted at all is in large part due to the brave bystanders who recorded the scene.

Notwithstanding the critical importance of recordings for police accountability, the First Amendment right to record police officers exercising their official duties has not been recognized by all federal jurisdictions. Federal appellate courts in the First, Third, Fifth, Seventh, Ninth, and Eleventh Circuits have directly upheld this right.

We had hoped that the Tenth Circuit would join this list. Instead, the court stated, “because we ultimately determine that any First Amendment right that Mr. Frasier had to record the officers was not clearly established at the time he did so, we see no reason to risk the possibility of glibly announcing new constitutional rights … that will have no effect whatsoever on the case.”

This statement by the court is surprisingly dismissive given the important role courts play in upholding constitutional rights. Even with the court’s holding that the police officers had qualified immunity against Frasier’s First Amendment claim, if the court declared that the right to record the police, in fact, exists within the Tenth Circuit, this would unequivocally help to protect the millions of Americans who live within the court’s jurisdiction from police misconduct.

But the Tenth Circuit refused to do so, leaving this critical question to another case and another appellate panel.  

All is Not Lost in Colorado

Although the Tenth Circuit refused to recognize that the right to record the police exists as a matter of constitutional law throughout its jurisdiction, it is comforting that the Colorado Legislature passed two statutes in the wake of the Frasier case.

The first law created a statutory right for civilians to record police officers (Colo. Rev. Stat. § 16-3-311). The second created a civil cause of action against police officers who interfere with an individual’s lawful attempt to record an incident involving a police officer, or who destroy, damage, or seize a recording or recording device (Colo. Rev. Stat. § 13-21-128).

Additionally, the Denver Police Department revised its operations manual to prohibit punching a suspect to get drugs out of his mouth (Sec. 116.06(3)(b)), and to explicitly state that civilians have a right to record the police and that officers may not infringe on this right (Sec. 107.04(3)).

Sophia Cope

EFF to Court: Don’t Let Pseudo-IP Thwart Speech, Innovation, and Competition

2 months 1 week ago

The threats to online expression and innovation keep coming. One that’s flown under the radar is a misguided effort to convince the Third Circuit Court of Appeals to allow claims based on the “right of publicity” (i.e., the right to control the commercial exploitation of your persona) because some people think of this right as a form of “intellectual property.” State law claims are normally barred under Section 230, a law that has enabled decades of innovation and online expression. But Section 230 doesn’t apply to “intellectual property” claims, so if publicity rights are intellectual property (“IP”), the theory goes, intermediaries can be sued for any user content that might evoke a person. That interpretation of Section 230 would effectively eviscerate its protections altogether.

Good news: it’s wrong.

Bad news: the court might not see that, which is why EFF, along with group of small tech companies and advocates, filed an amicus brief to help explain the law and the stakes of the case for the internet.

The facts here are pretty ugly. The plaintiff, Hepp, is a reporter who discovered that an image of her caught on a surveillance camera was being used in ads and shared on social media without her permission. She’s suing Facebook, Reddit, Imgur, and a porn site for violating her publicity rights. The district court dismissed the case on Section 230 grounds, following strong precedent from the Ninth Circuit holding that the IP carveout doesn’t include state law publicity claims. Hepp appealed.

As we explain in our brief, the court should start by looking at Section 230 itself. Generally, if the wording of a law makes sense to a general reader, a court will keep things simple and assume the straightforward meaning. But if the words are unclear or have multiple meanings, the court has to dig deeper. In this case, the term at issue, “intellectual property,” varies widely depending on context. The term didn’t even come into common use until the latter half of the 20th Century, but it’s now used loosely to refer to everything from trade secrets to unfair competition.

Given that ambiguity, the court should look beyond the text of the law and consider Congress’ intent. Within the context of Section 230, construing the term to include publicity rights is simply nonsensical.

Congress passed Section 230 so that new sites and services could grow and thrive without fear that a failure to perfectly manage content that might run afoul of 50 different state laws might lead to crippling liability. Thanks to Section 230, we have an internet that enables new forms of collaboration and cultural production; allows ordinary people to stay informed, organize and build communities in new and unexpected ways; and, especially in a pandemic, helps millions learn, work and serve others. And new platforms and services emerge every day because they can afford to direct their budgets toward innovation, rather than legal fees.

Excluding publicity rights claims from the immunity afforded by Section 230 would put all of that in jeopardy. In California, publicity rights protections apply to virtually anything that evokes a person, and endure for 70 years after the death of that person. In Virginia, a publicity rights violation can result in criminal penalties. Alaska doesn’t recognize a right of publicity at all. Faced with a panoply of standards, email providers, social media platforms, and any site that supports user-generated content will be forced to tailor their sites and procedures to ensure compliance with the most expansive state law, or risk liability and potentially devastating litigation costs.

For all their many flaws, copyright and patent laws are relatively clear, relatively knowable, and embody a longstanding balance between rightsholders, future creators and inventors, and the public at large. Publicity rights are none of these things. Instead, whatever we call them, they look a lot more like other torts, like privacy violations, that are included within Section 230’s traditional scope.

Ms. Hepp has a good reason to be angry, and we didn’t file our amicus because we are concerned about the effects of an adverse ruling on Facebook in particular, which can doubtless afford any liability it might incur. The problem is everyone else: the smaller entities that cannot afford that risk, or even the costs of defending a lawsuit; and the users who rely on intermediaries to communicate with family, friends and the world, and who will be unable to share content that might include an image, likeness or phrase associated with a person should those intermediaries be saddled with defending against state publicity claims based on their users’ speech.

What is worse, it will help entrench the current dominant players. Section 230 led to the emergence of all kinds of new products and forums but, equally importantly, it has also kept the door open for competitors to follow. Today, social media is dominated by Twitter, Facebook, and YouTube, but dissatisfied users can turn to Discord, Parler, Clubhouse, TikTok, and Rumble. Dissatisfied Gmail users can turn to Proton, Yahoo!, Riseup, and many others. None of these entities, entrenched or emergent, would exist without Section 230.

Hepp’s theory raises that barrier to entry back up, particularly given that intermediaries would face potential liability not only for images and video, but mere text as well. To mitigate that liability risk, any company that relies on ads will be forced to try to screen potentially unlawful content, rewrite their terms of service, and/or require consent forms for any use of anything that might evoke a persona. But even strict terms of service, consent forms, and content filters would not suffice: many services will be swamped by meritless claims or shaken down for nuisance settlements. Tech giants like Facebook and Google might survive this flood of litigation, but nonprofit platforms and startups – like the next competitors to Facebook and Google – would not. And investors who would rather support innovation than lawyers, filtering technologies, and content moderators, will choose not to fund emerging alternative services at all.

Corynne McSherry