Ring Changed How Police Request Door Camera Footage: What It Means and Doesn’t Mean

3 months 2 weeks ago

Amazon Ring has announced that it will change the way police can request footage from millions of doorbell cameras in communities across the country. Under the current system, police can send automatic bulk email requests to individual Ring users in an area of interest of up to half a square mile. Now, police will instead have to publicly post their requests to Ring’s accompanying Neighbors app. Users of that app will see a “Request for Assistance” in their feed, unless they opt out of seeing such requests, and Ring customers in the area of interest (still up to half a square mile) can respond by reviewing and providing their footage.

Because only a portion of Ring users are also Neighbors users, and some of them may opt out of receiving police requests, this new system may reduce the number of people who receive police requests, though we wonder whether Ring will now push more of its users to register for the app.

This new model also may increase transparency over how police officers use and abuse the Ring system, especially against people of color, immigrants, and protesters. Previously, in order to learn about police requests to Ring users, investigative reporters and civil liberties groups had to file public records requests with police departments—a process that consumed significant time and often yielded little information from recalcitrant agencies. Through this labor-intensive process, EFF revealed that the Los Angeles Police Department targeted Black Lives Matter protests in May and June 2020 with bulk Ring requests for doorbell camera footage that likely included First Amendment protected activities. Now, users will be able to see every digital request a police department has made to residents for Ring footage by scrolling through the department’s public page on the app.

But making it easier to monitor historical requests can only do so much. It certainly does not address the larger problem with Ring and Neighbors: the network is predicated on perpetuating irrational fear of neighborhood crime, often directing disproportionate scrutiny at people of color, all for the purpose of selling more cameras. Ring does so through police partnerships, which now encompass 1 in every 10 police departments in the United States. At their core, these partnerships facilitate bulk requests from police officers to Ring customers for their camera footage, built on a growing Ring surveillance network of millions of public-facing cameras. EFF adamantly opposes these Ring-police partnerships and advocates for their dissolution.

Nor does new transparency about bulk officer-to-resident requests through Ring erase the long history of secrecy surrounding these shady partnerships. For example, Amazon has provided free Ring cameras to police and has limited what police were allowed to say about Ring, including even the existence of the partnership.

Notably, Amazon has moved Ring functionality to its Neighbors app. Neighbors is a problematic technology. Like its peers Nextdoor and Citizen, it encourages its users to report supposedly suspicious people—often resulting in racially biased posts that endanger innocent residents and passersby.

Ring’s small reforms invite bigger questions: Why does a customer-focused technology company need to develop and maintain a feature for law enforcement in the first place? Why must Ring and other technology companies continue to offer police free features to facilitate surveillance and the transfer of information from users to the government?

Here’s some free advice for Ring: Want to make your product less harmful to vulnerable populations? Stop facilitating their surveillance and harassment at the hands of police. 

Matthew Guariglia

Maryland and Montana Pass the Nation’s First Laws Restricting Law Enforcement Access to Genetic Genealogy Databases

Last week, Maryland and Montana passed laws requiring judicial authorization to search consumer DNA databases in criminal investigations. These are welcome and important restrictions on forensic genetic genealogy searching (FGGS)—a law enforcement technique that has become increasingly common and impacts the genetic privacy of millions of Americans.

Consumer personal genetics companies like Ancestry, 23andMe, GEDMatch, and FamilyTreeDNA host the DNA data of millions of Americans. The data users share with consumer DNA databases is extensive and revealing. The genetic profiles stored in those databases are made up of more than half a million single nucleotide polymorphisms (“SNPs”) that span the entirety of the human genome. These profiles not only can reveal family members and distant ancestors, they can divulge a person’s propensity for various diseases like breast cancer or Alzheimer’s and can even predict addiction and drug response. Some researchers have even claimed that human behaviors such as aggression, or ideological beliefs such as politics, can be explained, at least in part, by genetics. And private companies have claimed they can use our DNA for everything from identifying our eye, hair, and skin colors and the shapes of our faces; to determining whether we are lactose intolerant, prefer sweet or salty foods, and can sleep deeply. Companies will even create images of what they think a person looks like based just on their genetic data. Claims like these, which are often presented as fact, are dangerous because they can be seized on by law enforcement to target marginalized communities and can lead to people being misidentified for crimes they didn't commit.

Through FGGS, law enforcement regularly accesses this intensely private and sensitive data. Just like consumers, officers take advantage of the genetics companies’ powerful algorithms to try to identify familial relationships between an unknown forensic sample and existing site users. These familial relationships can then lead law enforcement to possible suspects. However, in using FGGS, officers are rifling through the genetic data of millions of Americans who are not suspects in the investigation and have no connection to the crime whatsoever. This is not how criminal investigations are supposed to work. As we have argued before, the language of the Fourth Amendment, which requires probable cause for every search and particularity for every warrant, precludes dragnet warrantless searches like these. A technique’s usefulness for law enforcement does not outweigh people’s privacy interests in their genetic data.
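To make concrete why a single familial search touches every user, here is a purely illustrative toy sketch (not any company’s actual proprietary algorithm): a relative-matching tool compares an unknown profile’s SNP genotypes against each user in the database and ranks all of them by similarity.

```python
def shared_fraction(profile_a: dict[str, str], profile_b: dict[str, str]) -> float:
    """Fraction of SNP positions (e.g. {"rs4988235": "AG"}) at which two
    genotype profiles carry identical alleles. Real genealogy tools use far
    more sophisticated identity-by-descent segment matching, but the idea
    is similar: score a candidate profile against the unknown sample."""
    common = profile_a.keys() & profile_b.keys()
    if not common:
        return 0.0
    return sum(profile_a[s] == profile_b[s] for s in common) / len(common)

def rank_candidates(unknown: dict[str, str],
                    database: dict[str, dict[str, str]]) -> list[tuple[str, float]]:
    """Rank every user in the database by similarity to the unknown sample.
    Note that producing the ranked list requires scoring every profile,
    which is exactly the dragnet quality of these searches."""
    scores = [(user, shared_fraction(unknown, prof)) for user, prof in database.items()]
    return sorted(scores, key=lambda pair: pair[1], reverse=True)
```

Even in this toy version, no one in the database is excluded from the comparison: every user’s genetic data is searched, whether or not they have any connection to the crime.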

Up until now, nothing has prevented law enforcement from rifling through the genetic data of millions of unsuspecting and innocent Americans. The new laws in Maryland and Montana should change that.

Here’s What the New Laws Require

Maryland

Maryland’s law is very broad and covers much more than FGGS. It requires judicial authorization for FGGS and places strict limits on when and under what conditions law enforcement officers may conduct FGGS. For example, FGGS may only be used in cases of rape, murder, felony sexual offenses, and criminal acts that present “a substantial and ongoing threat to public safety or national security.” Before officers can pursue FGGS, they must certify to the court that they have already tried searching existing, state-run criminal DNA databases like CODIS, that they have pursued other reasonable investigative leads, and that those searches have failed to identify anyone. And FGGS may only be used with consumer databases that have provided explicit notice to users about law enforcement searches and sought consent from those users. These meaningful restrictions ensure that FGGS does not become the default first search conducted by law enforcement and limits its use to crimes that society has already determined are the most serious.

The Maryland law regulates other important aspects of genetic investigations as well. For example, it places strict limits on and requires judicial oversight for the covert collection of DNA samples from both potential suspects and their genetic relatives, something we have challenged several times in the courts. This is a necessary protection because officers frequently and secretly collect and search DNA from free people in criminal investigations involving FGGS. We cannot avoid shedding carbon copies of our DNA, and we leave it behind on items in our trash, an envelope we lick to seal, or even the chairs we sit on, making it easy for law enforcement to collect our DNA without our knowledge. We have argued that the Fourth Amendment precludes covert collection, but until courts have a chance to address this issue, statutory protections are an important way to reinforce our constitutional rights.

The new Maryland law also mandates informed consent in writing before officers can collect DNA samples from third parties and precludes covert collection from someone who has refused to provide a sample. It requires destruction of DNA samples and data when an investigation ends. It also requires licensing for labs that conduct DNA sequencing used for FGGS and for individuals who perform genetic genealogy. It creates criminal penalties for violating the statute and a private right of action with liquidated damages so that people can enforce the law through the courts. It requires the governor’s office to report annually and publicly on law enforcement use of FGGS and covert collection. Finally, it states explicitly that criminal defendants may use the technique as well to support their defense (but places similar restrictions on use). All of these requirements will help to rein in the unregulated use of FGGS.

Montana

In contrast to Maryland’s 16-page comprehensive statute, Montana’s is only two pages and less clearly drafted. However, it still offers important protections for people identified through FGGS.

Montana’s statute requires a warrant before government entities can use familial DNA or partial match search techniques on either consumer DNA databases or the state’s criminal DNA identification index.1 The statute defines a “familial DNA search” broadly as a search that uses “specialized software to detect and statistically rank a list of potential candidates in the DNA database who may be a close biological relative to the unknown individual contributing the evidence DNA profile.” This is exactly what the software used by consumer genetic genealogy sites like GEDmatch and FamilyTreeDNA does. The statute also applies to companies like Ancestry and 23andMe that do their own genotyping in-house, because it covers “lineage testing,” which it defines as “[SNP] genotyping to generate results related to a person's ancestry and genetic predisposition to health-related topics.”

The statute also requires a warrant for other kinds of searches of consumer DNA databases, such as when law enforcement is looking for a direct user of the consumer DNA database. Unfortunately, though, the statute includes a carve-out to this warrant requirement if “the consumer whose information is sought previously waived the consumer’s right to privacy,” but it does not explain how an individual consumer may waive their privacy rights. There is no carve-out for familial searches.

By creating stronger protections for people who are identified through familial searches but who haven’t uploaded their own data, Montana’s statute recognizes an important point that we and others have been making for a few years—you cannot waive your privacy rights in your genetic information when someone else has control over whether your shared DNA ends up in a consumer database.

It is unfortunate, though, that this seems to come at the expense of existing users of consumer genetics services. Montana should have extended warrant protections to everyone whose DNA data ends up in a consumer DNA database. A bright-line rule would have been better for privacy, and perhaps easier for law enforcement to implement, since it is unclear how law enforcement will determine whether someone waived their privacy rights in advance of a search.

We Need More Legal Restrictions on FGGS

We need more states—and the federal government—to pass restrictions on genetic genealogy searches. Some companies, like Ancestry and 23andMe, prevent direct access to their databases and have fought law enforcement demands for data. However, other companies like GEDmatch and FamilyTreeDNA have allowed and even encouraged law enforcement searches. Because of this, law enforcement officers are increasingly accessing these databases in criminal investigations across the country. By 2018, FGGS had already been used in at least 200 cases. Officers never sought a warrant or any legal process at all in any of those cases because there were no state or federal laws explicitly requiring them to do so.

While EFF has argued that FGG searches are dragnets and should never be allowed, even with a warrant, Montana and Maryland’s laws are still a step in the right direction, especially where, as in Maryland, an outright ban previously failed. Our genetic data is too sensitive and important to leave it up to the whims of private companies to protect it or to the unbridled discretion of law enforcement to search it.

1. The restriction on warrantless familial and partial match searching of government-run criminal DNA databases is particularly welcome. Most states do not explicitly limit these searches (Maryland is an exception and explicitly bans this practice), even though many, including a federal government working group, have questioned their efficacy.

Jennifer Lynch

Security Tips for Online LGBTQ+ Dating

Dating is risky. Aside from the typical worries of possible rejection or lack of romantic chemistry, LGBTQIA+ people often have added safety considerations to keep in mind. Sometimes staying in the proverbial closet is a matter of personal security. Even if someone is open with their community about being LGBTQ+, they can be harmed by oppressive governments, bigoted law enforcement, and individuals with hateful beliefs. So here’s some advice for staying safe while online dating as an LGBTQIA+ person:

Step One: Threat Modeling

The first step is making a personal digital security plan. You should start with looking at your own security situation from a high level. This is often called threat modeling and risk assessment. Simply put, this is taking inventory of the things you want to protect and what adversaries or risks you might be facing. In the context of online dating, your protected assets might include details about your sexuality, gender identity, contacts of friends and family, HIV status, political affiliation, etc. 

Let's say that you want to join a dating app, chat over the app, exchange pictures, meet someone safely, and avoid stalking and harassment. Threat modeling is how you assess what you want to protect and from whom. 

We touch in this post on a few considerations for people in countries where homosexuality is criminalized, which may include targeted harassment by law enforcement. But this guide is by no means comprehensive. Refer to materials by LGBTQ+ organizations in those countries for specific tips on your threat model.

Securely Setting Up Dating Profiles

When making a new dating account, make sure to use a unique email address to register. Often you will need to verify the registration process through that email account, so it’s likely you’ll need to provide a legitimate address. Consider creating an email address strictly for that dating app. Oftentimes there are ways to discover if an email address is associated with an account on a given platform, so using a unique one can prevent others from potentially knowing you’re on that app. Alternatively, you might use a disposable temporary email address service. But if you do so, keep in mind that you won’t be able to access it in the future, such as if you need to recover a locked account. 

The same logic applies to using phone numbers when registering for a new dating account. Consider using a temporary or disposable phone number. While this can be more difficult than using your regular phone number, there are plenty of free and paid virtual telephone services available that offer secondary phone numbers. For example, Google Voice is a service that offers a secondary phone number attached to your normal one, registered through a Google account. If your higher security priority is to abstain from giving data to a privacy-invasive company like Google, a “burner” pay-as-you-go phone service like Mint Mobile is worth checking out. 

When choosing profile photos, be mindful of images that might accidentally give away your location or identity. Even the smallest clues in an image can expose its location. Some people use pictures with relatively empty backgrounds, or taken in places they don’t go to regularly.

Make sure to check out the privacy and security sections in your profile settings menu. You can usually configure how others can find you, whether you’re visible to others, whether location services are on (that is, whether an app is able to track your location through your phone), and more. Turn off anything that gives away your location or other information, and later you can selectively decide which features to reactivate, if any. More mobile phone privacy information can be found in this Surveillance Self-Defense guide.

Communicating via Phone, Email, or In-App Messaging

Generally speaking, using an end-to-end encrypted messaging service is the best way to go for secure texting. With some options, like Signal or WhatsApp, you may be able to use a secondary phone number to keep your “real” phone number private.

For phone calls, you may want to use a virtual phone service that allows you to screen calls, use secondary phone numbers, block numbers, and more. These aren’t always free, but research can bring up “freemium” versions that give you free access to limited features.

Be wary of messaging features within apps that offer deletion options or disappearing messages, like Snapchat. Many images and messages sent through these apps are never truly deleted, and may still exist on the company’s servers. And even if you send someone a message that self-deletes or notifies you if they take a screenshot, that person can still take a picture of it with another device, bypassing any notifications. Also, Snapchat has a map feature that shows live public posts around the world as they go up. With diligence, someone could determine your location by tracing any public posts you make through this feature.

Sharing Photos

If the person you’re chatting with has earned a bit of your trust and you want to share pictures with them, consider not just what they can see about you in the image itself, as described above, but also what they can learn about you by examining data embedded in the file.

EXIF metadata lives inside an image file and records the location where the photo was taken, the device it was made with, the date, and more. Although some apps have gotten better at automatically withholding EXIF data from uploaded images, you should still manually remove it from any images you share with others, especially if you send them directly over phone messaging.

One quick way is to send the image to yourself on Signal messenger, which automatically strips EXIF data. Search for your own name in your contacts and a “Note to Self” option will appear, giving you a chat screen for sending things to yourself:

[Screenshot: Signal’s “Note to Self” feature]

Before sharing your photo, you can verify that the removal worked by using a tool to view the EXIF data on the image file, both before and after stripping it.
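If you’d rather script this step yourself, EXIF data in a JPEG lives in “APP1” segments near the start of the file, so it can be removed by copying every segment except those. Here is a minimal sketch using only the Python standard library (an assumption of this example: a well-formed JPEG; other formats such as PNG or HEIC store metadata differently and need different handling):

```python
import struct

def strip_exif_jpeg(data: bytes) -> bytes:
    """Return the JPEG bytes with any EXIF (APP1) segments removed."""
    assert data[:2] == b"\xff\xd8", "not a JPEG file"
    out = bytearray(b"\xff\xd8")  # keep the Start-of-Image marker
    i = 2
    while i < len(data) - 1:
        marker = data[i + 1]
        if data[i] != 0xFF or marker == 0xDA:
            # Start-of-Scan (or raw image data): copy the rest verbatim
            out += data[i:]
            break
        # Each header segment is: 0xFF, marker byte, 2-byte big-endian length
        (length,) = struct.unpack(">H", data[i + 2:i + 4])
        segment = data[i:i + 2 + length]
        # Drop APP1 segments whose payload starts with the EXIF signature
        if not (marker == 0xE1 and segment[4:10] == b"Exif\x00\x00"):
            out += segment
        i += 2 + length
    return bytes(out)
```

Even if you use a script like this, verify the result with an EXIF viewer before sharing, as suggested above.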

For some people, it might be valuable to use a watermarking app to add your username or some kind of signature to images. This can verify who you are to others and prevent anyone from using your images to impersonate you. There are many free and mostly-free options in iPhone and Android app stores. Consider a lightweight version that allows you to easily place text on an image and lets you screenshot the result. Keep in mind that watermarking a picture is a quick way to identify yourself, which in itself is a trade-off.

[Image: example watermark overlaid on an LGBTQ+ pride flag]

Sexting Safely

Much of what we’ve already gone over will step up your security when it comes to sexting, but here are some extra precautions:

Seek clearly communicated consent between you and romantic partners about how intimate pictures can be shared or saved. This is great non-technical security at work. If anyone else is in an image you want to share, make sure you have their consent as well. Also, be thoughtful as to whether or not to include your face in any images you share.

As we mentioned above, your location can also be revealed by public posts made through features like Snapchat’s map.

For video chatting with a partner, consider a service like Jitsi, which allows temporary rooms, requires no registration, and is designed with privacy in mind. Many other services, by contrast, are not built with privacy in mind and require account registration.

Meeting Someone AFK

Say you’ve taken all the above precautions, someone online has gained your trust, and you want to meet them away-from-keyboard and in real life. Always meet first somewhere public and busy with other people. Even better, meet in an area more likely to be accepting of LGBTQIA+ people. Tell a friend beforehand all the details about where you’re going, who you are meeting, and a time when you promise to check back in to let them know you’re OK.

If you’re living in one of the 69 countries where homosexuality is criminalized, make sure to check in with local advocacy groups about your area. Knowing your rights will help keep you safe if you’re stopped by law enforcement.

Privacy and Security Are a Group Effort

Although the world is often hostile to non-normative expressions of love and identity, your personal security, online and off, is much better supported when you include the help of others that you trust. Keeping each other safe, accountable, and cared for gets easier when you have more people involved. A network is always stronger when every node on it is fortified against potential threats. 

Happy Pride Month—keep each other safe.

Daly Barnett

Facebook's Policy Shift on Politicians Is a Welcome Step

We are happy to see the news that Facebook is putting an end to a policy that has long privileged the speech of politicians over that of ordinary users. The policy change, which was reported on Friday by The Verge, is something that EFF has been pushing for since as early as 2019.

Back then, Facebook executive Nick Clegg, a former politician himself, famously pondered: "Would it be acceptable to society at large to have a private company in effect become a self-appointed referee for everything that politicians say? I don’t believe it would be." 

Perhaps Clegg had a point—we’ve long said that companies are ineffective arbiters of what the world says—but that hardly justifies holding politicians to a lower standard than the average person. International standards will consider the speaker, but only as one of many factors. For example, the United Nations’ Rabat Plan of Action outlines a six-part threshold test that takes into account “(1) the social and political context, (2) status of the speaker, (3) intent to incite the audience against a target group, (4) content and form of the speech, (5) extent of its dissemination and (6) likelihood of harm, including imminence.” Facebook’s Oversight Board recently endorsed the Plan as a framework for assessing the removal of posts that may incite hostility or violence.

Facebook has deviated very far from the Rabat standard thanks, in part, to the policy it is finally repudiating. For example, it has banned elected officials from parties disfavored by the U.S. government, such as Hezbollah, Hamas, and the Kurdistan Workers Party (PKK), all of which appear on the government's list of designated terrorist organizations—despite not being legally obligated to do so. And in 2018, the company deleted the account of Chechen leader Ramzan Kadyrov, claiming that it was legally obligated to do so after the leader was placed on a sanctions list. Legal experts familiar with the law of international sanctions have disagreed, on the grounds that the sanctions are economic in nature and do not apply to speech.

So this decision is a good step in the right direction. But Facebook has many steps to go, including finally—and publicly—endorsing and implementing the Santa Clara Principles.

But ultimately, the real problem is that Facebook’s policy choices have so much power in the first place. It’s worth noting that this move coincides with a massive effort to persuade the U.S. Congress to impose new regulations that are likely to entrench Facebook’s power over free expression in the U.S. and around the world. If users, activists, and, yes, politicians want real progress in defending free expression, we must fight for a world where changes in Facebook’s community standards don’t merit headlines at all—because they just don’t matter that much.


Corynne McSherry

Where to See the EFF Staff at RightsCon 2021

Our friends at Access Now are once again hosting RightsCon online next week, June 7-11. This summit provides an opportunity for human rights experts, technologists, government representatives, and activists to discuss pressing human rights challenges and their potential solutions. This year we will have several EFF staff in attendance and leading sessions throughout this five-day conference.

We hope you have an opportunity to connect with us at the following: 

Monday, June 7th

7:45-8:45 am PDT – Taking stock of the Facebook Oversight Board’s first year
Director of International Freedom of Expression, Jillian York

This panel will take stock of the work of the Oversight Board since the announcement of its first members in May 2020. Panelists will critically reflect on the development of the Board over its first year and consider its future evolution. 

9:00-10:00 am PDT – Upload filters for copyright enforcement take over the world: connecting struggles in EU, US, and Latin America
Associate Director of Policy and Activism, Katharine Trendacosta

The EU copyright directive risks making upload filters mandatory for large and small platforms to prevent copyright infringements by their users. Its adoption has led to large street protests over the risk that legal expression will be curtailed in the process. Although the new EU rules will only be applied from the summer, similar proposals have already emerged in the US and Latin America, citing the EU copyright directive as a role model.

11:30 am-12:30 pm PDT – What's past is prologue: safeguarding rights in international efforts to fight cybercrime
Note: This strategy session is limited to 25 participants.
Policy Director for Global Privacy, Katitza Rodriguez

Some efforts by governments to fight cybercrime can fail to respect international human rights law and standards, undermining people’s fundamental rights. At the UN, negotiations on a new cybercrime treaty are set to begin later this year. This would be the first global treaty on cybercrime and has the potential to significantly shape government cooperation on cybercrime and respect for rights.

Wednesday, June 9th

8:30-9:30 am PDT – Contemplating content moderation in Africa: disinformation and hate speech in focus
Director of International Freedom of Expression, Jillian York.
Note: This community lab is limited to 60 participants.

While social media platforms have been applauded for being a place of self-expression, they are often inattentive to local contexts in many ethnically and linguistically diverse African countries. This panel brings together academics and civil society members to highlight and discuss the obstacles facing Africa when it comes to content moderation.

Thursday, June 10th

5:30-6:30 am PDT – Designing for language accessibility: making usable technologies for non-left-to-right languages
Designer and Education Manager, Shirin Mori.
Note: This community lab is limited to 60 participants.

The internet was built from the ground up for ASCII characters, leaving billions of speakers of languages that do not use the Latin alphabet underserved. We’d like to introduce a number of distinct problems that users of non-left-to-right (non-LTR) languages face in existing security tools and workflows.

8:00-9:00 am PDT – “But we're on the same side!”: how tools to fight extremism can harm counterspeech
Director of International Freedom of Expression, Jillian York.

Bad actors coordinate across platforms to spread content that is linked to offline violence but not deemed terrorist or violent extremist content (TVEC) by platforms; centralized responses have proved prone to error and too easy to propagate errors across the Internet. Can actual dangerous speech be addressed without encouraging such dangerous centralization?

12:30-1:30 pm PDT – As AR/VR becomes a reality, it needs a human rights framework
Policy Director for Global Privacy, Katitza Rodriguez; Grassroots Advocacy Organizer, Rory Mir; Deputy Executive Director and General Counsel, Kurt Opsahl.
Note: This community lab is limited to 60 participants.

Virtual reality and augmented reality technologies (VR/AR) are rapidly becoming more accessible to a wider audience. This technology holds the promise to entertain and educate, to connect and enhance our lives, and even to help advocate for our rights. But it also raises the risk of eroding those rights online.

Friday, June 11th

9:15-10:15 am PDT – Must-carry? The pros and cons of requiring online services to carry all user speech: litigation strategies and outcomes
Civil Liberties Director, David Greene.
Note: This community lab is limited to 60 participants.

The legal issue of whether online services must carry user speech is a complicated one, with free speech values on both sides, and different results arising from different national legal systems. Digital rights litigators from around the world will meet to map out the legal issue across various national legal systems and discuss ongoing efforts as well as how the issue may be decided doctrinally under our various legal regimes.

In addition to these events, EFF staff will be attending many other sessions at RightsCon, and we look forward to meeting you there. You can view the full programming, as well as many other useful resources, on the RightsCon site.

Rory Mir

The Safe Connections Act Would Make It Easier to Escape Domestic Violence

We all know that, in the 21st century, it is difficult to lead a life without a cell phone. It is also difficult to change your number—the one that all your friends, family, doctors, children’s schools, and so on have for you. It’s especially difficult to do these things if you are trying to leave an abusive situation where your abuser controls your family plan and therefore has access to your phone records. Thankfully, Congress is considering a bill that would change that.

Take Action

Help Survivors Escape Domestic Abuse

In August 2020, EFF joined with the Clinic to End Tech Abuse and other groups dedicated to protecting survivors of domestic violence to send a letter to Congress, calling on lawmakers to pass a federal law creating a right for survivors to leave a family mobile phone plan they share with their abuser.

This January, Senators Brian Schatz, Deb Fischer, Richard Blumenthal, Rick Scott, and Jacky Rosen responded to the letter by introducing the Safe Connections Act (S. 120), which would make it easier for survivors to separate their phone line from a family plan while keeping their own phone number. It would also require the FCC to create rules to protect the privacy of the people seeking this protection. EFF supports this bill.

The bill received bipartisan support and passed unanimously out of the U.S. Senate Committee on Commerce, Science, & Transportation on April 28, 2021. While there is still a long way to go, EFF is pleased to see the bill clear this first critical step. There is little reason that telecommunications carriers, which are already required to make numbers portable when users change carriers, cannot offer an equally seamless process when a paying customer wants to move an account within the same carrier.

Our cell phones contain a vast amount of information about our lives, including the calls and texts we make and receive. The account holder of a family plan has access to all of that data, including if someone in the plan is calling a domestic violence hotline. Giving survivors more tools to protect their privacy, leave abusive situations, and get on with their lives are worthy endeavors. The Safe Connections Act provides a framework to serve these ends.

We would prefer a bill that did not require survivors to provide paperwork to “prove” their abuse—for many survivors, obtaining documentation of their abuse from a third party is burdensome and traumatic, especially at the very moment when they are trying to free themselves from their abusers. The bill also requires the FCC to create new regulations to protect the privacy of people seeking help to leave abusive situations, though these still need stronger safeguards and remedies to be effective. EFF will continue to advocate for these improvements as the legislation moves forward.

India McKinney

Van Buren is a Victory Against Overbroad Interpretations of the CFAA, and Protects Security Researchers

3 months 2 weeks ago

The Supreme Court’s Van Buren decision today overturned a dangerous precedent and clarified the notoriously ambiguous meaning of “exceeding authorized access” in the Computer Fraud and Abuse Act, the federal computer crime law that’s been misused to prosecute beneficial and important online activity. 

The decision is a victory for all Internet users, as it affirmed that online services cannot use the CFAA’s criminal provisions to enforce limitations on how or why you use their service, including for purposes such as collecting evidence of discrimination or identifying security vulnerabilities. It also rejected the use of troubling physical-world analogies and legal theories to interpret the law, which in the past have resulted in some of its most dangerous abuses.

The Van Buren decision is especially good news for security researchers, whose work discovering security vulnerabilities is vital to the public interest but often requires accessing computers in ways that contravene terms of service. Under the Department of Justice’s reading of the law, the CFAA allowed criminal charges against individuals for any website terms of service violation. But a majority of the Supreme Court rejected the DOJ’s interpretation. And although the high court did not narrow the CFAA as much as EFF would have liked, leaving open the question of whether the law requires circumvention of a technological access barrier, it provided good language that should help protect researchers, investigative journalists, and others. 

The CFAA makes it a crime to “intentionally access[] a computer without authorization or exceed[] authorized access, and thereby obtain[] . . . information from any protected computer,” but does not define what authorization means for purposes of exceeding authorized access. In Van Buren, a former Georgia police officer was accused of taking money in exchange for looking up a license plate in a law enforcement database. This was a database he was otherwise entitled to access, and Van Buren was charged with exceeding authorized access under the CFAA. The Eleventh Circuit analysis had turned on the computer owner’s unilateral policies regarding use of its networks, allowing private parties to make EULA, TOS, or other use policies criminally enforceable. 

The Supreme Court rightly overturned the Eleventh Circuit, and held that exceeding authorized access under the CFAA does not encompass “violations of circumstance-based access restrictions on employers’ computers.” Rather, the statute’s prohibition is limited to someone who “accesses a computer with authorization but then obtains information located in particular areas of the computer—such as files, folders, or databases—that are off limits to him.” The Court adopted a “gates-up-or-down” approach: either you are entitled to access the information or you are not. If you need to break through a digital gate to get in, entry is a crime, but if you are allowed through an open gateway, it’s not a crime to be inside.

This means that private parties’ terms of service limitations on how you can use information, or for what purposes you can access it, are not criminally enforceable under the CFAA. For example, if you can look at housing ads as a user, it is not a hacking crime to pull them for your bias-in-housing research project, even if the terms of service forbid it. Van Buren is also good news for port scanning: so long as the computer is open to the public, you don’t have to worry that scanning a port violates the site’s conditions of use.
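For readers unfamiliar with the practice: a port scan simply attempts connections to see which services answer, something a few lines of standard-library Python can illustrate. This is an illustrative sketch, not anything from the opinion itself; the host and port are placeholders, and of course you should only probe machines you have a legitimate reason to test.

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        # connect_ex returns 0 on success instead of raising an exception.
        return s.connect_ex((host, port)) == 0

# e.g. port_open("example.com", 443) checks whether a public web
# server answers on the standard HTTPS port.
```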

While the decision was centered around the interpretation of the statute’s text, the Court bolstered its conclusion with the policy concerns raised by the amici, including a brief EFF filed on behalf of computer security researchers and organizations that employ and support them. The Court’s explanation is worth quoting in depth:

If the “exceeds authorized access” clause criminalizes every violation of a computer-use policy, then millions of otherwise law-abiding citizens are criminals. Take the workplace. Employers commonly state that computers and electronic devices can be used only for business purposes. So on the Government’s reading of the statute, an employee who sends a personal e-mail or reads the news using her work computer has violated the CFAA. Or consider the Internet. Many websites, services, and databases …. authorize a user’s access only upon his agreement to follow specified terms of service. If the “exceeds authorized access” clause encompasses violations of circumstance-based access restrictions on employers’ computers, it is difficult to see why it would not also encompass violations of such restrictions on website providers’ computers. And indeed, numerous amici explain why the Government’s reading [would] criminalize everything from embellishing an online-dating profile to using a pseudonym on Facebook.

This analysis shows the Court recognized the tremendous danger of an overly broad CFAA, and explicitly rejected the Government’s arguments for retaining wide powers tempered only by its prosecutorial discretion.

Left Unresolved: Whether CFAA Violations Require Technical Access Limitations 

The Court’s decision was limited in one important respect. In a footnote, the Court left open the question of whether the enforceable access restriction covers only “technological (or ‘code-based’) limitations on access, or instead also looks to limits contained in contracts or policies,” meaning that the opinion neither adopted nor rejected either path. EFF has argued in courts and legislative reform efforts for many years that it’s not a computer hacking crime unless someone hacks through a technological defense.

This footnote is a bit odd, as the bulk of the majority opinion seems to point toward the law requiring someone to defeat technological limitations on access, and throws shade at criminalizing TOS violations. In most cases, the scope of your access once on a computer is defined by technology, such as an access control list or a requirement to reenter a password. Professor Orin Kerr suggested that this may have been a necessary limitation to build the six-justice majority.

Later in the Van Buren opinion, the Court rejected a Government argument that a rule against “using a confidential database for a non-law-enforcement purpose” should be treated as a criminally enforceable access restriction, different from “using information from the database for a non-law-enforcement purpose” (emphasis original). This makes sense under the “gates-up-or-down” approach adopted by the Court. Together with the policy issues the Court acknowledged regarding enforcing terms of service quoted above, this helps us understand the limitation footnote, suggesting cleverly writing a TOS will not easily turn a conditional rule on why you can access, or what you can do with information later, into a criminally enforceable access restriction.

Nevertheless, leaving the question open means that we will have to litigate whether and under what circumstance a contract or written policy can amount to an access restriction in the years to come. For example, in Facebook v. Power Ventures, the Ninth Circuit found that a cease and desist letter removing authorization was sufficient to create a CFAA violation for later access, even though a violation of the Facebook terms alone was not. Service providers will likely argue that this is the sort of non-technical access restriction that was left unresolved by Van Buren. 

Court’s Narrow CFAA Interpretation Should Help Security Researchers

Even though the majority opinion left this important CFAA question unresolved, the decision still offers plenty of language that will be helpful for later cases on the scope of the statute. That’s because the Van Buren majority’s focus on the CFAA’s technical definitions, and the types of computer access that the law restricts, should provide guidance to lower courts that narrow the law’s reach. 

This is a win because broad CFAA interpretations have in the past often deterred or chilled important security research and investigative journalism. The CFAA put these activities in legal jeopardy in part because courts often struggle when using non-digital legal concepts and physical analogies to interpret the statute. Indeed, one of the principal disagreements between the Van Buren majority and dissent is whether the CFAA should be interpreted based on physical property law doctrines, such as trespass and theft.

The majority opinion ruled that, in principle, computer access is different from the physical world precisely because the CFAA contains so many technical terms and definitions. “When interpreting statutes, courts take note of terms that carry ‘technical meaning[s],’” the majority wrote. 

The rule is particularly true for the CFAA because it focuses on malicious computer use and intrusions, the majority wrote. For example, the term “access” in the context of computer use has its own specific, well established meaning: “In the computing context, ‘access’ references the act of entering a computer ‘system itself’ or a particular ‘part of a computer system,’ such as files, folders, or databases.” Based on that definition, the CFAA’s “exceeding authorized access” restriction should be limited to prohibiting “the act of entering a part of the system to which a computer user lacks access privileges.” 

The majority also recognized that the portions of the CFAA that define damage and loss are premised on harm to computer files and data, rather than general non-digital harm such as trespassing on another person’s property: “The statutory definitions of ‘damage’ and ‘loss’ thus focus on technological harms—such as the corruption of files—of the type unauthorized users cause to computer systems and data,” the Court wrote. This is important because loss and damage are prerequisites to civil CFAA claims, and the ability of private entities to enforce the CFAA has been a threat that deters security research when companies might rather their vulnerabilities remain unknown to the public. 

Because the CFAA’s definitions of loss and damages focus on harm to computer files, systems, or data, the majority wrote that they “are ill fitted, however, to remediating ‘misuse’ of sensitive information that employees may permissibly access using their computers.”

The Supreme Court’s Van Buren decision rightly limits the CFAA’s prohibition on “exceeding authorized access” to accessing particular computer files, services, or other parts of the computer that are otherwise off-limits. And by overturning the Eleventh Circuit decision that permitted CFAA liability based on violating a website’s terms of service or an employer’s computer use restrictions, the Court ensured that lots of important, legitimate computer use is not a crime.

But there is still more work to be done to ensure that computer crime laws are not misused against researchers, journalists, activists, and everyday internet users. As longtime advocates against overbroad interpretations of the CFAA, EFF will continue to lead efforts to push courts and lawmakers to further narrow the CFAA and similar state computer crime laws so they can no longer be misused.

Related Cases: Van Buren v. United States
Aaron Mackey

California Has $7 Billion for Broadband. Sacramento May Just Sit On It

3 months 2 weeks ago

It’s hard to believe that, when Governor Newsom identifies a total of $7 billion for California’s legislature to spend on broadband access—a mix of state surplus dollars and federal rescue money to invest in broadband infrastructure—the legislature would do nothing.

It is hard to believe that, when handed an amount that would finance a fiber connection to the Internet for every single Californian over the next five years, allow the state to address an urgent broadband crisis worsened by the pandemic, and give us a way to start ending the digital divide now, the legislature would rather waste time we can’t afford by thinking it over.

But that is exactly what California’s legislature has proposed this week. Can you believe it?

Tucked away on page 12 of this 153-page budget document from the legislature this week is the following plan for Governor Newsom’s proposal to help connect every Californian to 21st-century access:

Broadband. Appropriates $7 billion over a multi-year period for broadband infrastructure and improved access to broadband services throughout the state. Details will continue to be worked out through three party negotiations. Administrative flexibilities will enable the appropriated funds to be accelerated to ensure they are available as needed to fund the expansion and improvements.

What this says is that the legislature wants to approve $7 billion for broadband infrastructure but does not want to authorize the governor to carry out his proposal any time soon.

There’s no excuse for this. Lawmakers have been given plenty of detail on this proposal, and the public would tell them we need action right now. This cannot be what passes in Sacramento next week as part of establishing California’s budget. At the very least, the legislature needs to give the Governor clear authority to begin planning the deployment of public fiber infrastructure to all Californians. This is a long process, requiring feasibility studies, environmental assessments, contracting with construction crews, and purchasing materials. All of this takes months before any construction can even start, and delaying even this first basic step pushes back the date we end the digital divide in California.

Wasting Time Risks Federal Money and Will Perpetuate the Digital Divide

Federal rescue dollars must be spent quickly, or they will be rescinded back to the federal government. Those are explicit rules from Congress and the Biden Administration for the rescue funds issued these last few months. Right now, a global rush for fiber broadband deployment is putting a lot of pressure on the manufacturers and workforce that build fiber-optic wires. In other words, more and more of the world is catching on to what EFF stated years ago: 21st-century broadband access is built on fiber optics. Each day California sits out deploying this infrastructure puts us further back in the queue and further delays actual construction.

Therefore, if Sacramento does not immediately authorize at least the planning phase of building out a statewide middle mile open-access fiber network—along with empowering local governments, non-profits, and cooperatives to draft their own fiber plans to deploy last mile connectivity—then we risk losing that valuable federal money. The state has a real opportunity, but only if it acts now, not months from now. California even has a chance to jump the line ahead of the rest of the country as Congress continues to debate about its own broadband infrastructure plan.

For the state that has made famous the little girls doing homework in fast-food parking lots because they lacked affordable robust internet access at home, it is irresponsible to look at $7 billion and not start the process to solve the problem. That’s exactly what will happen if the California legislature doesn’t hear from you.

Call your Assemblymember and Senator now to demand they approve Governor Newsom’s broadband plan next week to fix the digital divide now. This is the time to act on ending the digital divide, not continue talking about it.

Ernesto Falcon

Organizing in the Public Interest: MusicBrainz

3 months 2 weeks ago

This blog post is part of a series, looking at the public interest internet—the parts of the internet that don’t garner the headlines of Facebook or Google, but quietly provide public goods and useful services without requiring the scale or the business practices of the tech giants. Read our first two parts or our introduction.

Last time, we saw how much of the early internet’s content was created by its users—and subsequently purchased by tech companies. By capturing and monopolizing this early data, these companies were able to monetize and scale this work faster than the network of volunteers that first created it for use by everybody. It’s a pattern that has happened many times in the network’s history: call it the enclosure of the digital commons. Despite this familiar story, the older public interest internet has continued to survive side-by-side with the tech giants it spawned: unlikely and unwilling to pull in the big investment dollars that could lead to accelerated growth, but also tough enough to persist in its own ecosystem. Some of these projects you’ve heard of—Wikipedia, or the GNU free software project, for instance. Some, because they fill smaller niches and aren’t visible to the average Internet user, are less well-known. The public interest internet fills the spaces between tech giants like dark matter, invisibly holding the whole digital universe together.

Sometimes, the story of a project’s switch to the commercial model is better known than its continuing existence in the public interest space. The notorious example in our third post was the commercialization of the publicly-built CD Database (CDDB): when a commercial offshoot of this free, user-built database, Gracenote, locked down access, forks like freedb and gnudb continued to offer the service free to its audience of participating CD users.

Gracenote’s co-founder, Steve Scherf, claimed that without commercial investment, CDDB’s free alternatives were doomed to “stagnation”. While alternatives like gnudb have survived, it’s hard to argue that either freedb or gnudb have innovated beyond their original goal of providing and collecting CD track listings. Then again, that’s exactly what they set out to do, and they’ve done it admirably for decades since.

But can innovation and growth take place within the public interest internet? CDDB’s commercialization parlayed its initial market into a variety of other music-based offerings. Their development of these products led to them being purchased, at various points, by AV manufacturer Escient, Sony, Tribune Media, and most recently, Nielsen. Each sale made money for its investors. Can a free alternative likewise build on its beginnings, instead of just preserving them for its original users?

MusicBrainz, a Community-Driven Alternative to Gracenote

Among the CDDB users who were thrown by its switch to a closed system in the 1990s was Robert Kaye. Kaye was a music lover and, at the time, a coder working on one of the earliest MP3 encoders and players at Xing. Now he and a small staff work full-time on MusicBrainz, a community-driven alternative to Gracenote. (Disclosure: EFF special advisor Cory Doctorow is on the board of MetaBrainz, the non-profit that oversees MusicBrainz.)

“We were using CDDB in our service,” he told me from his home in Barcelona. “Then one day, we received a notice that said you guys need to show our [Escient, CDDB’s first commercial owner] logo when a CD is looked up. This immediately screwed over blind users who were using a text interface of another open source CD player that couldn’t comply with the requirement. And it pissed me off because I’d typed in a hundred or so CDs into that database… so that was my impetus to start the CD index, which was the precursor to MusicBrainz.”

MusicBrainz has continued ever since to offer a CDDB-compatible CD metadata database, free for anyone to use. The bulk of its user-contributed data has been put into the public domain, and supplementary data—such as extra tags added by volunteers—is provided under a non-commercial, attribution license. 
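As a rough sketch of how a client might use that free service today: MusicBrainz’s web service (WS/2) can resolve a CD’s disc ID to its releases over plain HTTP, with no API key required. The endpoint shape below follows the public WS/2 documentation as we understand it; the application name and contact address in the User-Agent header are placeholders you would replace with your own.

```python
import json
from urllib.parse import quote
from urllib.request import Request, urlopen

API_ROOT = "https://musicbrainz.org/ws/2"

def discid_lookup_url(disc_id: str) -> str:
    """Build the WS/2 URL that resolves a CD disc ID to its releases,
    much as a CDDB lookup once did."""
    return f"{API_ROOT}/discid/{quote(disc_id)}?fmt=json"

def lookup_disc(disc_id: str) -> dict:
    # MusicBrainz asks clients to identify themselves via User-Agent.
    req = Request(discid_lookup_url(disc_id),
                  headers={"User-Agent": "my-cd-player/0.1 (me@example.com)"})
    with urlopen(req) as resp:
        return json.load(resp)

# releases = lookup_disc("<disc id computed from the CD's table of contents>")
```

Because the data is in the public domain and the endpoint is open, this is all a hobbyist CD player needs—the same low barrier to entry that the original CDDB offered.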

Over time, MusicBrainz has expanded by creating other publicly available, free-to-use databases of music data, often as a fallback for when other projects commercialize and lock down. For instance, Audioscrobbler was an independent system that collected information on what music you’ve listened to (no matter what platform you heard it on) to learn and provide recommendations based on its users’ contributions, but under your control. It was merged into Last.fm, an early Spotify-like streaming service, which was then sold to CBS. When CBS seemed to be neglecting the “scrobbling” community, MusicBrainz created ListenBrainz, which re-implemented features that had been lost over time. The plan, says Kaye, is to create a similarly independent recommendation system.
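The listen-tracking side can be sketched the same way. ListenBrainz accepts “listens” as JSON posted to its submission endpoint with a user token; the payload shape below follows the public ListenBrainz API documentation as we understand it, and the artist and track values are of course placeholders.

```python
import json
import time
from typing import Optional

def build_listen(artist: str, track: str,
                 listened_at: Optional[int] = None) -> dict:
    """Assemble a single 'listen' in the shape ListenBrainz's
    /1/submit-listens endpoint expects."""
    return {
        "listen_type": "single",
        "payload": [{
            # Unix timestamp of when the track was heard.
            "listened_at": listened_at or int(time.time()),
            "track_metadata": {"artist_name": artist, "track_name": track},
        }],
    }

# POST json.dumps(build_listen("Some Artist", "Some Track")) to
# https://api.listenbrainz.org/1/submit-listens with an
# "Authorization: Token <your user token>" header.
```

The point of the open format is the one the article makes: your listening history stays exportable and under your control, whatever player submitted it.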

While the new giants of Internet music—Spotify, Apple Music, Amazon—have been building closed machine-learning models to data-mine their users, and their musical interests, MusicBrainz has been working in the open with Barcelona's Pompeu Fabra University to derive new metadata from the MusicBrainz communities’ contributions. Automatic deductions of genre, mood, beats-per-minute and other information are added to the AcousticBrainz database for everyone to use. These algorithms learn from their contributors’ corrections, and the fixes they provide are added to the commonwealth of public data for everyone to benefit from.

MusicBrainz’ aspirations sound in synchrony with the early hopes of the Internet, and after twenty years, they appear to have proven that the Internet can support and expand a long-term public good, as opposed to a proprietary, venture capital-driven growth model. But what’s to stop the organization from going the same way as those other projects with their lofty goals? Kaye works full-time on MusicBrainz along with eight other employees: what’s to say they’re not simply profiteering from the wider unpaid community, the same way larger companies like Google benefit from their users’ contributions?

MusicBrainz has some good old-fashioned pre-Internet institutional protections. It is managed as a 501(c) non-profit, the MetaBrainz Foundation, which places some theoretical constraints on how it might be bought out. Another old Internet value is radical transparency, and the organization has that in spades. All of its financial transactions, from profit and loss sheets to employment costs, to its server outlay and board meeting notes are published online.

Another factor, says Kaye, is keeping a clear delineation between the work done by MusicBrainz’s paid staff and the work of the MusicBrainz volunteer community. “My team should work on the things that aren’t fun to work on. The volunteers work on the fun things,” he says. When you're running a large web service built on the contributions of a community, there’s no end of volunteers for interesting projects, but, as Kaye notes, “there's an awful lot of things that are simply not fun, right? Our team is focused on doing these things.” It helps that MetaBrainz, the foundation, hires almost exclusively from long-term MusicBrainz community members.

Perhaps MusicBrainz’s biggest defense against its own decline is the software (and data) licenses it uses for its databases and services. In the event of the organization’s separation from the desires of its community, all its composition and output—its digital assets, its institutional history—are laid out so that the community can clone its structure and create another, near-identical institution closer to its needs. The code is open source; the data is free to use; and the radical transparency of the financial structures means that the organization itself can be reconstructed from scratch if need be.

Such forks are painful. Anyone who has recently watched the volunteer staff and community of Freenode, the distributed Internet Relay Chat (IRC) network, part ways with the network’s owner and start again at Libera.chat, will have seen this. Forks can be divisive in a community, and can be reputationally devastating to those who are abandoned by the community they claimed to lead and represent. MusicBrainz staff’s livelihood depends on its users in a way that even the most commercially sensitive corporation does not. 

It’s unlikely that a company would place its future viability so directly in the hands of its users. But it’s this self-imposed sword of Damocles hanging over Rob Kaye and his staff’s heads that fuels the community’s trust in their intentions.

Where Does the Money Come From?

Open licenses, however, can also make it harder for projects to gather funding to persist. Where does MusicBrainz' money come from? If anyone can use their database for free, why don’t all their potential revenue sources do just that, free-riding off the community without ever paying back? Why doesn’t a commercial company reproduce what MusicBrainz does, using the same resources that a community would use to fork the project?

MusicBrainz’s open finances show that, despite those generous licenses, they’re doing fine. The project’s transparency lets us see that it brought in around $400K in revenue in 2020, and had $400K in costs (it experienced a slight loss, but other years have been profitable enough to make this a minor blip). The revenue comes as a combination of small donors and larger sponsors, including giants like Google, who use MusicBrainz’ data and pay for a support contract.

Given that those sponsors could free-ride, how does Kaye get them to pay? He has some unorthodox strategies (most famously, sending a cake to Amazon to get them to honor a three-year-old invoice), but the most common reason seems to be that an open database maintainer that is responsive to a wider community is also easier for commercial concerns to interface with, both technically and contractually. Technologists building out a music tool or service turn to MusicBrainz for the same reason as they might pick an open source project: it’s just easier to slot it into their system without having to jump through authentication hoops or begin negotiations with a sales team. Then, when a company forms around that initial hack, its executives eventually realize that they now have a real dependency on a project with whom they have no contractual or financial relationship. A support contract means that they have someone to call up if it goes down; a financial relationship means that it’s less likely to disappear tomorrow.

Again, commercial alternatives may make the same offer, but while a public interest non-profit like MusicBrainz might vanish if it fails its community, or simply runs out of money, those other private companies may well have other reasons to exit their commitments with their customers. When Sony bought Gracenote, it was presumably partly so that they could support their products that used Gracenote’s databases. After Sony sold Gracenote, they ended up terminating their own use of the databases. Sony announced to their valued customers in 2019 that Sony Blu-Ray and Home Theater products would no longer have CD and DVD recognition features. The same thing happened to Sony’s mobile Music app in 2020, which stopped being able to recognize CDs when it was cut off from Gracenote’s service. We can have no insight into these closed, commercial deals, but we can presume that Sony and Gracenote’s new owner could not come to an amicable agreement. 

By contrast, if Sony had used MusicBrainz’ data, they would have been able to carry on regardless. They’d be assured that no competitor would buy out MusicBrainz from under them, or lock their products out of an advertised feature. And even if MusicBrainz the non-profit died, there would be a much better chance that an API-compatible alternative would spring up from the ashes. If it was that important, Sony could have supported the community directly. As it is, Sony paid $260 million for Gracenote. For their CD services, at least, they could have had a more stable service deal with MusicBrainz for $1500 a month.

Over two decades after the user rebellion that created it, MusicBrainz continues to tick along. Its staff is drawn from music fans around the world, and meets up every year with a conference paid for by the MusicBrainz Foundation. Its contributors know that they can always depend on its data staying free; its paying customers know that they can always depend on its data being usable in their products. MusicBrainz staff can be assured that they won’t be bought up by big tech, and they can see the budget that they have to work with.

It’s not perfect. A transparent non-profit that aspires to internet values can be as flawed as any other. MusicBrainz suffered a reputational hit last year when personal data leaked from its website, for instance. But by continuing to exist, even with such mistakes, and despite multiple economic downturns, it demonstrates that a non-profit, dedicated to the public interest, can thrive without stagnating, or selling its users out.

But, but, but. While it’s good to know public interest services are successful in niche territories like music recognition, what about the parts of the digital world that really seem to need a more democratic, decentralized alternative—and yet notoriously lack them? Sites like Facebook, Twitter, and Google have not only built their empires from others’ data, they have locked their customers in, apparently with no escape. Could an alternative, public interest social network be possible? And what would that look like?

We'll cover these in a later part of our series. (For a sneak preview, check out the recorded discussions at “Reimagining the Internet”, from our friends at the Knight First Amendment Institute at Columbia University and the Initiative on Digital Public Infrastructure at the University of Massachusetts, Amherst, which explore in-depth many of the topics we’ve discussed here.)

Danny O'Brien

If Not Overturned, a Bad Copyright Decision Will Lead Many Americans to Lose Internet Access

3 months 2 weeks ago

This post was co-written by EFF Legal Intern Lara Ellenberg

In going after internet service providers (ISPs) for the actions of just a few of their users, Sony Music, other major record labels, and music publishing companies have found a way to cut people off of the internet based on mere accusations of copyright infringement. When these music companies sued Cox Communications, an ISP, the court got the law wrong. It effectively decided that the only way for an ISP to avoid being liable for infringement by its users is to terminate a household or business’s account after a small number of accusations—perhaps only two. The court also allowed a damages formula that can lead to nearly unlimited damages, with no relationship to any actual harm suffered. If not overturned, this decision will lead to an untold number of people losing vital internet access as ISPs start to cut off more and more customers to avoid massive damages.

EFF, together with the Center for Democracy & Technology, the American Library Association, the Association of College and Research Libraries, the Association of Research Libraries, and Public Knowledge filed an amicus brief this week urging the U.S. Court of Appeals for the Fourth Circuit to protect internet subscribers’ access to essential internet services by overturning the district court’s decision.

The district court agreed with Sony that Cox is responsible when its subscribers—home and business internet users—infringe the copyright in music recordings by sharing them on peer-to-peer networks. It effectively found that Cox didn’t terminate accounts of supposedly infringing subscribers aggressively enough. An earlier lawsuit found that Cox wasn’t protected by the Digital Millennium Copyright Act’s (DMCA) safe harbor provisions that protect certain internet intermediaries, including ISPs, if they comply with the DMCA’s requirements. One of those requirements is implementing a policy of terminating “subscribers and account holders … who are repeat infringers” in “appropriate circumstances.” The court ruled in that earlier case that Cox didn’t terminate enough customers who had been accused of infringement by the music companies.

In this case, the same court found that Cox was on the hook for the copyright infringement of its customers and upheld the jury verdict of $1 billion in damages—by far the largest amount ever awarded in a copyright case.

The District Court Got the Law Wrong

When an ISP isn’t protected by the DMCA’s safe harbor provision, it can sometimes be held responsible for copyright infringement by its users under “secondary liability” doctrines. The district court found Cox liable under both varieties of secondary liability—contributory infringement and vicarious liability—but misapplied both of them, with potentially disastrous consequences.

An ISP can be contributorily liable if it knew that a customer infringed on someone else’s copyright but didn’t take “simple measures” available to it to stop further infringement. Judge O’Grady’s jury instructions wrongly implied that because Cox didn’t terminate infringing users’ accounts, it failed to take “simple measures.” But the law doesn’t require ISPs to terminate accounts to avoid liability. The district court improperly imported a termination requirement from the DMCA’s safe harbor provision (which was already knocked out earlier in the case). In fact, the steps Cox took short of termination actually stopped most copyright infringement—a fact the district court simply ignored.

The district court also got it wrong on vicarious liability. Vicarious liability comes from the common law of agency. It holds that a person who is a step removed from copyright infringement (the “principal,” for example, a flea market operator) can be held liable for the copyright infringement of their “agent” (for example, someone who sells bootleg DVDs at that flea market), when the principal had the “right and ability to supervise” the agent. In this case, the court decided that because Cox could terminate accounts accused of copyright infringement, it had the ability to supervise those accounts. But that’s not how other courts have ruled. For example, the Ninth Circuit decided in 2019 that Zillow was not responsible when some of its users uploaded copyrighted photos to real estate listings, even though Zillow could have terminated those users’ accounts. In reality, ISPs don’t supervise the Internet activity of their users. That would require a level of surveillance and control that users won’t tolerate, and that EFF fights against every day.

The consequence of getting the law wrong on secondary liability here, combined with the $1 billion damage award, is that ISPs will terminate accounts more frequently to avoid massive damages, and cut many more people off from the internet than is necessary to actually address copyright infringement.

The District Court’s Decision Violates Due Process and Harms All Internet Users

Not only did the decision get the law on secondary liability wrong, it also offends basic ideas of due process. In a different context, the Supreme Court decided that civil damages can violate the Constitution’s due process requirement when the amount is excessive, especially when it fails to consider the public interests at stake. In the case against Cox, the district court ignored both the fact that a $1 billion damages award is excessive, and that its decision will cause ISPs to terminate accounts more readily and, in the process, cut off many more people from the internet than necessary.

Having robust internet access is an important public interest, but when ISPs start over-enforcing to avoid having to pay billion-dollar damages awards, that access is threatened. Millions of internet users rely on shared accounts, for example at home, in libraries, or at work. If ISPs begin to terminate accounts more aggressively, the impact will be felt disproportionately by the many users who have done nothing wrong but only happen to be using the same internet connection as someone who was flagged for copyright infringement.

More than a year after the start of the COVID-19 pandemic, it's more obvious than ever that internet access is essential for work, education, social activities, healthcare, and much more. If the district court’s decision isn’t overturned, many more people will lose access in a time when no one can afford not to use the internet. That harm will be especially felt by people of color, poorer people, women, and those living in rural areas—all of whom rely disproportionately on shared or public internet accounts. And since millions of Americans have access to just a single broadband provider, losing access to a (shared) internet account essentially means losing internet access altogether. This loss of broadband access because of stepped-up termination will also worsen the racial and economic digital divide. This is not just unfair to internet users who have done nothing wrong, but also overly harsh in the case of most copyright infringers. Being effectively cut off from society when an ISP terminates your account is excessive, given the actual costs of non-commercial copyright infringement to large corporations like Sony Music.

It's clear that Judge O’Grady misunderstood the impact of losing Internet access. In a hearing on Cox’s earlier infringement case in 2015, he called concerns about losing access “completely hysterical,” and compared them to “my son complaining when I took his electronics away when he watched YouTube videos instead of doing homework.” Of course, this wasn’t a valid comparison in 2015 and it rightly sounds absurd today. That’s why, as the case comes before the Fourth Circuit, we’re asking the court to get the law right and center the importance of preserving internet access in its decision.

Mitch Stoltz

Supreme Court Overturns Overbroad Interpretation of CFAA, Protecting Security Researchers and Everyday Users

3 months 2 weeks ago

EFF has long fought to reform vague, dangerous computer crime laws like the CFAA. We're gratified that the Supreme Court today acknowledged that overbroad application of the CFAA risks turning nearly any user of the Internet into a criminal based on arbitrary terms of service. We remember the tragic and unjust results of the CFAA's misuse, such as the death of Aaron Swartz, and we will continue to fight to ensure that computer crime laws no longer chill security research, journalism, and other novel and interoperable uses of technology that ultimately benefit all of us.

EFF filed briefs both encouraging the Court to take today's case and urging it to make clear that violating terms of service is not a crime under the CFAA. In the first, filed alongside the Center for Democracy and Technology and New America’s Open Technology Institute, we argued that Congress intended to outlaw computer break-ins that disrupted or destroyed computer functionality, not anything that the service provider simply didn’t want to have happen. In the second, filed on behalf of computer security researchers and organizations that employ and support them, we explained that the broad interpretation of the CFAA puts computer security researchers at legal risk for engaging in socially beneficial security testing through standard security research practices, such as accessing publicly available data in a manner beneficial to the public yet prohibited by the owner of the data. 

Today's win is an important victory for users everywhere. The Court rightly held that exceeding authorized access under the CFAA does not encompass “violations of circumstance-based access restrictions on employers’ computers.” Thus, “an individual ‘exceeds authorized access’ when he accesses a computer with authorization but then obtains information located in particular areas of the computer—such as files, folders, or databases—that are off limits to him.” Rejecting the Government’s reading allowing CFAA charges for any website terms of service violation, the Court adopted a “gates-up-or-down” approach: either you are entitled to access the information or you are not. This means that private parties’ terms of service limitations on how you can use information, or for what purposes you can access it, are not criminally enforced by the CFAA.

Read our detailed analysis of the decision here

Related Cases: Van Buren v. United States
Andrew Crocker

The EU Commission Refuses to Let Go of Filters

3 months 2 weeks ago

The EU copyright directive has caused more controversy than any other proposal in recent EU history - and for good reason. In abandoning traditional legal mechanisms to tackle copyright infringement online, Article 17 (formerly Article 13) of the directive introduced a new liability regime for online platforms, supposedly in order to support creative industries, that will have disastrous consequences for users. In a nutshell: To avoid being held responsible for illegal content on their services, online platforms must act as copyright cops, bending over backwards to ensure infringing content is not available on their platforms. As a practical matter (as EFF and other user rights advocates have repeatedly explained), this means Article 17 is a filtering mandate.

But all was not lost - the EU Commission had an opportunity to stand up for users and independent creators by mitigating Article 17's threat to free expression. Unfortunately, it has chosen instead to stand up for a small but powerful group of copyright maximalists. 

The EU Commission's Guidance Document: Civil Society Concerns Take a Backseat

EU "Directives" are not automatically applicable laws. Once a directive is passed, EU member states must “transpose” it into national law. These transpositions are now the center of the fight against copyright upload filters. In several meetings of the EU Commission's Stakeholder Dialogue, and through consultations developing guidelines for the application of Article 17 (which must be implemented in national laws by June 7, 2021), EFF and other civil society groups stressed that users' rights to free speech are not negotiable and must apply when they upload content, not during a later complaint stage.

The first draft of the guidance document seemed to recognize those concerns and prioritize user rights. But the final result, issued today, is disappointing. On the plus side, the EU Commission stresses that Article 17 does not mandate the use of specific technology to demonstrate "best efforts" to ensure users don't improperly upload copyright-protected content on platforms. However, the guidance document failed to state clearly that mandated upload filters undermine the fundamental rights protection of users. The EU Commission differentiates "manifestly" infringing uploads from other user uploads, but stresses the importance of rightsholders' blocking instructions, and the need to ensure they do not suffer "economic harm." And rather than focusing on how to ensure legitimate uses such as quotations or parodies, the Commission advises that platforms must give heightened attention to "earmarked" content. As a practical matter, that "heightened attention" is likely to require using filters to prevent users from uploading such content.

We appreciate that digital rights organizations had a seat at the stakeholder dialogue table, even though they were outnumbered by rightsholders from the music and film industries and representatives of big tech companies. And the guidance document contains a number of EFF suggestions for implementation, such as clarifying that specific technological solutions are not mandated, holding smaller platforms to a lower standard of "best efforts," and respecting data protection law when interpreting Article 17. However, on the most crucial element - the risk of over-blocking of legitimate user content - the Commission simply describes "current market practices," including the use of content recognition technologies that inevitably over-block. Once again, user rights and exceptions take a backseat.

This battle to protect freedom of expression is far from over. Guidance documents are non-binding, and the EU Court of Justice will have the last say on whether Article 17 will lead to censorship and limit freedom of expression rights. Until then, national governments do not have discretion to transpose the requirements of Article 17 as they see fit, but an obligation to use the legislative leeway available to implement them in line with fundamental rights.

Christoph Schmon