Police Robots Are Not a Selfie Opportunity, They’re a Privacy Disaster Waiting to Happen

The arrival of government-operated autonomous police robots does not look like predictions in science fiction movies. An army of robots with gun arms is not kicking down your door to arrest you. Instead, a robot snitch that looks like a rolling trash can is programmed to decide whether a person looks suspicious—and then call the human police on them. Police robots may not be able to hurt people like armed predator drones used in combat—yet—but as history shows, calling the police on someone can prove equally deadly. 

Long before the 1987 movie RoboCop, even before Karel Čapek invented the word robot in 1920, police were already looking for ways to be everywhere at once. Widespread security cameras are one solution—but even a blanket of CCTV cameras couldn’t follow a suspect into every nook of public space. Thus, the vision of a police robot continued as a dream, until now. Whether they look like Boston Dynamics’ robodogs or Knightscope’s rolling pickles, robots are coming to a street, shopping mall, or grocery store near you.

The Orwellian menace of snitch robots might not be immediately apparent. Robots are fun. They dance. You can take selfies with them. This is by design. Both police departments and the companies that sell these robots know that their greatest contributions aren’t just surveillance, but also goodwill. In one brochure Knightscope sent to University of California-Hastings, a law school in the center of San Francisco, the company advertises its robot’s activity in a Los Angeles shopping district called The Bloc. It’s unclear if the robot stopped any robberies, but it did garner over 100,000 social media impressions and 426 comments. Knightscope claims the robot’s 193 million overall media impressions were worth over $5.8 million. The Bloc held a naming contest for the robot, and said it has a “cool factor” missing from traditional beat cops and security guards.

The Bloc/Knightscope promotional material released via public records request by UC-Hastings

As of February 2020, Knightscope had around 100 robots deployed 24/7 throughout the United States. In how many of these communities did neighbors or community members get a say as to whether or not they approved of the deployment of these robots?

But in this era of long-overdue conversations about the role of policing in our society—and in which city after city is reclaiming privacy by restricting police surveillance technologies—these robots are just a more playful way to normalize the panopticon of our lives.

Police Robots Are Surveillance

Knightscope’s robots need cameras to navigate and traverse the terrain, but that’s not all their sensors are doing. According to the proposal that the police department of Huntington Park, California, sent to the mayor and city council, these robots are equipped with many infrared cameras capable of reading license plates. They also have wireless technology “capable of identifying smartphones within its range down to the MAC and IP addresses.” 

The next time you’re at a protest and are relieved to see a robot rather than a baton-wielding officer, know that that robot may be using the IP address of your phone to identify your participation. This makes protesters vulnerable to reprisal from police and thus chills future exercise of constitutional rights. "When a device emitting a Wi-Fi signal passes within a nearly 500 foot radius of a robot,” the company explains on its blog, “actionable intelligence is captured from that device including information such as: where, when, distance between the robot and device, the duration the device was in the area and how many other times it was detected on site recently."
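
To make that concrete, here is a rough sketch of the kind of passive Wi-Fi monitoring the company describes. This is not Knightscope’s code; the interface name and logging are illustrative assumptions, and it requires a wireless card in monitor mode. Phones regularly broadcast probe requests that reveal their hardware (MAC) addresses, which is all a stationary robot needs to log which devices were nearby, when, and for how long.

```python
# Illustrative sketch only: passive Wi-Fi device logging of the sort described
# above. Not Knightscope's implementation; "wlan0mon" is an assumed monitor-mode
# interface name. Requires scapy and a wireless card in monitor mode.
from datetime import datetime
from scapy.all import sniff
from scapy.layers.dot11 import Dot11ProbeReq

seen = {}  # MAC address -> timestamps when the device was observed nearby

def log_probe(pkt):
    if pkt.haslayer(Dot11ProbeReq):
        mac = pkt.addr2  # transmitter MAC of the probing phone
        seen.setdefault(mac, []).append(datetime.now())
        rssi = getattr(pkt, "dBm_AntSignal", None)  # rough proxy for distance
        print(f"{mac} seen {len(seen[mac])} time(s), signal={rssi}")

sniff(iface="wlan0mon", prn=log_probe, store=False)
```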

In Spring 2019, the company also announced it was developing face recognition so that robots would be able to “detect, analyze and compare faces.” EFF has long proposed a complete ban on police use of face recognition technology. 

Who Gets Reprimanded When a Police Robot Makes a Bad Decision? 

Knightscope’s marketing materials and media reporting suggest the technology can effectively recognize “suspicious” packages, vehicles, and people. 

But when a robot is scanning a crowd for someone or something suspicious, what is it actually looking for? It’s unclear what the company means. The decision to characterize certain actions and attributes as “suspicious” has to be made by someone. If robots are designed to think people wearing hoods are suspicious, they may target youth of color. If robots are programmed to zero in on people moving quickly, they may harass a jogger, or a pedestrian on a rainy day. If the machine has purportedly been taught to identify criminals by looking at pictures of mugshots, then you have an even bigger problem. Racism in the criminal justice system has all but assured that any machine learning program taught to see “criminals” based on crime data will inevitably see people of color as suspicious.

A robot’s machine learning and so-called suspicious behavior detection will lead to racial profiling and other unfounded harassment. This raises the question: Who gets reprimanded if a robot improperly harasses an innocent person, or calls the police on them? Does the robot? The people who train or maintain the robot? When state violence is unleashed on a person because a robot falsely flagged them as suspicious, “changing the programming” of the robot and then sending it back onto the street will be little solace for a victim hoping that it won’t happen again. And when programming errors cause harm, who will review changes to make sure they can address the real problem?

These are all important questions to ask yourselves, and your police and elected officials, before taking a selfie with a rolling surveillance robot. 

Matthew Guariglia

Oakland Privacy and the People of Vallejo Prevail in the Fight For Surveillance Accountability

Just as the 2020 holiday season was beginning in earnest, Solano Superior Court Judge Bradley Nelson upheld the gift of surveillance accountability that the California State Legislature provided state residents when it passed 2015's Senate Bill 741 (Cal. Govt. Code § 53166). Judge Nelson's order brought positive closure to a battle that began last March, when Electronic Frontier Alliance member Oakland Privacy notified the Vallejo City Council and Mayor that their police department’s proposal to acquire a Cell Site Simulator (CSS) violated California state law.

Introduced by then state-senator Jerry Hill, SB 741 requires an open and transparent process before a local government agency in California may acquire CSS technology. EFF explained this in our own letter to the Vallejo Mayor and City Council days after the illegal purchase had been approved. Specifically, the law requires an agency to write, and publish online for public review, a policy that ensures "the collection, use, maintenance, sharing, and dissemination of information gathered through the use of cellular communications interception technology complies with all applicable law and is consistent with respect for an individual's privacy and civil liberties."

Despite notice from Oakland Privacy that the proposal violated SB 741, the Vallejo City Council on March 24, 2020, authorized their police department to purchase CSS technology from KeyW Corporation. Meanwhile, the City and the nation were adapting to shelter in place protocols intended to suppress the spread of COVID-19, which limited public participation in Vallejo’s CSS proposal.

In his ruling, Solano County Superior Court Judge Bradley Nelson reasoned:

“Respondent had a duty to obey [SB 741] by passing a resolution or ordinance specifically approving a particular policy governing the use of the [CSS] device it purchased. Respondent breached that duty by simply delegating creation of that privacy policy to its police department without an opportunity for public comment on the policy before it was adopted. Because any such policy's principal purpose is to safeguard, within acceptable limitations, the privacy and civil liberties of the members of the public whose cellular communications are intercepted, public comment on any proposed policy before it is adopted also has a constitutional dimension.”

In a statement released following the judge's ruling, Oakland Privacy's research director Mike Katz-Lacabe explained the group's motivation for bringing the lawsuit: "to protect the rights of residents to learn about the surveillance equipment used by their local police and to make sure their elected officials provide meaningful oversight over equipment use.” He continued: “Senator Hill's 2015 legislation had those goals, and citizen's groups like ours are taking the next step to make sure that municipalities comply with state law..." Oakland Privacy and two Vallejo residents (Solange Echeverria, a journalist, and Dan Rubins, CEO of Legal Robot) filed the suit on May 21, 2020, requesting the judicial mandate for a public process per state law.

The City of Vallejo initially contested the lawsuit, but after a tentative ruling at the end of September in favor of Oakland Privacy, the City brought the policy back for a public hearing on October 27. On November 17, the policy returned for a second public hearing to address objections to the policy from Oakland Privacy, the ACLU of Northern California, and EFF. Among the changes were prohibitions against surveilling First Amendment-related activities and sharing data with federal immigration authorities, enhanced public logs, and Council oversight of software or hardware upgrades.

This is a significant victory, and not just for Oakland Privacy and the people of Vallejo. The power to decide whether these tools are acquired and, if so, how they are utilized should not rest unilaterally with agency executives. States, counties, cities, and transit agencies from San Francisco to Cambridge have adopted laws to ensure surveillance technology can't be acquired or used before a policy is put in writing and approved by an elected body—after they've heard from the affected public. We applaud Oakland Privacy for taking a stand against law enforcement circumventing democratic control over surveillance technologies used in our communities.

Nathan Sheard

COVID-19 and Surveillance Tech: Year in Review 2020

Location tracking apps. Spyware to enforce quarantine. Immunity passports. Throughout 2020, governments around the world deployed invasive surveillance technologies to contain the COVID-19 outbreak.

But heavy-handed tactics like these undercut public trust in government, precisely when trust is needed most. They also invade our privacy and chill our free speech. And all too often, surveillance technologies disparately burden people of color.

In the United States, EFF and other digital rights advocates turned back some of the worst proposals. But they’ll be back in 2021. Until the pandemic ends, we must hold the line against ill-considered surveillance technologies.

Automated contact tracing apps

Contact tracing is a common public health response to contagious disease. In its traditional form, officials interview an infected person to determine who they had contact with, and then interview those people, too. Many have sought to automate this process with new technologies. But an app will not save us.

Some proposals would be simultaneously privacy-invasive and ineffective. For example, tracking our location with GPS or cell-site location information (CSLI) would expose whether we attended a union meeting or a BLM rally. That’s why police need a warrant to seize it. But it is not sufficiently granular to show whether two people were close enough to transmit the virus: the CDC recommends six feet of social distance, but CSLI is only accurate to a half mile and GPS to 16 feet. So EFF opposes location tracking. Yet some countries are using it.

Another approach is tracking our proximity to others by measuring Bluetooth signal strength. If two people install compatible proximity apps, and come close enough together to transmit the virus, then their apps will exchange digital tokens. Later, if one becomes ill, the other can be notified.

Proximity tracking might or might not help at the margins. It will be over-inclusive: two people standing a few feet apart might be separated by a wall. It also will be under-inclusive: many people don’t have smartphones, and many more won’t use a proximity app. Moreover, no app can fill the as-yet unmet need for traditional public health measures, such as testing, contact tracing, support for patients, PPE for health workers, social distancing, and wearing a mask.

Proximity apps must be engineered for privacy. Unfortunately, many are not. In a “centralized” model, the government has access to all the proximity data and can match it to particular people. This excessively threatens digital rights.

A better approach is Google Apple Exposure Notification (GAEN). It collects only ephemeral, random identifiers that are harder to correlate to particular individuals. Also, GAEN stores these identifiers in the users’ phones, unless a user tests positive, in which case they can upload the identifiers to a publicly accessible database. Public health authorities in many U.S. states and foreign nations sponsor GAEN-compliant apps.
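
To illustrate the privacy model, here is a toy sketch in Python. It is not the real GAEN protocol (which derives rolling identifiers with AES-based key schedules and broadcasts them over Bluetooth LE); it only shows the core idea: identifiers are random and ephemeral, observations stay on the device, and matching happens locally against identifiers that infected users voluntarily publish.

```python
# Toy sketch of decentralized exposure notification. NOT the real GAEN protocol;
# it only illustrates the privacy model: random ephemeral IDs, local storage,
# voluntary publication, and on-device matching.
import os
from typing import List, Set

class Phone:
    def __init__(self) -> None:
        self.my_ids: List[bytes] = []       # identifiers this phone has broadcast
        self.heard_ids: Set[bytes] = set()  # identifiers heard from nearby phones

    def broadcast_id(self) -> bytes:
        rpi = os.urandom(16)                # fresh random 16-byte identifier
        self.my_ids.append(rpi)
        return rpi

    def hear(self, rpi: bytes) -> None:
        self.heard_ids.add(rpi)             # stored only on the device

    def publish_if_positive(self) -> List[bytes]:
        return self.my_ids                  # uploaded to a public list, voluntarily

    def check_exposure(self, published: List[bytes]) -> bool:
        return any(rpi in self.heard_ids for rpi in published)

# Two phones come within Bluetooth range and exchange identifiers.
alice, bob = Phone(), Phone()
bob.hear(alice.broadcast_id())
alice.hear(bob.broadcast_id())

# Later, Alice tests positive and publishes her identifiers; Bob checks locally.
print(bob.check_exposure(alice.publish_if_positive()))  # True
```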

Participation must be voluntary. Higher education, for example, must not require students, faculty, and staff to submit to automated contact tracing. We need laws that prohibit schools, workplaces, and restaurants from discriminating against people who do not use proximity tracking.

Surveillance to enforce quarantine

Some countries have used surveillance technologies to enforce home quarantine. These include compulsion to wear GPS-linked shackles, to download government spyware into personal phones, and to send the government selfies with time and place stamps.

EFF opposes such tactics. Compelled spyware unduly invades the right of individuals to autonomously control their smartphones. GPS shackles invade location privacy, cause pain, and trigger false alarms. Home selfies expose sensitive information, including grooming in private, presence of other people, and expressive effects such as books and posters.

Fortunately, governments in the United States largely have not used these tactics. The exception is a small number of cases involving people who tested positive and then allegedly broke stay-at-home instructions.

Immunity passports

Some have proposed “immunity passports” to screen people for entry to public places. The premise is that a person is not fit to enter a school, workplace, or restaurant until they can prove they have tested negative for infection or supposedly obtained immunity through past infection. Such systems may require a person to use their phone to display a digital credential at a doorway.

EFF opposes such systems. They would aggravate existing social inequities in access to smart phones, medical tests, and health treatment. Moreover, the display or transmission of credentials at doorways would create new infosec vulnerabilities. These systems also would be a significant step towards national digital identification that can be used to collect and store our personal information and track our movements. And inevitable system errors would needlessly block people from going to school or work.

Further, such systems would not advance public health. Tests of infectiousness have high rates of false negatives, and do not account for new infection after testing. Likewise, it remains unclear how much protection a past infection provides against a future infection.

Fortunately, California’s governor this fall vetoed a bill (A.B. 2004) that would have laid the groundwork for immunity passports. Specifically, it would have created a blockchain-based system of “verifiable health credentials” to report COVID-19 and other medical test results. EFF opposed it.

Processing our COVID-related data

While some of the worst ideas did not gain traction in 2020, the news is not all good. Governments and corporations are processing all manner of our COVID-related data, and existing laws do not adequately secure it.

States are conducting manual contact tracing, often contracting with businesses to build new data management systems. States also are partnering with businesses to create websites where we provide our health and other information to obtain screening for COVID-19 testing and treatment. Just as the U.S. Department of Health and Human Services expanded its processing of data about people who took COVID-19 tests, the federal government announced plans to share COVID-related data with its own corporate contractors, including TeleTracking Technologies and Palantir.

Businesses are also expanding their surveillance of workers. This occurs at job sites, in the name of tracking infection, and in socially distant home offices, in the name of tracking productivity.

There are many ways to misuse our COVID-related data. Companies might divert our COVID data to advertising. All this COVID data might be stolen by identity thieves, stalkers, and foreign nations. In New Zealand, a restaurant employee even used COVID data to send harassing messages to a customer.

Moreover, public health officials and their corporate contractors might share our COVID-related data with police and immigration officials. This would frustrate containment of the outbreak, because many people will share less of their personal information if they fear the government will use it against them. Yet in some communities, police are conducting contact tracing or obtaining public health data about the home addresses of patients. The outgoing administration even proposed deploying the National Guard to hospitals to process COVID-related personal data.

Existing data privacy laws do not adequately secure our COVID-related data. For example, HIPAA’s protections of health data apply only to narrowly defined healthcare providers and their business associates. This is one more illustration of why we need a comprehensive federal consumer data privacy law.

In the short run, we need COVID-specific data privacy legislation. But efforts to enact it have stalled in Congress and state legislatures.

Next steps

As pandemic fatigue sets in, the temptation will grow to try something—anything—even if it is unlikely to contain the virus and highly likely to invade our digital rights. So, we probably haven’t heard the last of location tracking apps, immunity passports, and spyware for patients. Other bad ideas may gain momentum, like dragnet COVID-19 surveillance with face recognition, thermal imaging, or drones. And we still need new privacy laws to lock down all of our COVID-related personal data.

Looking to 2021, we must remain vigilant.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2020.

Adam Schwartz

EFF to FinCEN: Stop Pushing For More Financial Surveillance

Today, EFF submitted comments to the Department of the Treasury’s Financial Crimes Enforcement Network (FinCEN) opposing the agency’s proposal for new regulations of cryptocurrency transactions. As we explain in our comments, financial records can be deeply personal and revealing, containing a trove of sensitive information about people’s personal lives, beliefs, and affiliations. Regulations governing such records must be crafted with careful consideration of their effect on privacy, speech, and innovation.

FinCEN’s proposed rule is neither deliberative nor thoughtful. As we’ve written before, this rule—which would require regulated businesses to keep records of cryptocurrency transactions over $3,000 USD and to report cryptocurrency transactions over $10,000 to the government—would force cryptocurrency exchanges and other money services businesses to expand their collection of identity data far beyond what they must currently do. In fact, it wouldn’t only require these businesses to collect information about their own customers, but also the information of anyone who transacts with those customers using their own cryptocurrency wallets.  

In addition to the concerns we’ve already raised, EFF believes the proposed regulation as written would undermine the civil liberties of cryptocurrency users, give the government access to troves of sensitive financial data beyond what is contemplated by the regulation, and have unintended consequences for certain blockchain technology—such as smart contracts and decentralized exchanges—that could chill innovation. 

The agency has not provided nearly enough time to consider all of these risks properly. And, by announcing this proposal with a short comment period over the winter holiday, FinCEN’s process did not allow many members of the public and experts the necessary opportunity to provide feedback on the potentially enormous consequences of this regulation. 

That’s why EFF is urging the agency not to implement this proposal. We are instead asking that FinCEN meet directly with those affected by this regulation, including innovators, technology users, and civil liberties advocates to understand the effect it will have. And we’re calling on the agency to significantly extend the comment period to a minimum of 60 days, and offer additional time for comments after any adjustments are made to the proposed regulation. 

This Rushed Proposal Threatens Financial Privacy, Speech, and Innovation

Even in an increasingly digital world, people have a right to engage in private financial transactions. These protections are crucial. We’ve seen protestors and dissidents in Hong Kong, Belarus, and Nigeria make deliberate choices to use cash or cryptocurrencies to protect themselves against surveillance. The ability to transact anonymously allows people to engage in political activities, protected in the U.S. by the First Amendment, which may be sensitive or controversial. Anonymous transactions should be protected whether those transactions occur in the physical world with cash or online. 

The proposal would require businesses to collect far more information than is necessary to achieve the agency’s policy goals. The proposed regulation purports to require cryptocurrency transaction data to be provided to the government only when the amount of a transaction exceeds a particular threshold. However, because of the nature of public blockchains, the regulation would actually result in the government gaining troves of data about cryptocurrency users far beyond what the regulation contemplates.

Bitcoin addresses are pseudonymous, not anonymous—and the Bitcoin blockchain is a publicly viewable ledger of all transactions between these addresses. That means that if you know the name of the user associated with a particular Bitcoin address, you can glean information about all of their Bitcoin transactions that use that address. In other words, the proposed regulation would provide the government with access to a massive amount of data beyond just what the regulation purports to cover.
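
A short sketch makes the point. Public block explorers expose every transaction associated with an address; the snippet below follows an Esplora-style explorer API (blockstream.info is shown as one example), and the exact path and field names should be treated as illustrative assumptions rather than part of the proposal.

```python
# Sketch: how much history a single known Bitcoin address exposes. The endpoint
# follows an Esplora-style public block-explorer API; the path and field names
# are assumptions for illustration.
import requests

def address_history(address: str) -> None:
    url = f"https://blockstream.info/api/address/{address}/txs"
    for tx in requests.get(url, timeout=10).json():
        total_out = sum(out.get("value", 0) for out in tx.get("vout", []))
        print(tx["txid"], f"{total_out / 1e8:.8f} BTC in outputs")

# Anyone who can link a person to an address can run the same query:
# address_history("bc1q...")  # replace the placeholder with a real address
```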

The scale of such collection introduces considerable risk. Databases of this size can become a honeypot of information that tempts bad actors, or those who might misuse it beyond its original intended use. Thousands of FinCEN’s own files have already been exposed to the public, making it clear that FinCEN’s security protocols are not adequate to prevent even large-scale leakage. This is, of course, not the first time that a sensitive government database has been leaked, mishandled, or otherwise breached. Over the past several weeks, the SolarWinds hack of U.S. government agencies has made headlines, and details are still emerging—and this is hardly the only example of a large-scale government hack.

There are also significant Fourth Amendment concerns. As we argue in our comments:

    The proposed regulation violates the Fourth Amendment’s protections for individual privacy. Our society’s understanding of individual privacy and the legal doctrines surrounding that privacy are evolving. While 1970s-era court opinions held that consumers lose their privacy rights in the data they entrust with third parties, modern courts have become skeptical of these pre-digital decisions and have begun to draw different boundaries around our expectations of privacy. Acknowledging that our world is increasingly digital and that surveillance has become cheaper and more ubiquitous, the Supreme Court has begun to chip away at the third-party doctrine—the idea that an individual does not have a right to privacy in data shared with a third party. Some Supreme Court Justices have written that “it may be necessary to reconsider the premise that an individual has no reasonable expectation of privacy in information voluntarily disclosed to third parties.” In 1976, the Supreme Court pointed to the third-party doctrine in holding in U.S. v. Miller that the then-existing Bank Secrecy Act reporting requirements did not violate the Fourth Amendment. 

Two developments make continued reliance on the third-party doctrine suspect, including as the source for regulations such as those contemplated here. 

First, since the Miller decision, the government has greatly expanded the Bank Secrecy Act’s reach and its intrusiveness on individual financial privacy. Although the Supreme Court upheld the 1970s regulations in an as-applied challenge, Justice Powell, who authored Miller, was skeptical that more intrusive rules would pass constitutional muster. In California Bankers Association v. Shultz, Justice Powell wrote, “Financial transactions can reveal much about a person's activities, associations, and beliefs. At some point, governmental intrusion upon these areas would implicate legitimate expectations of privacy.” Government intrusion into financial privacy has dramatically increased since Miller and Shultz, likely intruding on society’s legitimate expectations of privacy and more directly conflicting with the Fourth Amendment.

Second, since Miller, we have seen strong pro-privacy opinions issued from the U.S. Supreme Court in multiple cases involving digital technology that reject the government’s misplaced reliance on the third-party doctrine. This includes: U.S. v. Jones (2012), in which the Court found that law enforcement use of a GPS location device to continuously track a vehicle over time was a search under the Fourth Amendment; Riley v. California (2014), in which the Court held that warrantless search and seizure of the data on a cell phone upon arrest was unconstitutional; and Carpenter v. U.S., in which the Court held that police must obtain a warrant before accessing cell site location information from a cell phone company. EFF is heartened to see these steps by the courts to better recognize that Americans do not sacrifice their privacy rights when interacting in our modern society, which is increasingly intermediated by corporations holding sensitive data. We believe this understanding of privacy can and should extend to our financial data. We urge FinCEN to heed the more nuanced understanding of privacy rights seen in modern court opinions, rather than anchoring its privacy thinking in precedents from a more analog time in America’s history. 

Finally, we urge FinCEN to consider the potential chilling effects its regulation could have on developing technologies. FinCEN should be extremely cautious about crafting regulation that could interfere with the growing ecosystem of smart contract technology, including decentralized exchanges. We are in the very earliest days of the exploration of smart contract technology and decentralized exchanges. Just as it would have been an error to see the early Internet as merely an extension of the existing postal service, it is important not to view the risks and opportunities of these new technologies solely through the lens of financial services. The proposed regulation would not only chill experimentation in a field that could have many potential benefits for consumers, but would also prevent American users and companies from participating when those systems are deployed in other jurisdictions.

Because of the proposed regulation’s potential impact on the civil liberties interests of technology users and potential chilling effect on innovation across a broad range of technology sectors, we urge FinCEN not to implement this proposal as it stands. Instead, we ask that it do its due diligence to ensure that civil liberties experts, innovators, technology users, and the public have an opportunity to voice their concerns about the potential impact of the proposal.

Read EFF’s full comments

Related Cases: Riley v. California and United States v. Wurie; Carpenter v. United States
Hayley Tsukayama

EFF Statement on British Court’s Rejection of Trump Administration’s Extradition Request for Wikileaks’ Julian Assange

Today, a British judge denied the Trump Administration’s extradition request for Wikileaks Editor Julian Assange, who is facing charges in the United States under the Espionage Act and the Computer Fraud and Abuse Act. The judge largely confirmed the charges against him, but ultimately determined that the United States’ extreme procedures for confinement that would be applied to Mr. Assange would create a serious risk of suicide.

EFF’s Executive Director Cindy Cohn said in a statement today:

“We are relieved that District Judge Vanessa Baraitser made the right decision to reject extradition of Mr. Assange and, despite the U.S. government’s initial statement, we hope that the U.S. does not appeal that decision. The UK court decision means that Assange will not face charges in the United States, which could have set a dangerous precedent in two ways. First, it could call into question many of the journalistic practices that writers at the New York Times, the Washington Post, Fox News, and other publications engage in every day to ensure that the American people stay informed about the operations of their government. Investigative journalism—including seeking, analyzing and publishing leaked government documents, especially those revealing abuses—has a vital role in holding the U.S. government to account. It is, and must remain, strongly protected by the First Amendment. Second, the prosecution, and the judge’s decision, embraces a theory of computer crime that is overly broad -- essentially criminalizing a journalist for discussing and offering help with basic computer activities like use of rainbow tables and scripts based on wget, which are regularly used in computer security and elsewhere.

While we applaud this decision, it does not erase the many years Assange has been dogged by prosecution, detainment, and intimidation for his journalistic work. It also does not erase the government’s arguments that, as in so many other cases, attempt to cast a criminal pall over routine actions because they were done with a computer. We are still reviewing the judge’s opinion and expect to have additional thoughts once we’ve completed our analysis.”

Read the judge’s full statement.

Related Cases: Bank Julius Baer & Co v. Wikileaks
rainey Reitman

Video Hearing Tuesday: ACLU, EFF Urge Court to Require Warrants for Border Searches of Digital Devices

Appeals Court Should Uphold Fourth Amendment Rights for International Travelers

Boston – The American Civil Liberties Union (ACLU), the Electronic Frontier Foundation (EFF), and the ACLU of Massachusetts will urge an appeals court on Tuesday to require warrants for the government to search electronic devices at U.S. airports and other ports of entry—ensuring that the Fourth Amendment protects travelers as they enter the country. The hearing is at 9:30 a.m. ET/6:30 a.m. PT on January 5, and is available to watch by livestream.

In 2017, ten U.S. citizens and one lawful permanent resident who regularly travel outside of the country with cell phones, laptops, and other electronic devices sued the Department of Homeland Security for illegal searches of their devices when they reentered the country. The suit, Alasaad v. Wolf, challenged the government’s practice of searching travelers’ electronic equipment without a warrant and usually without any suspicion that the traveler is guilty of wrongdoing.

In a historic win for digital privacy, a federal district court judge ruled in Alasaad that suspicionless electronic device searches at U.S. ports of entry violate the Fourth Amendment. The court required that border agents have reasonable suspicion that a device contains digital contraband before searching or seizing it. At Tuesday’s hearing at the U.S. Court of Appeals for the First Circuit, ACLU attorney Esha Bhandari will argue that the Constitution requires a warrant based on probable cause to search our electronic devices at the border—just as is required everywhere else in the United States.

WHAT:
Hearing in Alasaad v. Wolf

WHEN:
Tuesday, January 5
9:30 a.m. ET/6:30 a.m PT

WHERE:
https://www.youtube.com/channel/UCiq_Kg0zEPrjMFK_s-KP5_g/

Contact: Rebecca Jeschke, Media Relations Director and Digital Rights Analyst, press@eff.org
Rebecca Jeschke

A Smorgasbord of Bad Takedowns: 2020 Year in Review

Here at EFF, we take particular notice of the way that intellectual property law leads to expression being removed from the Internet. We document the worst examples in our Takedown Hall of Shame. Some, we use to explain more complex ideas. And in other cases, we offer our help.

In terms of takedowns, 2020 prefaced the year to come with a January story from New York University School of Law. The law school posted a video of a panel titled “Proving Similarity,” where experts explained how song similarity is analyzed in copyright cases. Unsurprisingly, that involved playing parts of songs during the panel. And so, the video meant to explain how copyright infringement is determined was flagged by Content ID, YouTube’s automated copyright filter.

While the legal experts at, let’s check our notes, NYU Law were confident this was fair use, they were less confident that they understood how YouTube’s private appeals system worked. And, more specifically, whether challenging Content ID would lead to NYU losing its YouTube channel. They reached out privately to ask questions about the system, but got no answers. Instead, YouTube just quietly restored the video.

And with that, a year of takedowns was off. There was Dr. Drew Pinsky’s incorrect assessment that copyright law let him remove a video showing him downplaying COVID-19. A self-described Twitter troll using the DMCA to remove from Twitter an interview he did about his tactics and then using the DMCA to remove a photo of his previous takedown. And, when San Diego Comic Con went virtual, CBS ended up taking down its own Star Trek panel.

On our end, we helped Internet users push back on attempts to use IP claims as a tool to silence critics. In one case, EFF helped a Redditor win a fight to stay anonymous when Watchtower Bible and Tract Society, a group that publishes doctrines for Jehovah’s Witnesses, tried to learn their identity using copyright infringement allegations.

We also called out some truly ridiculous copyright takedowns. One culprit, the ironically named No Evil Foods, went after journalists and podcasters who reported on accusations of union-busting, claiming copyright in a union organizer’s recordings of anti-union presentations by management. We sent a letter telling them to knock it off: if the recorded speeches were even copyrightable, which is doubtful, this was an obvious fair use, and they were setting themselves up for a lawsuit under DMCA section 512(f), the provision that provides penalties for bad-faith takedowns. The takedowns stopped after that.

Another case saw a university jumping on the DMCA abuse train. Nebraska’s Doane University used a DMCA notice to take down a faculty-built website created to protest deep academic program cuts, claiming copyright in a photo of the university. One problem: that photo was actually taken by an opponent of the cuts, specifically for the website. The professor who made the website submitted a counternotice, but the university’s board was scheduled to vote on the cuts before the DMCA’s putback waiting period would expire. EFF stepped in and demanded that Doane withdraw its claim, and it worked—the website was back up before the board vote.

Copyright takedowns aren’t the only legal tool we see weaponized against online speech—brands are just as happy to use trademarks this way. Sometimes that can take the form of a DMCA-like takedown request, like the NFL used to shut down sales of “Same Old Jets” parody merchandise for long-suffering New York Jets fans. In other cases, a company might use a tool called the Uniform Domain-Name Dispute-Resolution Policy (UDRP) to take over an entire website. The UDRP lets a trademark holder take control of a domain name if it can convince a private arbitrator that Internet users would think it belonged to the brand and that the website owner registered the name in “bad faith,” without a legitimate interest in using it.

This year, we helped the owner of instacartshoppersunited.com stand up to a UDRP action and hold on to her domain name. Daryl Bentillo was frustrated by her experience as an Instacart shopper and registered that domain name intending to build a site that would help organize shoppers to advocate for better pay practices. But before she even had a chance to get started, Ms. Bentillo got an email saying that Instacart was trying to take her domain name away using this process she’d never heard of. That didn’t sit right with us, so we offered our help. We talked to Instacart’s attorneys about how Ms. Bentillo had every right to use the company’s name this way to refer to it (called a nominative fair use in trademark-speak)—and about how it sure looked like they were just using the UDRP process to shut down organizing efforts. Instacart was ultimately persuaded to withdraw its complaint.

Back in copyright land, we also dissected the problem of the RIAA’s takedown of youtube-dl, a popular tool for downloading videos from Internet platforms. Youtube-dl didn’t infringe on any RIAA copyright, but the RIAA demanded the takedown anyway, arguing that because DMCA 1201 makes it illegal to bypass a digital lock in order to access or modify a copyrighted work, and because youtube-dl could be used to download RIAA-member music, the tool should be removed.

RIAA and other copyright holders have argued that it’s a violation of DMCA 1201 to bypass DRM even if you’re doing it for completely lawful purposes; for example, if you’re downloading a video on YouTube for the purpose of using it in a way that’s protected by fair use.

Trying to use the notice-and-takedown process against a tool that does not infringe on any music label’s copyright and has lawful uses was an egregious abuse of the system, and we said so.

And to bring us full circle: we end with a case where discussing copyright infringement brought a takedown. Lindsay Ellis, a video creator, author, and critic, created a video called “Into the Omegaverse: How a Fanfic Trope Landed in Federal Court,” dissecting a story where one author, Addison Cain, has sent numerous takedowns to platforms with dubious copyright claims. Eventually, one of the targets sued and the question of who owns what in a genre that developed entirely online ended up in court. It did not take long for Cain to send a series of takedowns against this video about her history of takedowns.

That’s when EFF stepped in. The video is a classic fair use. It uses a relatively small amount of a copyrighted work for purposes of criticism and parody in an hour-long video that consists overwhelmingly of Ellis’ original content. In short, the copyright claims (and the other, non-copyright claims) were deficient. We were happy to explain this to Cain and her lawyer.

It's been an interesting year for takedowns. Some of these takedowns involved automated filters, a problem we dived deep into with our whitepaper Unfiltered: How YouTube's Content ID Discourages Fair Use and Dictates What We See Online. Filters like Content ID not only remove lots of lawful expression; they also sharply restrict what we do see. Remember: if you encounter problems with bogus legal threats, DMCA takedowns, or filters, you can contact EFF at info@eff.org.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2020.

Cara Gagliano

Banning Government Use of Face Recognition Technology: 2020 Year in Review

If there was any question about the gravity of problems with police use of face surveillance technology, 2020 wasted no time in proving them dangerously real. Thankfully, from Oregon to Massachusetts, local lawmakers responded by banning their local governments' use.

The Alarm 

On January 9, after first calling and threatening to arrest him at work, Detroit police officers traveled to nearby Farmington Hills to arrest Robert Williams in front of his wife, children, and neighbors—for a crime he did not commit. He was erroneously connected by face recognition technology that matched an image of Mr. Williams with video from a December 2018 shoplifting incident. Later this year, Detroit police erroneously arrested a second man because of another misidentification by face recognition technology.

For Robert Williams, his family, and millions of Black and brown people throughout the country, the research left the realm of the theoretical and became all too real. Experts at MIT Media Lab, the National Institute of Standards and Technology, and Georgetown's Center on Privacy and Technology have shown that face recognition technology is riddled with error, especially for people of color. It is one more in a long line of police tools and practices that exacerbate historical bias in the criminal system.

The Response 

2020 will undoubtedly come to be known as the year of the pandemic. It will also be remembered for unprecedented Black-led protest against police violence and concerns that surveillance of political activity will chill our First Amendment rights. Four cities joined the still-growing list of communities that have stood up for their residents' rights by banning local government use of face recognition. Just days after Mr. Williams' arrest, Cambridge, MA, an East Coast research and technology hub, became the largest East Coast city to ban government use of face recognition technology. It turned out to be a distinction it wouldn't retain long.

In February and March, Chicago and New York City residents and organizers called on local lawmakers to pass their own bans. However, few could have predicted that a month later, organizing, civic engagement, and life as we knew it would change dramatically. As states and municipalities began implementing stay in place orders to suppress an escalating global pandemic, City Councils and other lawmaking bodies adapted to social distancing and remote meetings.

As those of us privileged enough to work from home adjusted to Zoom meetings, protests in the name of Breonna Taylor and George Floyd spread throughout the country.

Calls to end police use of face recognition technology were joined by calls for greater transparency and accountability. Those calls have not yet been answered with a local ban on face recognition in New York City.

As New Yorkers continue to push for a ban, one enacted bill will shine a light on NYPD use of all manner of surveillance technology. That light of transparency will inform lawmakers and the public of the breadth and dangers of the NYPD's use of face recognition and other privacy-invasive technology. After three years of resistance from the police department and the mayor, New York's City Council passed the POST Act with a veto-proof majority. While lacking the community control measures in stronger surveillance equipment ordinances, the POST Act requires the NYPD to publish surveillance impact and use policies for each of its surveillance technologies. This will end decades of the department's refusal to disclose information and policies about its surveillance arsenal.

TAKE ACTION

End Face Surveillance in your community

Building on the momentum of change driven by political unrest and protest, and through the tireless work of local organizers including the ACLU of Massachusetts, Boston's City Council took strong action just days after New York's City Council passed the POST Act. It voted unanimously to join neighboring Cambridge in protecting their respective residents from police use of face recognition. In the preceding weeks, EFF advocated for, and council members accepted, improvements to the ordinance. One closed a loophole that might have allowed police to ask third parties to collect face recognition evidence for them. Another change provides attorney fees to a person who brings a successful suit against the City for violating the ban.

Not to be outdone by their peers in California and Massachusetts, municipal lawmakers in Oregon and Maine banned their own agencies from using the technology in 2020 as well. In Portland, Maine, the City Council voted unanimously to ban the technology in August. Then in November, the city's voters passed the first ballot measure prohibiting government use of face recognition.

Across the country, the Portland, Oregon, City Council voted unanimously in September to pass their government ban (as well as a ban on private use of face recognition in places of public accommodation). In the days leading up to the vote, a coalition organized by PDX Privacy, an Electronic Frontier Alliance member, presented local lawmakers with a petition signed by over 150 local business owners, technologists, workers, and residents for an end to government use of face surveillance.

Complementing the work of local lawmakers, federal lawmakers are stepping forward. Senators Jeff Merkley and Ed Markey, and Representatives Ayanna Pressley, Pramila Jayapal, Rashida Tlaib, and Yvette Clarke introduced the Facial Recognition and Biometric Technology Moratorium Act of 2020 (S.4084/H.R.7356). If passed, it would ban federal agencies like Immigration and Customs Enforcement, the Drug Enforcement Administration, the Federal Bureau of Investigation, and Customs and Border Protection from using face recognition to track and identify (and misidentify) millions of U.S. residents and travelers. The act would also withhold certain federal funding from local and state governments that use face recognition.

What's next? 

While some high-profile vendors this year committed to pressing pause on the sale of face recognition technology to law enforcement, 2020 was also a year when the public became much more familiar with how predatory the industry can be. Thus, through our About Face campaign and the work of local allies, EFF will continue to support the movement to ban all government use of face recognition technology.

With a new class of recently elected lawmakers poised to take office in the coming weeks, now is the time to reach out to your local city council, board of supervisors, and state and federal representatives. Tell them to stand with you in ending government use of face recognition, a dangerous technology with a proven ability to chill essential freedoms and amplify systemic bias. 

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2020.

Nathan Sheard

DNS, DoH, and ODoH, Oh My: Year-in-Review 2020

Government knowledge of what sites activists have visited can put them at risk of serious injury, arrest, or even death. This makes it a vitally important priority to secure DNS. DNS over HTTPS (DoH) is a protocol that encrypts the Domain Name System (DNS) by performing lookups over the secure HTTPS protocol. DNS translates human-readable domain names (such as eff.org) into machine-routable IP addresses (such as 173.239.79.196), but it has traditionally done this via cleartext queries over UDP port 53 (Do53). This allows anyone who can snoop on your connection—whether it’s your government, your ISP, or the hacker next to you on the same coffee shop WiFi—to see what domain you’re accessing and when.
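
For the curious, a DoH lookup is just an ordinary HTTPS request. The sketch below uses the JSON query format that several public resolvers document (Cloudflare's endpoint is shown as one example); the equivalent Do53 lookup would cross the network as cleartext UDP that any on-path observer could read.

```python
# Minimal DNS-over-HTTPS lookup using the widely documented JSON API format.
# The resolver shown is one example; any resolver that supports
# "application/dns-json" works the same way.
import requests

def doh_lookup(name: str, record_type: str = "A") -> None:
    resp = requests.get(
        "https://cloudflare-dns.com/dns-query",
        params={"name": name, "type": record_type},
        headers={"accept": "application/dns-json"},
        timeout=10,
    )
    resp.raise_for_status()
    for answer in resp.json().get("Answer", []):
        print(answer["name"], answer["data"])

doh_lookup("eff.org")  # the query and response travel inside TLS, not cleartext UDP
```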

In 2019, the effort to secure DNS through DoH made tremendous progress both in terms of the deployment of DoH infrastructure and in the Internet Engineering Task Force (IETF), an Internet governance body tasked with standardizing the protocols we all rely on. This progress was made despite strong pushback from the Internet Service Providers’ Association in the UK, citing difficulties DoH would present to British ISPs, which are mandated by law to filter adult content.

2020 has also seen great strides in the deployment of DNS over HTTPS (DoH). In February, Firefox began the rollout of DoH to its users in the US, using Cloudflare’s DoH infrastructure to provide lookups by default. Google’s Chrome browser followed suit in May by switching users to DoH if their DNS provider supports it. Meanwhile, the list of publicly available DoH resolvers has expanded to the dozens, many of which implement strong privacy policies, such as not keeping connection logs.

This year’s expansion of DoH deployments has alleviated some of the problems critics have cited, such as the centralization of DoH infrastructure. Previously, only a few large Internet technology companies like Cloudflare and Google had deployed DoH servers at scale. This facilitated these companies’ access to large troves of DNS query data, which could theoretically be exploited to mine sensitive data on DoH users. Mozilla has sought to protect their Firefox users from this danger by requiring the browser’s DoH resolvers to observe strict privacy practices, outlined in their Trusted Recursive Resolver (TRR) policy document. Comcast joined Mozilla’s TRR partners Cloudflare and NextDNS in June.

In addition to policy and deployment strategies to alleviate the privacy concerns of DoH infrastructure centralization, a group of University of Washington academics and Cloudflare technologists published a paper late last month proposing a new protocol called Oblivious DNS over HTTPS (ODoH). The protocol introduces a proxy node to the DoH network layout. Instead of directly requesting records via DoH, a client creates a request for the DNS record, along with a symmetric key of their choice. The client then encrypts the request and symmetric key to the public key of the DoH server they wish to act as a resolver. The client sends this request to the proxy, along with the identity of the DoH resolver they wish to use. The proxy removes all identifying pieces of information from the request, such as the requester's IP address, and forwards the request to the resolver. The resolver decrypts the request and symmetric key, recursively resolves the request, encrypts the response to the symmetric key provided, and sends it back to the ODoH proxy. The proxy forwards the encrypted response to the client, which is then able to decrypt it using the symmetric key it has retained in memory, and retrieve the DNS response. At no point does the proxy see the unencrypted request, nor does the resolver ever see the identity of the client.
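
The flow is easier to see in code. The sketch below uses NaCl sealed boxes and secret boxes as stand-ins for the HPKE construction the actual ODoH specification uses; the names and the toy "resolution" step are assumptions. What matters is the split: the proxy sees who is asking but not what, and the resolver sees what is asked but not by whom.

```python
# Toy ODoH-style message flow (NOT the real ODoH wire format; NaCl primitives
# stand in for HPKE). Requires PyNaCl.
import json
from nacl.public import PrivateKey, SealedBox
from nacl.secret import SecretBox
from nacl.utils import random as nacl_random

# The resolver publishes a public key; the client never contacts it directly.
resolver_key = PrivateKey.generate()

# --- Client: encrypt the query plus a chosen symmetric key to the resolver ---
sym_key = nacl_random(SecretBox.KEY_SIZE)
payload = json.dumps({"query": {"name": "eff.org", "type": "A"},
                      "key": sym_key.hex()}).encode()
to_resolver = SealedBox(resolver_key.public_key).encrypt(payload)
message_to_proxy = {"target": "resolver.example", "blob": to_resolver}

# --- Proxy: sees the client's IP and the chosen resolver, but only an opaque blob ---
forwarded = message_to_proxy["blob"]

# --- Resolver: decrypts, resolves, encrypts the answer under the client's key ---
request = json.loads(SealedBox(resolver_key).decrypt(forwarded))
answer = {"name": request["query"]["name"], "data": "173.239.79.196"}  # stand-in lookup
encrypted_answer = SecretBox(bytes.fromhex(request["key"])).encrypt(
    json.dumps(answer).encode())

# --- Proxy relays; client decrypts with the key it kept in memory ---
print(json.loads(SecretBox(sym_key).decrypt(encrypted_answer)))
```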

ODoH guarantees that, in the absence of collusion between the proxy and the resolver, no one entity is able to determine both the identity of the requester and the content of the request. This is important because if powerful entities (whether it be your government, ISP, or even DNS resolver) know which people accessed what domain (and when), it gives that entity enormous power over those people. ODoH gives users a technological way to ensure that their domain lookups are secure and private so long as they trust that the proxy and the resolver do not join forces. This is a much lower level of trust than trusting that a single entity does not misuse the DNS queries you send them.

Looking ahead, one possibility worries us: using ODoH gives software developers an easy way to comply with the demands of a censorship regime in order to distribute their software without telling the regime the identity of users they’re censoring. If a software developer wished to gain distribution rights in Saudi Arabia or China, for example, they could choose a reputable ODoH proxy to connect to a resolver that refuses to resolve censored domains. A version of their software would be allowed for distribution in these countries, so long as it had a censorious resolver baked in. This would remove any potential culpability that software developers have for revealing the identity of a user to a government that can put them in danger, but it also facilitates the act of censorship. In traditional DoH, this is not possible. Giving developers an easy-out by facilitating “anonymous” censorship is a worrying prospect.

Nevertheless, the expansion of DoH infrastructure and conceptualization of ODoH is a net win for the Internet. Going into 2021, these developments give us hope for a future where our domain lookups will universally be both secure and private. It’s about time.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2020.

Bill Budington

Defending Your Rights in Every Reality: Year in Review 2020

Virtual reality and augmented reality technologies (VR/AR) are rapidly maturing and reaching a wider audience, especially as the pandemic drives more people to virtual activities. This technology promises to entertain and educate, to connect and enhance our lives, and even to help us advocate for our rights. But it also risks eroding those rights online. The headsets and devices can gather deeply personal information about you and the world around you, and VR/AR services often store this data on their servers. And by introducing artificial intelligence to VR/AR, the privacy and security hazards are augmented, too, as the devices and services gather and analyze data at an unprecedented level.

2020 confirmed that VR/AR users care about their privacy. Millions of users reacted furiously against Facebook when it announced it would force Oculus users to log into their headset with a Facebook.com account within two years or potentially brick their device. (All Oculus Quest 2 users can only sign up using a Facebook account.) In If Privacy Dies in VR, It Dies in Real Life, we explained why Oculus users might not want to use their real names. Without anonymity, Oculus leaves vulnerable users out to dry, such as VR activists in Hong Kong and LGBTQ+ users who cannot safely reveal their identity. 

VR/AR devices let us enter a virtual world or see the real world overlaid with digital content, but at a price. VR/AR devices track our environment and intimate details about our lives. A Stanford research study explained that virtual reality systems must measure body movements because the content responds to them:

For example, in VR, people turn their physical head around to make eye contact with other virtual reality users, use their legs to walk in the physical room to get across a virtual room, and move their physical arms to grasp virtual objects. These tracking data can be recorded and stored for later examination (...) With VR, in addition to recording personal data regarding people’s location, social ties, verbal communication, search queries, and product preferences, technology companies will also collect nonverbal behavior—for example, users’ posture, eye gaze, gestures, facial expressions, and interpersonal distance.

Linking various Facebook services under a "unified login account" also raised serious concerns about the consolidation of Facebook data collection practices across Facebook products. Facebook already has a vast collection of data gathered from across the web and from your devices. Combining this with sensitive body-data and biometric identifiers detected by the headsets (including our interactions and reactions to objects and people) can further cement Facebook’s monopolistic power in the online advertising ecosystem. In the European Union, forcing a user to sign up with a Facebook account may also run afoul of the “coupling prohibition” under the General Data Protection Regulation, which states that making a service dependent upon consent to something that has nothing to do with the service means consent is not, actually, voluntary.

Earlier this year, we called upon antitrust enforcers to address yet another broken Facebook promise about privacy. And they did: Germany's competition regulator recently began examining the linkage between Oculus hardware and the rest of the Facebook platform. (In 2019, the same regulator prohibited Facebook from extensively collecting and merging user data from different sources.) 

VR/AR devices, which include cameras, microphones, and sensors, help us interact with the real world (and ensure we do not crash into the table). That means information about your environment, such as your home, office, or even your community, can also be collected and shared to target advertisements to you. And once collected, all of this information is potentially available to the government. Even if you never use this equipment, sharing a space with someone who does may put your privacy at risk.

VR/AR devices in your home can also collect all-encompassing audio and video, along with telemetry about your movements, depth data, and images. This data can be used to build a highly accurate geometrical representation of your home. In Come Back with a Warrant for my Virtual House, we explained why the government must not get warrantless access to this sensitive information, even when a third-party AR/VR provider holds it. Our analysis builds on the landmark Supreme Court case Carpenter v. United States, which held that accessing historical records of a cellphone's physical locations requires a search warrant, even though those records were held by a third party. We are also protected by the longstanding rule in Kyllo v. United States: when the government uses new technology to "explore details of the home that would previously have been unknowable without physical intrusion," the surveillance is a "search" and is presumptively unreasonable without a warrant. 

As the use of augmented reality has grown, so have the promises and dangers it poses. Glasses that augment reality may also mean a wearer could be recording your conversations while mapping the environment around you in precise detail and in real time. As we explained in Augmented Reality Must Have Augmented Privacy, if these technologies are massively adopted, AR recording's scope and scale could give rise to a global panopticon of constant surveillance in public or semi-public spaces. 

Recognizing that people have historically enjoyed effective anonymity and privacy when in these spaces, we explained how the U.S. Constitution and international human rights law require the government to obtain a warrant to access the records generated by augmented reality, and require tech companies to respect and protect their users’ right to data privacy. Specifically:

Companies must, therefore, collect, use, and share their users’ AR data only as minimally necessary to provide the specific service their users asked for. Companies should also limit the amount of data transited to the cloud, and the period it is retained, while investing in robust security and strong encryption, with user-held keys, to give users control over information collected. Moreover, we need strong transparency policies, explicitly stating the purposes for and means of data processing, and allowing users to securely access and port their data.

Augmented reality may pose unprecedented dangers of a dystopian future. But with strong policies, robust transparency, wise courts, modernized statutes, and privacy-by-design engineering, we can hold back that dystopia and reap the rewards of this technology.

Looking forward, we plan to delve into the privacy and data protection risks associated with the broad amount of information collected about our biometrics and body data, our fitness levels (and our vitals), and our “biometric inferred data.” This technology has the potential to monitor the tone of our voice, our facial expressions and gaze, our heartbeat, and our body temperature. It can track the unconscious responses our bodies make, like when we blink, where we look, and how long our attention lasts. With machine learning, providers can use this data to infer attitudes, emotions, personality traits, preferences, mental health, cognitive processes, skills, and the effectiveness of advertisements. Fitness and health apps already ask users to input their feelings, and some are embarking on tone-of-voice analysis. Police have leveraged Fitbit heart rate data in a criminal investigation. Law enforcement is incorporating AI into a vast range of criminal investigative contexts, with troubling implications. As with every new technology, the danger is that police will put it to use before we enact (and enforce) the necessary privacy laws. 

Companies’ continued efforts to quantify our public, social, and inner lives will profoundly impact our daily lives in the years ahead. By combining VR/AR with machine learning and continually expanding data collection beyond our behavior on devices into the physical environment around us, these companies can shape a world in which people are more vulnerable to corporate influence and government pressure. But with proper safeguards and legal restrictions, a different, and better, reality is possible.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2020.

Katitza Rodriguez

Questions Remain About Pretrial Risk-Assessment Algorithms: Year in Review 2020

2 months ago

Californians in November voted to repeal a 2018 law that would have ended cash bail and replaced it with a digital pretrial risk assessment tool that dictates whether a person can be released as they await their trial. By voting No on Proposition 25, Californians voted to keep the cash bail system, and not replace it with automated pretrial risk assessments.

EFF did not have a position on the ballot measure. Much like the ACLU of Northern California, EFF believed this proposition, no matter its outcome, would not create a fair pretrial system. However, EFF has done extensive research on pretrial risk assessment algorithms and has worked through the legislature to prevent the deployment of unfair tools.

Pretrial risk assessment tools come with their own risks and potential pitfalls, and it is vital that Californians consider what’s required to address their potential harms. While the proposition failed, these tools are currently in use in 49 of California’s 58 counties as part of their bail systems, according to a December 2019 report from the Public Policy Institute of California.

There are many reasons to be concerned about replacing cash bail with an algorithm that categorizes people as low-, medium-, or high-risk before releasing some and leaving others in jail. 

Digital pretrial risk assessment raises concerns similar to those around predictive policing. Both rely on data generated by a racially biased criminal justice system to determine who is a threat to society and who is not. This can have a devastating impact on people’s lives, their well-being, and that of their families. In the case of predictive policing, the algorithm could flag someone as a risk and subject them to near-constant police harassment. In the case of risk assessment, a person flagged as unreleasable may sit in jail for months or years awaiting trial for no reason other than the algorithm's determination.

Some see risk assessment tools as being more impartial than judges because they make determinations using algorithms. But that assumption ignores the fact that algorithms that are given biased data or not carefully developed can cause the same sort of discriminatory outcomes as existing systems that rely on human judgment—and even make new, unexpected errors or entrench systemic bias with a new veneer of supposed impartiality. 

This system creates a scenario in which some people, arbitrarily or discriminatorily picked out by a computer as high risk, are left to sit in jail without ever knowing what data dictated the outcome. This is not a merely theoretical concern. Researchers at Dartmouth College found in January 2018 that one widely used tool, COMPAS, incorrectly classified black defendants as being at risk of committing a misdemeanor or felony within two years at a rate of 40%, versus 25.4% for white defendants. Computers, especially ones operating on flawed or biased data, should not dictate who gets to go home and who does not, or be given undeserved weight in a judge’s decision about release.
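To make the arithmetic behind that disparity concrete, here is a toy sketch of how a group-wise false positive rate is computed. The cohorts and numbers are invented solely to mirror the reported gap; this is not the COMPAS model or the study's data.

```python
# Toy illustration (not COMPAS, not the study's data): computing the rate at
# which people who did NOT go on to reoffend were nonetheless flagged high risk.
def false_positive_rate(records):
    """records: list of (flagged_high_risk, reoffended) boolean pairs."""
    non_reoffenders = [flagged for flagged, reoffended in records if not reoffended]
    return sum(non_reoffenders) / len(non_reoffenders)

# Hypothetical cohorts sized to echo the reported 40% vs. 25.4% disparity.
group_a = [(True, False)] * 40 + [(False, False)] * 60   # 40 of 100 non-reoffenders flagged
group_b = [(True, False)] * 25 + [(False, False)] * 75   # 25 of 100 non-reoffenders flagged

print(f"Group A false positive rate: {false_positive_rate(group_a):.1%}")
print(f"Group B false positive rate: {false_positive_rate(group_b):.1%}")
# A tool can look "accurate" overall while one group is wrongly detained
# before trial far more often than another.
```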

Any digital tool hoping to replace cash bail should give the public straightforward answers about what data factors into its decisions before it is used in a courtroom. EFF has previously submitted comments to the California Judicial Council outlining our recommendations for guardrails that should be placed on such tools, if court systems must use them. These include: How much transparency will there be about how the algorithm functions and what data went into its development? Will there be an appeals process for people who feel their case has not been fairly adjudicated? And in what instances could or would a judge overturn advice given by the assessment tool? 

The pandemic is disproportionately hurting incarcerated individuals across the country. Pretrial risk assessment tools should not be used unless they are equitable, transparent, and fair—and there are deep challenges to overcome before any tool can make those claims. People need to know if the assessment tool deciding who gets to leave jail will also condemn individuals to imprisonment without reprieve in an arbitrary or racialized way.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2020.

Matthew Guariglia

Litigation Against Mass NSA Surveillance: Year in Review 2020

2 months ago

EFF takes on fights for the long run, and some of our longest-running legal fights are focused on bringing the National Security Agency within the rule of law and batting back the arguments that the state secrets privilege should insulate the government from accountability for spying. EFF started these cases in 2006 with Hepting v. AT&T. That’s seven years before Mr. Snowden provided us (and the world) with irrefutable proof of the NSA’s activities impacting American telecommunications customers like our stalwart clients. Counting our current case, Jewel v. NSA, by the time 2020 rolled around we had been at this for 14 years. If the NSA thought they could wait us out, they were sorely mistaken.

The Jewel v. NSA case arises from general seizures and searches conducted through three NSA surveillance programs: the NSA’s current Upstream tapping of the Internet backbone, its past collection of Internet metadata, and its discontinued mass telephone records collection, purportedly authorized by Section 215 of the Patriot Act. Two of the three programs have now been stopped or changed significantly, one by Congress and another by the government itself after Senator Wyden and others raised concerns. Stopping the third is key, as is ensuring that none of them can be restarted.

In 2020, we hoped to receive a decision from the Ninth Circuit rejecting the government’s argument that its overblown secrecy claims should nullify our cases. Obviously, the whole world knows about these programs: they’ve been discussed publicly in Congress and by the European courts, and EFF Pioneer Award winner Laura Poitras even won an Oscar for her documentary about them. Even still, the case continues. In early November, our amazing volunteer attorney Richard Wiebe argued the case before a three-judge panel, and we still await a decision.

The slow pace makes it clear that we need additional and real reform of the state secrets privilege as well as an overhaul of the NSA’s activities.

In 2020, there was some good judicial news involving NSA secrecy. In Fazaga v. FBI, the court has twice confirmed that the state secrets privilege cannot block a case arising out of electronic surveillance. This builds on the 2019 Ninth Circuit ruling that the government cannot claim national security secrecy over facts that are not secret. With these two rulings, the stage is set for the court to reject the government’s outrageous secrecy position in our electronic surveillance case, too.

Also in 2020, the Ninth Circuit joined the Second Circuit in declaring that the call detail records (CDR) program, which swept up basic information about phone calls of people inside and outside of the United States, likely “violated the Fourth Amendment when it collected the telephony metadata of millions of Americans.” The court, in a case called United States v. Moalin, also found that the collection likely violated FISA. In doing so, it noted that the NSA had effectively lied to Congress and the American people when it claimed that the CDR program had been an important piece in stopping the underlying crime, a violation of U.S. sanctions by a Somali immigrant who sent money back home. This confirms what we’ve found behind so many of the government's claims about these secret programs: nothing. No terrorists stopped, no serious crimes solved, no increased security for Americans in exchange for the loss of our privacy and the massive costs we’ve incurred.

We did suffer a huge loss in our battles against NSA spying this year. Our longtime colleague Jim Tyre passed away in March. Jim had worked with us on these cases from the beginning, and we missed him terribly as we prepared for the Ninth Circuit argument. But we know he would want us to push on, and when we do win, it will be another piece of his already brilliant legacy.

These fights are frustrating, but they are worth fighting and the steadfast support of our members is what makes EFF strong for the long run.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2020.

Related Cases: Jewel v. NSA
Cindy Cohn

Fighting Abusive Patent Litigation During a Year of Health Crisis: 2020 Year In Review

2 months 1 week ago

The coronavirus dominated the news cycles, and our personal lives, in 2020. Scientists around the world raced forward to create a vaccine. Alongside that massive effort to create a critical new invention, we saw a renewed debate about patents and their role in helping, or hindering, innovation.  

At EFF, we’ve spent years as a watchdog calling out patent owners who abuse their monopolies. Unfortunately but predictably, some patent owners actually saw the rise of the COVID-19 health emergency as a business opportunity. 

Shortly after the outbreak of the novel coronavirus, a patent troll called Labrador Diagnostics used patents to sue a company that makes COVID-19 tests. Even worse: Labrador used two patents that were originally issued to Theranos, the defunct blood-testing company whose former CEO is now facing criminal fraud charges. Following the public outcry around its patent threats, Labrador agreed to grant royalty-free licenses to those working on COVID-19 tests.  

Labrador wasn’t the only example of a patent owner misbehaving this year. Lawsuits by patent trolls went up during 2020: by mid-year, they were 20% higher than the year before [pdf], and 30% higher than in 2018. In May, we wrote about a patent troll called Swirlate IP that sued five different companies, including ResMed, a company that makes ventilators. Swirlate, a limited liability company based in a “Pack and Mail Shoppe” in a strip mall in Plano, Texas, is linked to IP Edge, a large patent assertion company owned by three IP lawyers. 

In an emergency situation, policymakers should have taken big steps to protect the innovators who were coming up with low-cost COVID tests from patent threats that would stifle their life-saving efforts. In March, we wrote about one of the most potentially powerful strategies: using 28 U.S.C. § 1498, a law that allows the government to use or authorize others to use patented technology and make itself, rather than private entities, the defendant in a related patent infringement lawsuit – along with limiting damages to reasonable compensation. Policymakers also could have embraced something like the Open COVID Pledge, to make sure that IP doesn’t become an obstacle to the creation and distribution of treatments for the disease.  

Unfortunately, that’s not the direction Congress went in. Pro-patent lobbyists and lawyers went back to work, suggesting the same old “solutions” they pushed before the crisis: even more patents. Some patent-system insiders are seeking to change the law to allow patents on things the Supreme Court has already said aren’t eligible, like tests that measure biological facts about what’s present in a human body. 

So lawmakers dabbled with ideas that would make the patent system worse, not better. This year, we advocated against a bill that would have actually extended the terms of some patents by 10 years, and a bill that would have authorized customs agents to seize products at the border based on design patents. Another bad bill, the “Inventor Rights Act,” would have created a special class of patents deemed by the government to be “inventor-owned,” then given them special privileges that would make it easier to sue people and make those suits more harmful. For instance, “inventor-owned” patents, which many patent trolls would qualify for, would have been able to avoid review processes at the U.S. Patent and Trademark Office (PTO) designed to filter out bad patents, as well as critical venue rules created by the U.S. Supreme Court to prevent patent plaintiffs from taking advantage of especially patent-friendly judicial districts.  

Two Wins for Transparency

Even though we were often on the defense this year, it isn’t all bleak. One bright spot: we won a major victory in our Uniloc case, which went up to the Federal Circuit.  

Uniloc is one of the most prolific patent trolls, having filed more than 170 patent infringement lawsuits in 2018 alone. In a case it filed against Apple, an important issue came up: did Uniloc even have the right to sue anyone? Uniloc sought to seal the documents bearing on that question, documents likely to shed light on its relationship with its litigation funder, Fortress Investment Group. Fortress was the same group that assisted with the Labrador lawsuits over Theranos’ old patents.

In July, the Federal Circuit upheld Judge William Alsup’s decision that Uniloc’s sealing request had been “grossly excessive.” The case continues, with a narrower dispute remaining over third-party confidentiality in a smaller set of documents, such as the identification of companies that have patent licenses from Uniloc. 

When it comes to transparency in patent litigation, the Uniloc case unexpectedly led to another victory as well. Because oral arguments in the Federal Circuit appeal took place in April, during the coronavirus outbreak, the court held that they would have to be conducted remotely. We filed a motion asking for full public and media access to the telephonic hearing. The Federal Circuit embraced our request, providing real-time audio access to the public for the first time.  

Protecting Patent Reviews at the U.S. Patent and Trademark Office

Users and technologists have been harassed for years by patent trolls—typically, shell companies whose business boils down to patent threats and lawsuits, rather than creating any goods or services of their own. Patent trolling is a business that hurts small companies, is a drag on innovation, and can even harm our free speech rights.

One of the few tools that makes a small but noticeable dent in the harmful business of patent trolls is a process called inter partes review, or IPR. This process allows specialized judges at the Patent Trial and Appeal Board to take a second look at patents, to see if they should have been granted in the first place. Through the IPR process, the PTO has canceled all claims on more than 2,000 patents in recent years, including many software patents asserted by litigious patent trolls. 

This year, the Patent and Trademark Office reversed course. It has stopped instituting many IPR petitions for bureaucratic and procedural reasons, rather than carrying out the job as Congress intended. That’s why, together with several other groups supporting patent reform, we’ve asked senior Congressional representatives in both parties to apply more oversight to the PTO and insist that it do the job it was told to do: perform inter partes reviews efficiently and according to the rules. 

The all-out attempt to wreck IPRs hasn’t stopped. In the final days of the Trump administration, PTO Director Andrei Iancu is trying to push through rule changes that would weaken IPRs even further. Hundreds of you have spoken out against that proposal, which we hope will be set aside. 

In 2021, we’ll stay on guard to protect the elements of patent law, like IPRs and transparency in patent litigation, that have made the patent system a bit fairer for small businesses and technology users.  

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2020.

Joe Mullin

Competitive Compatibility: Year in Review 2020

2 months 1 week ago

2020 saw governments on three continents take action against the dominance of the biggest tech platforms, with a flurry of pro-competition rules, investigations and lawsuits. As exciting as this is, it's just the beginning. Antitrust enforcement is often a matter of years or even decades, and if history is any judge, dominant companies will seek to avert precedent-setting judgments (which might lead to breakups) and instead try to settle their lawsuits, negotiating concessions rather than taking their chances before a judge.

As an organization that yearns for the days when the Internet wasn't a group of five websites, each consisting of screenshots of text from the other four, we are excited to see this kind of enforcement underway, but that excitement is tempered by the fear that hasty, reflexive settlements and regulations might actually make the situation worse. For example, deputizing companies to perform the duties of the state by identifying and blocking unlawful speech makes it harder for new, better companies to enter the marketplace, because they can't afford the vast army of moderators that monopolists can pay for with the change they find down the back of the sofa in their campus mini-kitchens.

Interoperability Mandates

Thankfully, 2020 saw some very thoughtful approaches to competition that explicitly sought out ways to promote competition and give users more power over how their tech works, rather than blindly "punishing" companies with regulations that also raise barriers for would-be challengers. Both the U.S. Access Act and the EU Digital Services Act propose interoperability mandates—a requirement that the biggest tech companies provide managed access to their systems for startups, co-ops, and other potential competitors.

When it comes to undoing 40 years of indifferent antitrust enforcement, interoperability mandates are a great start, but they are only part of the solution. What happens if the mandatory system is sidelined by changing market conditions or deliberate subversion by monopolistic companies? And what about new technologies just a-borning: what can we do to prevent them from growing into monopolies, and starting the cycle over again?

The Good Kind of Tech Exceptionalism

There are lots of ways in which tech monopolies are unexceptional: just like many other concentrated industries, their growth was fueled by access to lots of capital, which allowed them to gobble up small companies before they could grow to be competitors; they engaged in mergers between notional competitors, making it harder for suppliers, customers, and workers to shop around for better deals; and they created vertical monopolies that cornered multiple markets.

But tech is exceptional, because it is based on universal machines (computers) and a universal network (the Internet), and that opens up the possibility of a different kind of interoperability—not the kind that is designed in from the start through standards, nor the kind imposed by a lawmaker.

Rather, this is adversarial interoperability, when you plug something in against the wishes of the person who made it, to make life better for the people who use it.

We have a name for this: Competitive Compatibility (ComCom for short).

ComCom

ComCom is deeply embedded in the history of tech. Every one of today's tech giants has a ComCom story in its history, like Apple's elegant solution to Microsoft's unreliable Office for Mac products.

But as scrappy firms become powerful incumbents, their relationship to ComCom changes: when they were using ComCom to outmaneuver the lumbering dinosaurs of a previous era, that was just “progress.” When some upstart comes along to do the same to them, that's a dangerous and illegitimate challenge to the natural order of things.

When it comes to getting their own way, monopolies have two major advantages: first, they have the extra profits that come from monopolization ("monopoly rents" in economics jargon); and second, the more concentrated an industry is, the easier it is for all the major companies in it to collude to spend those monopoly rents to buy policies that safeguard their profits.

For decades, the pirates of yore have declared themselves to be admirals and have set about ensuring that no one does unto them what they have done unto so many others. To that end, we've seen the steady expansion of cybersecurity law, copyright and para-copyright law, and patent law (as well as frightening new theories of copyright that make it much harder to use ComCom to help users escape walled gardens, lock-ins, and other anti-competitive tricks that help monopolists do monopoly).

Telling the Tale

We spent much of 2019 and 2020 gathering and publishing the largely forgotten histories of ComCom in tech development, from the early cable TV systems all the way to the birth of fintech.

Now it's time to mobilize. As 2021's crackdowns on monopoly proceed, lawmakers, courts, and regulators will be looking for solutions. We think ComCom should be in their toolkits: we imagine courts correcting monopoly abuses by prohibiting big companies from using legal tools to fight ComCom, and governments insisting on guarantees that ComCom will be tolerated by anyone selling products to the public sector. We’d also like to see legislatures adopt rules to promote ComCom, including reforms to copyright, cybersecurity, patent, and other laws that safeguard those who make new things that connect the old.

It's nearly 2021, and long past time for a revival of the trustbusting spirit. But it's also been 130 years since America's seminal antitrust laws were passed, and so it's also long past time to adopt some new tactics to fight the eternal problem of monopoly abuse.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2020.

Cory Doctorow

EFF’s Work in State Legislatures: Year In Review 2020

2 months 1 week ago

EFF works in state legislatures across the country to fight for your civil liberties. This year, the pandemic upended the priorities and plans of every statehouse. But, with your support, EFF was able to quickly respond to surveillance threats, defend privacy, and counter the ways the pandemic made matters worse.  Here are some highlights of our work—both in California and beyond.

Fighting Government Face Surveillance

After 2018 and 2019, which saw cities including San Francisco and Oakland passing municipal bans on government use of face recognition, many state lawmakers took notice of the growing momentum around this dangerous technology.

Unfortunately, not all of their ideas were good.

In California, EFF joined a broad coalition of civil liberties, civil rights, and labor advocates to oppose A.B. 2261, which proposed weak regulation of face surveillance. Modeled after a similar bill enacted in Washington state—a measure EFF and other civil liberties groups opposed—this bill threatened to normalize the increased use of face surveillance of Californians where they live and work.  Our allies included the ACLU of California, Oakland Privacy, the California Employment Lawyers Association, Service Employees International Union (SEIU) of California, and California Teamsters. This wide-ranging group illustrated how many people recognize the threat of face surveillance to their communities.

We continue to work with lawmakers across the country who are pushing for bills that would end government face surveillance at the state and city level. And we encourage communities across the country to join this movement with our About Face campaign. Take this opportunity to advocate for the end of government use of this harmful technology in your own neighborhoods.

Standing Up for Strong Privacy

Consumer data privacy continued to be a major focus for legislators and voters in 2020, both as it related to the pandemic and as a more general issue. The California Consumer Privacy Act (CCPA) went into effect at the start of the year. More changes are coming because California’s voters enacted Proposition 24, which amended the CCPA. EFF did not take a position on that mixed bag of steps forward, steps back, and missed opportunities. Proposition 24 goes into effect in 2023, and CCPA remains the law until then. We will work to ensure pro-privacy implementation of Proposition 24, and continue to fight for strong privacy laws.

There are a lot of weak privacy bills out there. EFF again opposed Washington’s “Privacy Act,” which failed to pass the state’s legislature for the second year in a row. The bill received widespread support from big tech companies. It’s no wonder they like this weak, token effort at reining in corporations’ rampant misuse of personal data. We expect privacy to be at issue again in Washington, where lawmakers are strongly influenced by tech companies in their backyard.

Back in California, EFF stood against A.B. 2004, which would have laid the groundwork for an ill-considered blockchain-based system for “immunity passports.” Specifically, the bill directed the state to set up a verified health credential that shows the results of someone’s last COVID-19 test, for purposes of restricting access to public places. EFF believes that people should not be forced to present health data on their smartphones to enter public places. By claiming that blockchain technology was part of a unique solution to the public health crisis, A.B. 2004 was opportunism at its worst. We were proud to stand against this bill with allies including Mozilla and the ACLU of California. Governor Gavin Newsom vetoed it.

We also called on Gov. Newsom to add necessary privacy protections to any pandemic response program. The California legislature failed to pass a pair of bills (A.B. 660 and A.B. 1782) that would have instituted important privacy guardrails on contact tracing programs. We will continue to work on this issue in the coming year.

Expanding Broadband Access

EFF was proud to co-sponsor California S.B. 1130, by Sen. Lena Gonzalez, which would have paved the way for state-financed networks with bandwidth to handle Internet traffic for decades to come. Expanding broadband access, particularly fiber, is key. Fiber-to-the-home is the best option for California’s future, as EFF explained in a filing to the California Public Utilities Commission.

S.B. 1130 passed the Senate 30-9 and had the support of Gov. Newsom. Yet, with just hours left in this year’s legislative session, the California Assembly refused to hear S.B. 1130, or any deal to expand broadband access, without offering any explanation to the more than 50 groups that supported the bill.

But we’re not giving up. In fact, we’ve continued to build our coalition demanding greater broadband access. On December 7, the first day of the California legislature’s new session, Sen. Gonzalez filed S.B. 4, which shares the same core principles as S.B. 1130.

If you are a California business, non-profit, local elected official, or anchor institution, please sign on in support of S.B. 4. EFF will include you in a letter updating the legislature on how wide and deep support for this legislation runs. Dozens of organizations and elected officials have already endorsed this bill, with more to come. Join us!

We’re talking to lawmakers in other states who are also interested in overhauling their Internet infrastructure. The weaknesses of many networks have become crystal clear as more people work from home and attend school entirely online.

Looking Ahead

The pandemic and the many ways it affects digital liberties will continue to be a main focus in many legislatures. EFF will continue to work with state lawmakers across the country to enact laws to expand consumer protections and access to critical technologies, to ensure that civil liberties are part of any pandemic response, and to oppose federal efforts to take away rights in states that have passed strong laws.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2020.

Hayley Tsukayama

Student Privacy and the Fight to Keep Spying Out of Schools: Year in Review 2020

2 months 1 week ago

As students were sent home from school in the spring due to the coronavirus pandemic, schools followed them home with invasive surveillance technology. This expansion, spurred by the surge in remote learning, was an opportunistic move by tech companies and schools already in a race to control students through technology.

The Student Surveillance Ecosystem Pre-Pandemic

Before the pandemic, the school panopticon toolkit was already wide-ranging. Many schools relied on cameras and microphones installed in buildings to watch students go about their day. The cameras might be equipped with facial recognition; the microphones might have “aggression detection” capabilities. Facial recognition is a biased technology, and cities have started banning government use of face surveillance because of this issue. Aggression detection technology simply doesn’t work.

Some software scans students’ social media posts, both during and after school hours. Schools can even track students’ personal devices (as opposed to school-issued ones) by requiring a certain kind of security certificate to use the school Internet, giving administrators the ability to monitor browser history and the messages students send. These technologies cause real harm, including disproportionately impacting students of color and causing mental health issues. And knowing they might be punished for speaking up, like the Georgia student suspended for posting about inadequate coronavirus mitigation measures, is inherently chilling to students’ freedom of expression.

In response to this encroachment of surveillance into schools, EFF created a Surveillance Self-Defense Guide written especially for students. It describes the technologies that students can be subject to, the risks they pose, and how to minimize those risks—and how to make the case to parents, teachers, and school administrators that spy tech doesn’t belong in a place of learning.

This was already an Orwellian situation before the pandemic. Now, with millions of students studying from home to stay safe from COVID-19, new threats have popped up.

The Turbocharging of Remote Proctoring

Remote proctoring refers to a class of monitoring technology that spies on students as they complete exams. It is incredibly invasive, often uses facial recognition software and AI monitoring, collects massive amounts of sensitive data (including, in some cases, biometric information), and scrutinizes students’ every facial expression and movement for signs of academic dishonesty. Proctoring software can also be biased against students who do not fit the presumed white, neurotypical, and able-bodied "norm," further exposing the most vulnerable students to harm. Ultimately, these apps cannot keep their promise to stop cheating. Students will always be able to undermine these tools, making this technology merely a further normalization of surveillance in education.

These apps subject students to unnecessary and invasive surveillance, and EFF has been proud to stand with students on this issue. We objected to the California Bar’s required use of ExamSoft in the bar exam and won a partial victory when, shortly after receiving our letter, the Clerk and Executive Officer of the California Supreme Court asked the state bar to propose a timetable within 60 days for the deletion of all 2020 bar applicants’ personally identifiable information collected via ExamSoft. When ExamSoft flagged over a third of all online test-takers, we pushed for the Bar to give examinees the additional time and information necessary to defend themselves against the many likely baseless accusations of cheating. And when five U.S. senators began investigating these apps, we reminded them that the entire business model of proctoring companies is surveillance of students, and that you can’t make spying less invasive.

University App Mandates

Some universities have mandated that students install COVID-19-related technology on their personal devices as a condition for returning to campus or enrolling in classes. EFF has been clear that this is the wrong call. Exposure notification apps, quarantine enforcement programs, and similar new technologies are untested and unproven, and mandating them risks exacerbating existing inequalities in access to technology and education. Schools must remove any such mandates from student agreements or commitments, and further should pledge not to mandate installation of any technology. EFF is urging universities to rethink these mandates and commit to our University App Mandate Pledge: six transparency and privacy-enhancing policies that university officials must adopt to protect the privacy, security, and transparency of their community members. Students, staff, faculty, and university community members can speak up here.

EFF will continue to stand for student privacy. Whether it’s creating resources like our Privacy for Students guide, continuing to write about emerging student privacy issues, or teaching journalism graduate students how to think and write about data privacy issues that affect students, we’ll be here to fight for and reassure students: invasive surveillance is not normal and it has no place in your school. 

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2020.

Lindsay Oliver

Section 215 Expired: Year in Review 2020

2 months 1 week ago

On March 15, 2020, Section 215 of the PATRIOT Act—a surveillance law with a rich history of government overreach and abuse—expired due to its sunset clause. Along with two other PATRIOT Act provisions, Section 215 lapsed after lawmakers failed to reach an agreement on a broader set of reforms to the Foreign Intelligence Surveillance Act (FISA).

In the week before the law expired, the House of Representatives passed the USA FREEDOM Reauthorization Act, without committee markup or floor amendments, which would have extended Section 215 for three more years, along with some modest reforms. 

As any cartoon viewer knows, in order for any bill to become law, the House and Senate must pass an identical bill, and the President must sign it. That didn’t happen with the USA FREEDOM Reauthorization Act. Knowing that the House’s bill would fail in the Senate, Senate Majority Leader Mitch McConnell brought a bill to the floor that would have extended all the expiring provisions for another 77 days, without any reforms at all. Senator McConnell's extension passed the Senate without debate.

But the House of Representatives left town without passing Senator McConnell’s bill. That meant that Section 215 of the USA PATRIOT Act, along with the so-called lone wolf and roving wiretap provisions, expired. Section 215 is best known as the law the intelligence community relied on to conduct mass surveillance of Americans’ telephone records, a program held to be likely illegal by two federal courts of appeals. It has other, largely secret uses as well.

But is it dead?

Although Section 215 and the two other provisions have expired, that doesn’t mean they’re gone forever. For example, in 2015, during the debate over the USA FREEDOM Act, these same provisions were also allowed to expire for a short period of time, and then Congress reauthorized them for another four years. While transparency is still lacking in how these programs operate, the intelligence community did not report a disruption in any of these “critical” programs at that time. If Congress chooses to reauthorize these programs early in the new Congress, this lapse in 2020 may not have much of an overall impact.

In addition, the New York Times and others have noted that Section 215’s expiration clause contains an exception permitting the intelligence community to use the law for investigations that were ongoing at the time of expiration or to investigate “offenses or potential offenses” that occurred before the sunset. Broad reliance on this exception would subvert Congress’s will when it repeatedly included sunset provisions to cause Section 215 to expire, and the Foreign Intelligence Surveillance Court should carefully—and publicly—circumscribe any attempt to rely on it.

EFF has repeatedly argued that if Congress can’t agree on real reforms to these problematic laws, they should be allowed to expire and stay that way. While we are pleased that Congress didn't mechanically reauthorize Section 215, it is only one of a number of largely overlapping surveillance authorities. And with a new Congress and a new Administration, the House and the Senate should take this unique opportunity to learn more about these provisions and create additional oversight into the surveillance programs that rely on them. The expired provisions should remain expired until Congress enacts the additional, meaningful reforms we’ve been seeking.

To be clear, even the permanent loss of the current version of the law would still leave the government with a range of incredibly powerful tools. These include other provisions of FISA as well as surveillance authorities used in criminal investigations, many of which can include gag orders to protect sensitive information.

But allowing Section 215 and the other provisions to expire in 2020 means that Congress has the opportunity to discuss whether these authorities are actually needed, without the pressure of a ticking clock.

You can read more here about what EFF is calling for when it comes to reining in NSA spying, reforming FISA, and restoring Americans’ privacy.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2020.

 

Related Cases: Jewel v. NSA
India McKinney

EFF Members Rise Up: 2020 Year in Review

2 months 1 week ago

These days it's easy to feel adrift from the people in your life. At times, physical distance alongside social and political unrest seems like a never-ending rising tide. It can feel overwhelming. We have felt it here at EFF, but thankfully during our 30th Anniversary, EFF's purpose has never been more clear. EFF's members have shown that even through the worst of times, we will still come together to fight against police surveillance, defend the use of strong encryption, and protect our rights to free speech on the Internet (to name just a few of this year's biggest battles).

EFF members didn't skip a beat and proved the strength of their numbers this year. Last spring, the EFF membership team was tasked with planning the first-ever virtual Members' Speakeasy. Despite being an organization whose entire purpose is to fight for digital rights, we had to climb a steep learning curve to throw a successful program in virtual space! Thankfully, our members showed that they are ready and willing to fight for our mission, joining a live workshop to research and collect data about American police surveillance technologies. This event helped launch our Atlas of Surveillance database, which aims to raise awareness about the surveillance technologies that law enforcement agencies have in your neighborhood.

It doesn't stop there. July 10th marked EFF's 30th Anniversary of supporting you in the fight for a better digital future. We knew that we wanted to do something big. To mark the occasion, EFF presented a seven-hour live-streamed event that included: DJ performances, our first EFF30 Fireside chat discussing the future of encryption, video game streams, and even our 4th Annual Tech Trivia where viewers could test the limits of their nerdiness with the contestants. This anniversary stream was bigger and more fun than we could have ever imagined. We're grateful to EFF's members for showing up for the Internet and celebrating with us—even in cyberspace.

The summer is one of EFF's busiest times of year with staff members at hacker conferences including HOPE, BSides Las Vegas, Black Hat USA, and DEF CON. In fact, the passionate supporters at these events can raise enough money to fund one EFF lawyer and one activist for a full year. Even without the typical throngs of people crowding the meeting halls of New York and Las Vegas, EFF supporters rose to the challenge of keeping our team going strong.

EFF launched a limited-edition DEF CON member t-shirt online for the first time, featuring an appropriately glitched-out version of the globe with a pop-up menu asking to reboot 2020. This t-shirt also included EFF's hardest-ever hidden puzzle (Seriously. Try it!). We're in awe of the incredible support from members during these conferences, and we're thankful for the chance to connect with so many virtually. It was an important reminder that we're not alone, and that we'll find strength in each other.

Now as 2020 comes to a close, people around the world continue showing their support for digital rights. We have relied on digital connections more than ever this year, and it has brought dangerous currents—like surveillance, censorship, lack of digital access, and much more—closer to the surface. That makes EFF's mission to preserve our online privacy, security, and free expression rights crucial.

Thank you to all of the EFF members who joined forces with us and kept the fight for Internet freedom strong during this challenging year. Our successes are only possible with help from people just like you. If you haven't joined EFF yet, now is a great time to do it!

Join EFF Now

Help EFF unlock special grants in our Year-End Challenge!

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2020.

Christian Romero

In 2020, Congress Threatened Our Speech and Security With the “EARN IT” Act

2 months 1 week ago

One nice thing about democracy is that—at least in theory—we don’t need permission to speak freely and privately. We don’t have to prove that our speech meets the government’s criteria, online or offline. We don’t have to “earn” our rights to free speech or privacy.

Times have changed. Today, some U.S. senators have come to the view that speech in the online world is an exceptional case, in which website owners need to “earn it”—whether they intend to carefully moderate user content, or let users speak freely. In 2020, two Senators introduced a bill that would limit speech and security online, titled the EARN IT Act, which is also an acronym that stands for “Eliminating Abusive and Rampant Neglect of Interactive Technologies.”  

Using crimes against children as an excuse to blow a hole in critical legal protections for online speech, Senators Lindsey Graham (R-SC) and Richard Blumenthal (D-CT) co-sponsored the bill. The original EARN IT Act created a 19-person government commission, stacked with seats reserved for law enforcement, that would create “best practices” for online platforms to follow. This wouldn’t have just targeted big websites like Facebook: the new rules would apply to local news websites, hobby blogs, and email services, among other online services. Anyone who didn’t follow the “best practices” would lose critical legal protections and could be held liable, or prosecuted, for the actions of the people who use their services. 

It’s clear what practices law enforcement wants Internet companies and website hosts to adopt. U.S. Attorney General William Barr has said it repeatedly—in his view, law enforcement agencies should always have access to encrypted communications. But as we’ve explained over and over again, encryption with a “backdoor” is just broken encryption. It doesn’t matter if you call the means of accessing encrypted messages “client-side scanning” or “endpoint filtering” or anything else. Backdoors don’t just get used by good guys, either. Authoritarian governments and criminals are always interested in reading other people’s messages. 
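To illustrate why any guaranteed access mechanism weakens everyone's security, here is a minimal sketch of "key escrow" using the Python cryptography package. The scheme, key names, and message below are hypothetical assumptions for illustration, not any real product's design.

```python
# Hypothetical sketch of key escrow: the per-message key is wrapped for the
# recipient AND for a third party, so compromise of the escrow key exposes
# every message. Requires: pip install cryptography
from cryptography.fernet import Fernet

recipient_key = Fernet.generate_key()
escrow_key = Fernet.generate_key()        # the "backdoor" key held by someone else

message_key = Fernet.generate_key()       # fresh key for this one message
ciphertext = Fernet(message_key).encrypt(b"meet at 6pm")

# The message key is wrapped twice: once for the recipient, once for escrow.
wrapped_for_recipient = Fernet(recipient_key).encrypt(message_key)
wrapped_for_escrow = Fernet(escrow_key).encrypt(message_key)

# Anyone who obtains escrow_key (by court order, hack, or insider leak)
# can unwrap the message key and read the message:
recovered_key = Fernet(escrow_key).decrypt(wrapped_for_escrow)
print(Fernet(recovered_key).decrypt(ciphertext))  # b'meet at 6pm'
```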

The structure of the EARN IT Act was designed to allow Internet speech to be monitored by law enforcement. It would have let law enforcement agencies, from the FBI down to local police, scan every message sent online. Any company that didn’t grant law enforcement legal access to digital messages (a “best practice” sure to have been mandated by the commission) would be subject to lawsuits or even criminal prosecutions. If it had passed, the bill would have been devastating to both privacy and free expression. That’s because without liability protections, websites will censor and regulate user speech, or even eliminate certain categories of speech altogether. 

But the original bill didn’t even advance out of committee—because public outcry against it was overwhelming. The law as proposed was so unpopular, the EARN IT Act was amended to reduce the power of the government commission. Rather than creating an authoritative law enforcement-dominated commission, the new version of the EARN IT Act establishes the same commission, but simply makes it “advisory.” Instead, the bill gives the power to regulate the Internet to state legislatures. 

This amended EARN IT Act isn’t any better than the original, and in some ways is even worse. It gives wide berth to legislatures in all 50 states, as well as U.S. territories, to regulate the Internet in just about any way they want—as long as the nominal purpose is to stop the online abuse of children. The Senate Judiciary Committee also passed an amendment that purported to protect end-to-end encryption from being violated by the states. That amendment was a worthy nod to the outpouring of concern for the fate of encryption that this surveillance bill had prompted, but it doesn’t go far enough. 

Section 230 Will Be Flogged Until Morale Improves

Looking beyond encryption, the other real target here was Section 230 (47 U.S.C. §230), a law that has been falsely maligned as providing special protection to Big Tech companies. In reality, Section 230 protects the speech and security of everyday Internet users.

Lawmakers in Congress filled the final months of 2020 with ideas about how to weaken Section 230, a federal law that is simply not broken. The PACT Act was pitched as an effort to address the dominance of the biggest online platforms, but it would have ended up censoring users and further entrenching those platforms' dominance. Another late-2020 proposal, the Online Content Policy Modernization Act, was simply an unconstitutional mess.

These ideas are going to keep coming back in 2021. There’s justifiable criticism of Big Tech, but it’s led many politicians into misguided attempts to control online speech, or give police more power to control and surveil the Internet. If a wrongheaded proposal to gut Section 230 passes, the collateral damage could be severe.  

We all know there are serious problems in the online world, like disinformation, hateful speech, and eroded privacy. By and large, the big problems aren’t related to Section 230, which still fulfills its goal of protecting user speech. The big problems exist because a few tech companies have too much power. Giant tech companies act increasingly like monopolies, and their content moderation systems are broken.

We do need new solutions to these problems. And they’re out there: we need to think about beefing up antitrust enforcement, reforming anti-competitive laws like the CFAA, creating strong privacy regulations, freeing user data by making it portable, and creating opportunities for competitive compatibility.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2020.

Joe Mullin

What Comes Next for the Santa Clara Principles: 2020 in Review

2 months 1 week ago

For many years, we have urged platforms to operate with more transparency—both to the public and to their users—and to ensure that the people who use their services have the ability to appeal wrongful content moderation decisions. As such, in conjunction with several other organizations and academic experts, we launched the Santa Clara Principles on Transparency and Accountability in Content Moderation in February 2018 on the sidelines of an event on content moderation at Santa Clara University to make our demands clear to companies. 

Later that same year, we worked with a group of more than one hundred organizations from dozens of countries to send a strong message to Facebook CEO Mark Zuckerberg, reminding him that much of the world’s ability to speak freely is in his hands, and urging him to ensure that Facebook offer appeals in every circumstance. That campaign was a success: Not only did Facebook respond to our letter, but they broadened the right to appeal to most cases, with a handful of exceptions.

From that action, we also began developing a loose coalition of other experts—NGOs, academics, and journalists—engaged broadly in the topic of platform governance, and have continued (with the help of our allies) to grow that group and broaden collaboration in the field.

In 2019, we succeeded in getting a dozen companies to endorse the Principles, with several companies moving toward fuller compliance. One company, Reddit, went all the way, implementing the Principles on its platform.

In the meantime, we heard from our allies across the world that there was much more we could be doing, and so we collaborated with a number of our friends in the field to embark on an ambitious process of reviewing the Principles, which included an open call for submissions. We received more than forty sets of recommendations from more than ten countries, and have been reviewing them along with a small subset of our allies: Global Partners Digital, Open Technology Initiative, Article 19, Center for Democracy and Technology, Ranking Digital Rights, ACLU of Northern California, Witness, Brennan Center for Justice, Red en Defensa de los Derechos Digitales, and AccessNow.

We’re excited for what comes next: In the next few months, we will be working with those groups to finish reviewing the submissions, considering the recommendations, and publishing a report. And we look forward to sharing what we’ve heard—after all, it is the users around the world whose voices must be taken into account by platforms.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2020.

Jillian C. York