COVID-19 Tracking Technology Will Not Save Us

2 months 3 weeks ago

Technology may be part of the solution to stopping the spread of COVID-19, but apps alone will not save us. As more states develop COVID exposure notification apps, institutions and the people they serve should remain skeptical and remember the bigger picture. This is still experimental, unproven technology, both in terms of how it works under the hood and how humans will interact with it. And even the best-designed app will be no substitute for public health basics like widespread testing and interview-based contact tracing.

On top of that, any benefits of this technology will be unevenly distributed. Any app-based or smartphone-based solution will systematically miss the groups least likely to have a cellphone yet most at risk of COVID-19 and most in need of resources: in the United States, that includes elderly people, people without housing, and those living in rural communities. 

Ultimately, exposure notification technology won’t bail out poor planning or replace inadequate public health infrastructure, but it could misdirect resources and instill a false sense of safety.

Unproven Technology

Exposure notification apps, most notably those built on top of Apple and Google’s Exposure Notification API, promise to notify a smartphone user if they have been in prolonged close contact—for instance, within 6 feet for at least 15 minutes—with someone who has tested positive for the virus. The apps use smartphones’ Bluetooth functionality (not location data) to sense how far away other phones are, and store random identifiers on the user’s device.
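To make that design concrete, here is a minimal sketch of the matching idea: each phone derives short-lived identifiers from a secret daily key, remembers the identifiers it hears nearby, and later checks them against keys published by users who test positive. This is an illustration only, not the actual Apple and Google protocol, which derives its Rolling Proximity Identifiers with HKDF and AES rather than the HMAC shortcut used here.

    # Simplified sketch of Bluetooth exposure notification matching. NOT the real
    # Apple/Google Exposure Notification protocol; it only illustrates how locally
    # stored random identifiers can be checked against published diagnosis keys.
    import hmac, hashlib, secrets

    def rolling_identifiers(daily_key: bytes, intervals: int = 144) -> set:
        """Derive short-lived broadcast identifiers from a device's daily key."""
        return {
            hmac.new(daily_key, i.to_bytes(4, "big"), hashlib.sha256).digest()[:16]
            for i in range(intervals)
        }

    my_daily_key = secrets.token_bytes(16)   # stays on the phone unless the user tests positive
    heard_identifiers = set()                # filled in by the Bluetooth scanner in a real app

    # When someone tests positive, their daily keys are published by the health
    # authority. Each phone re-derives that person's identifiers locally and
    # checks for an overlap with what it heard; no location data is involved.
    published_diagnosis_keys = []            # downloaded periodically from a server
    exposed = any(rolling_identifiers(key) & heard_identifiers
                  for key in published_diagnosis_keys)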

But Bluetooth was not made to assist with contact tracing and other public health efforts, and the differing hardware properties of various phones can make it hard to consistently measure distances accurately.
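The distance estimate itself typically comes from received signal strength (RSSI) plugged into a path-loss model, and the calibration constants differ from phone to phone. The sketch below uses invented numbers purely to illustrate how sensitive the estimate is: a few decibels of calibration error roughly halves or doubles the computed distance.

    # Illustrative only: turning a Bluetooth signal-strength reading (RSSI) into a
    # distance estimate with a log-distance path-loss model. The constants are
    # assumptions, not values taken from any real handset.
    def estimate_distance_m(rssi_dbm: float,
                            measured_power_dbm: float = -60,  # expected RSSI at 1 meter
                            path_loss_exponent: float = 2.0) -> float:
        return 10 ** ((measured_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

    rssi = -75  # one reading from a nearby phone
    # The same reading implies very different distances if the 1-meter calibration
    # is off by a few decibels, as it often is across phone models.
    print(round(estimate_distance_m(rssi, measured_power_dbm=-60), 1))  # ~5.6 m
    print(round(estimate_distance_m(rssi, measured_power_dbm=-65), 1))  # ~3.2 m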

On top of technical shortcomings, it is also not yet clear how people will interact with the technology itself. How are people likely to react to a phone notification informing them that they were exposed to someone with COVID? Will they ignore it? Will they self-isolate? If they seek testing, will it be available to them? Public trust is fragile. A high rate of false negatives (or a perception thereof) could lead to people relaxing measures like social distancing and masking, while false positives could lead to users ignoring notifications.

Unevenly Distributed Benefits

Any app that promises to track, trace, or notify COVID cases will disproportionately miss wide swaths of the population. Not everyone has a mobile phone. Even fewer own a smartphone, and even fewer still have an iPhone or Android running the most up-to-date operating system. Some people own multiple phones to use for different purposes, while others might share a phone with a family or household. Smartphones do not equate to individuals, and public health authorities cannot make critical decisions—for example, about where to allocate resources or about who gets tested or vaccinated—based on smartphone app data.

In the U.S., smartphone ownership is only about 80% to begin with. And the communities least likely to have a smartphone in the U.S.—such as elderly people or homeless people—are also the ones at higher risk for COVID-19. For people 65 years or older, for example, the rate of smartphone ownership declines to about 50%. Out-of-date smartphone hardware or software can also make it harder to install and use a COVID tracking app. For Android users in particular, many older phone models stop getting operating system updates at some point, and some never receive them at all.

And smartphone penetration data and specs do not tell the whole story, in the U.S. or internationally. Overbroad surveillance hits marginalized communities the hardest, just as COVID-19 has. In addition to potentially directing public health resources away from those who need them most, new data collection systems could exacerbate existing surveillance and targeting of marginalized groups.

Technology Will Not Be a Magic Bullet

Even with these drawbacks, some will still ask, “So what if it’s not perfect? Anything that can help fight COVID-19 is good. Even if the benefit is small, what’s the harm?” But approaching exposure notification apps and other COVID-related tracking technology as a magic bullet risks diverting resources away from more important things like widespread testing, contact tracing, and isolation support. The presence of potentially helpful technology does not change the need for these fundamentals.

Relying on unproven, experimental technology can also lead to a false sense of safety, leading to a moral hazard for institutions like universities that have big incentives to reopen: even if they do not have, for example, enough contact tracers to reopen, they might move ahead anyway and rely on an app to make up for it. 

If and when public health officials conclude that spread of COVID-19 is low enough to resume normal activities, robust interview-based contact tracing is in place, and testing is available with prompt results, exposure notification apps may have a role to play. Until then, relying on this untested technology as a fundamental pillar of public health response would be a mistake.

Gennie Gebhart

Pass the Payment Choice Act

2 months 3 weeks ago

A growing number of retail businesses are refusing to let their customers pay in cash. This is bad for privacy. Higher-tech payment methods, like credit cards and online payment systems, often create an indelible record of what we bought, and at what time and place. How can you stop data thieves, data brokers, and police from snooping on your purchase history? Pay in cash.

Stores with “cash not accepted” policies are also unfair to the millions of Americans who are unbanked or underbanked, and thus lack the ability to pay without cash. This cohort disproportionately includes people of color and people with lower incomes. Stores that require high-tech payment methods discriminate against people without access to that tech.

So EFF supports the Payment Choice Act (S. 4145, H.R. 2650), a federal bill sponsored by Senators Kevin Cramer and Robert Menendez and Representative Donald Payne. It would require retail stores to accept cash from in-person customers. The bill ensures effective enforcement with a private right of action.

We proudly signed a coalition letter in support of the Payment Choice Act. This effort, organized by the Consumer Federation of America and Consumer Action, is joined by 51 consumer, privacy, and civil rights advocacy organizations. The letter explains:

Unbanked consumers have little access to noncash forms of payment. Without a bank account, they are unable to obtain credit or debit cards or to use other noncash payment methods, with the possible exception of prepaid cards. Furthermore, when consumers are forced to pay for goods and services in cashless transactions, they (as well as the businesses where they shop) are also often forced to incur added expenses in the form of network and transaction fees.

Furthermore, noncash transactions generate vast amounts of data, recording the time, date, location, amount, and subject of each consumer’s purchase. Those data are available to digital marketers and advertisers who are engaged in developing and refining increasingly sophisticated techniques to identify and target potential customers. Paying with cash provides consumers with significantly more privacy than do electronic forms of payment.

While some have expressed concerns about accepting cash in the midst of the pandemic, others can only pay in cash, and we cannot allow our current crisis to solidify discrimination permanently. Congress should protect cash payment.

Adam Schwartz

EFF Pilots an Audio Version of EFFector

2 months 3 weeks ago

Today, we are launching an audio version of our monthly-ish newsletter EFFector to give you a new way to learn about the latest in online freedom, and offer greater accessibility to anyone who is visually impaired or would just like to listen!

Listen Now!

Surveillance Shouldn’t Be a Prerequisite for an Education - EFFector 32.25

Since 1990 the Electronic Frontier Foundation has published EFFector to help keep readers up to date on digital rights issues. The intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is jam-packed with links to updates, announcements, blog posts, and other stories to help keep readers up to date on the movement to protect online privacy and free expression.

The audio version of EFFector is a reading of our newsletter, made to bring EFF issues to more people in a new way. Be sure to listen on our YouTube channel just a few days after we send out the newsletter. 

Keep in mind that EFFector is just a summary—if you like the stories you hear or read in EFFector, check out the full issue, along with its links to the full Deeplinks blog posts, press releases, and news stories. Just click any of the links in the newsletter to read the full stories. Don't forget—you can always subscribe to receive EFFector by email or listen on EFF's YouTube channel and at the Internet Archive.

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero

Cryptographer and Entrepreneur Jon Callas Joins EFF as Technology Projects Director

2 months 3 weeks ago

Some of the most important work we do at EFF is to build technologies that protect users’ privacy and security, and to give developers tools to make the entire Internet ecosystem safer and more secure. Every day, EFF’s talented and dedicated computer scientists and engineers are creating and improving our free, open source extensions, add-ons, and software to solve the problems of creepy tracking and unreliable encryption on the web.

Joining EFF this week to direct and shepherd these technology projects is internationally recognized cybersecurity and encryption expert Jon Callas. He will be working with our technologists on Privacy Badger, a browser add-on that stops advertisers and other third-party trackers from secretly tracking users’ web browsing, and HTTPS Everywhere, a Firefox, Chrome, and Opera extension that encrypts user communications with major websites, to name a few of EFF’s tech tools.

Callas will also bring his considerable crypto and security chops to our policy efforts around encryption and securing the web. In the last two decades he has designed and built core cryptographic and concurrent programming systems that are in use by hundreds of millions of people.

As an entrepreneur, Callas has been at the center of key security and privacy advancements in mobile communications and email—the best-known of which is PGP (Pretty Good Privacy), one of the most important encryption standards to date. He was chief scientist at the original PGP Inc., and co-founded PGP Corp. in 2002. Later, Callas was chief technology officer at Internet security company Entrust, and co-founded encrypted communications firm Silent Circle, where he led teams making apps for encrypted chat and phone calls, including secure conference calls and an extra-secure Android phone called Blackphone.

Callas also did a couple of stints at Apple, where he helped design the encryption system to protect data stored on the Mac, and led a team that hacked new products to expose vulnerabilities before release. Along the way, he has garnered extensive leadership experience, having managed large engineering teams. In 2018, Callas left the corporate world to focus on policy as a technology fellow at the ACLU. In July he took aim at fatal flaws in the UK’s proposal to force service providers to give the government “exceptional access” to people’s encrypted communications (in other words, let the government secretly access private, encrypted messages).

The proposal’s authors denied the plan would “break” encryption, saying it would merely suppress notifications that the government happened to be accessing communications that people believe are secure, private, and free of interception. As Callas wrote at the ACLU, “a proposal that keeps encryption while breaking confidentiality is a distinction without a difference.”

We couldn’t agree more. EFF has fought since its founding 30 years ago to keep the government from breaking, or forcing others to break, encryption, and for people’s right to private and secure communications. We’re proud to have such a talented and passionate advocate for these principles on our team. Welcome, Jon!

Karen Gullo

It’s Past Time for Coinbase to Issue Transparency Reports

2 months 3 weeks ago

EFF has become increasingly concerned that payment processors are being asked to turn over information on their customers, without any mechanism for the public to know who is making those requests, or how often. That’s why we are calling on Coinbase—one of the largest cryptocurrency exchanges in the country—to start releasing regular transparency reports that provide insight into how many government requests for information it receives, and how it deals with them. These are difficult decisions with serious consequences, and they should not be made in the dark.

Financial data can be among the most sensitive types of information we produce. How you spend your money can reveal a lot about your daily habits, the causes you care about, who you hang out with, and where you go. Choosing to comply with or reject a government request for this user data—or choosing to shut down an account—can have a huge impact on what types of speech can thrive online. Transparency reports are important tools for accountability for companies that make these important decisions. Cryptocurrency exchanges should especially understand the importance of the privacy of this information, as their users tend to prize both the cash-like anonymity of cryptocurrency, and its inherent resistance to censorship. For these reasons, cryptocurrency transactions are often sensitive—and more likely to carry with them an expectation of privacy.

At least one of Coinbase’s competitors, Kraken, has already recognized the importance of being open on this topic, and has publicly released information on the global law enforcement requests it receives. Providing this accountability is particularly important when it comes to financial data, as such records can often be turned over with a subpoena, a 314(a) request, or a National Security Letter—none of which requires review from a judge before being sent to the financial service provider. And the need for cryptocurrency companies such as Coinbase to be open with consumers is only growing, as courts have not sided with consumer privacy when it comes to these requests.

As we wrote in July, in U.S. v. Gratkowski, the U.S. Court of Appeals for the Fifth Circuit ruled that law enforcement does not need to get a warrant in order to obtain financial transaction data from cryptocurrency exchanges—in this instance, the exchange was Coinbase. In that case, the court relied on the third-party doctrine. This doctrine holds that when people use services such as banks, they lose their reasonable expectation of privacy in the information that they turn over to a third party.

The third-party doctrine, and the court’s reliance on it in Gratkowski, is wrong. Storing data with a third party should not mean that users lose any reasonable expectation of privacy in it. That makes no sense in a world where everyone navigates daily life by relying on services, such as email, that give third parties access to sensitive information.

But we do not know how many other cases like Gratkowski are out there, which is why we need more transparency from Coinbase. By providing the public with data on how often law enforcement seeks user data and how often services comply, transparency reports show whether companies are living up to their promises to protect user privacy. They also serve an important secondary role by providing details on government surveillance activities.

As one of the largest individual companies in the U.S. cryptocurrency market, Coinbase wields tremendous power and influence over this dynamic. It should stand up for its users and also use its market power and influence to show others that transparency reports are an industry standard for all cryptocurrency exchanges. Releasing a transparency report would be one way for Coinbase to display leadership and fill in the gaps in our current knowledge, by simply shining a much-needed light on government requests for information.

Hayley Tsukayama

How California’s Assembly Killed The Effort to Expand Broadband for All Californians

2 months 3 weeks ago

California is facing a broadband access crisis, as parents are relying more on the Internet every day trying to keep their jobs in the midst of the pandemic while remotely educating their kids. The people of California need help, and the state should move forward now to begin the work needed to finally close the digital divide. Yet with just hours left in this year’s legislative session, the California Assembly refused to hear SB 1130, or any deal, to expand broadband access—a refusal that came out of the blue, without any explanation to the more than 50 groups that supported this bill. And that kind of blockade is only possible at the direction of California’s Speaker of the Assembly, Speaker Anthony Rendon.

A deal to expand broadband would have provided more than $100 million a year to secure access to high-speed Internet for families, first responders, and seniors across the state. Senator Lena Gonzalez built a broad coalition of support for this bill, and had the support of the California Senate and Governor Gavin Newsom.

As Sen. Gonzalez said in a press release on the bill, “During this crisis, children are sitting outside Taco Bell so they can access the Internet to do their homework, but the Assembly chose to kill SB 1130, the only viable solution in the state legislature to help close the digital divide and provide reliable broadband infrastructure for California students, parents, educators, and first responders in our communities.”

Yet the Assembly insisted on poison pill amendments that support big industry instead of California residents and families. Despite your hundreds of phone calls and letters of support for this bill, the Assembly failed to do what’s right by the people of California this session.

We won’t stop fighting. EFF was proud to co-sponsor this bill with Common Sense Media, and will continue to explore all options to get the state to address this problem in the coming months and next session. Why? Because we, too, believe that every student should have an Internet connection at least as good as the Taco Bell down the street.

Playing Politics With Necessities

SB 1130 was in a strong position heading into the Assembly. The California Senate on June 26, 2020, voted 30-9 to pass the bill, giving its stamp of approval to updating the California Advanced Services Fund (CASF) to expand its eligibility to all Californians lacking high-speed access. The bill paved the way for state-financed networks capable of handling Internet traffic for decades to come and of delivering future speeds of 100 Mbps for download and upload without more state money.

Under current law, only half of Californians lacking high-speed access are eligible for these funds, which also require ISPs to build only basic Internet access at just 10 Mbps download and 1 Mbps upload. This is effectively a waste of tax money today because it does not even enable remote work and remote education. Recognizing this, Senate leadership worked to address concerns with the bill, and struck a nuanced deal to:

  1. Stabilize and expand California’s Internet Infrastructure program, and allow the state to spend over $500 million on broadband infrastructure as quickly as possible with revenues collected over the years
  2. Enable local governments to bond finance $1 billion with state support to secure long-term low interest rates to directly support school districts
  3. Build broadband networks at a minimum of 100 Mbps download, with an emphasis on scalability to ensure the state does not have to finance new construction later
  4. Direct support towards low-income neighborhoods that lack broadband access
  5. Expand eligibility for state support to ensure every rural Californian receives help

Yet the Assembly insisted on poison pill amendments that would have weakened the bill and given unfair favors to big ISPs, which oppose letting communities build their own broadband networks. After Assembly leadership repeatedly stalled negotiations, refused to consider amendments, and used the resulting delays as an excuse to hide behind procedural minutiae, the bill was shelved at their direction on August 30, prompting our call for them to act before the session ended.

Assembly leadership and Speaker Rendon chose the path of inaction, confusion, and division—instead of doing the work needed to serve Californians at school and work who desperately need these critical infrastructure improvements while they shelter in place.

Why We Can’t Let Up

We will keep fighting. We got so close to expanding broadband access to all Californians—and that’s why the resistance was so tenacious. The industry knows its regional monopolies are in jeopardy if someone else builds vastly superior and cheaper fiber-to-the-home networks. Support is building from local governments across the state of California, which are ready to debt-finance their own fiber if they can receive a small amount of help from the state.

Californians see this for what it is: a willful failure of leadership, at the expense of schoolchildren, workers, those in need of telehealth services, and first responders.

By not acting now, the Assembly chose to leave millions of Californians behind—especially in rural areas and in communities of color that big ISPs have refused to serve. The pandemic has exposed how badly a private-only approach to broadband has failed us all. The Assembly failed us, too.

We’re thankful the Senate, the governor, and supporters like you stand ready to address the critical issue of Californians’ broadband needs. California must not wait to start addressing this problem. EFF will continue exploring all options to close the digital divide, whether that happens in a special California legislative session, or in the next session.

Ernesto Falcon

Digital Identification Must Be Designed for Privacy and Equity

2 months 3 weeks ago

With growing frequency, the digital world calls for verification of our digital identities. Designed poorly, digital identification can invade our privacy and aggravate existing social inequalities. So privacy and equity must be foremost in discussions about how to design digital identification.

Many factors contribute to one's identity: degrees, morals, hobbies, schools, occupations, social status, personal expression, and so on. The way these are expressed looks different depending on the context. Sometimes, identity is presented in the form of paper documentation. Other times, it’s an account online.

Ever since people began creating online accounts for various services and activities, the concept of online identity has been warped and reshaped. In recent years, many people have been discussing the idea of “self-sovereign identity” (SSI), which lets you share your identity freely, confirm it digitally, and manage it independently—without the need for an intermediary between you and the world to confirm who you are. Such an identity is asynchronous, decentralized, portable, and, most of all, in the control of the identity holder. A distinct concept within SSI is the “decentralized identifier,” which focuses more on the technical ecosystem in which one controls their identity.

There has been a growing push for digital forms of identification. Proponents assert that it is an easier and more streamlined way of proving one’s identity in different contexts, that it will lead to faster access to government services, and that it will make IDs more inclusive.

Several technical specifications have recently been published that expand this idea into real-world applications. This post discusses two of them, with a focus on the privacy and equity implications of such concepts and how they are deployed in practice.

The Trust Model

Major specifications that address digital identities place them in the “trust model” framework of the Issuer/Holder/Verifier relationship. This is often displayed as a triangle, showing the flow of information between the parties involved in digital identification.

The question of who acts as the issuer and the verifier changes with context. For example, a web server (verifier) may ask a visitor (holder) for verification of their identity. In another case, a law enforcement officer (verifier) may ask a motorist (holder) for verification of their driver’s license. As in these two cases, the verifier might be a human or an automated technology. 

Issuers are generally institutions that you already have an established relationship with and have issued you some sort of document, like a college degree or a career certification. Recognizing these more authoritative relationships becomes important when discussing digital identities and how much individuals control them.

Verifiable Credentials

Now that we’ve established the framework of digital identity systems, let’s talk about what actually passes between issuers, holders, and verifiers: a verifiable credential. What is a verifiable credential? Simply put, it is a claim that is trusted between an issuer, a holder, and a verifier.

In November 2019, the World Wide Web Consortium (W3C) published an important standard, the Verifiable Credentials Data Model.

This was built on the trust model in a way that satisfies the principles of decentralized identity. The structure of a verifiable credential consists of three parts: credential metadata, a claim, and a proof of that claim. The credential metadata can include information such as the issue date, context, and type.

    ...
    "id": "http://example.edu/credentials/1872",
    "issuanceDate": "2010-01-01T19:73:24Z",
    "type": ["VerifiableCredential", "AlumniCredential"],
    ...

The “id” field can also hold a decentralized identifier, as in the snippet below, which draws on another W3C draft specification: Decentralized Identifiers (DIDs). That specification was built with the principles of decentralized, self-sovereign identity in mind, particularly portability.

    ...
    "id": "did:example:ebfeb1f712ebc6f1c276e12ec21",
    "alumniOf": {
    ...

While these specifications provide structure, they do not guarantee integrity of the data.
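To make the three-part structure concrete, here is a minimal sketch of how the proof is meant to tie everything together: the issuer signs the credential, and the verifier checks that signature against the issuer’s public key. This is an illustration only, using Ed25519 signatures from the PyNaCl library and invented field values; the W3C specification defines its own proof formats (such as Linked Data Proofs and JWTs), and the protections still depend on how issuers and verifiers actually behave.

    # Minimal illustration of issuer -> holder -> verifier with a signed credential.
    # Uses Ed25519 via PyNaCl for clarity; real verifiable credentials use the
    # proof formats defined in the W3C specification.
    import json
    from nacl.signing import SigningKey
    from nacl.exceptions import BadSignatureError

    # Issuer: signs the credential and hands the signed bytes to the holder.
    issuer_key = SigningKey.generate()
    credential = {                                            # hypothetical values
        "id": "http://example.edu/credentials/1872",
        "type": ["VerifiableCredential", "AlumniCredential"],
        "credentialSubject": {"id": "did:example:ebfeb1f712ebc6f1c276e12ec21",
                              "alumniOf": "Example University"},
    }
    signed_credential = issuer_key.sign(json.dumps(credential, sort_keys=True).encode())

    # Verifier: checks the signature with the issuer's published verify key.
    try:
        issuer_key.verify_key.verify(signed_credential)
        print("credential is intact and was signed by this issuer")
    except BadSignatureError:
        print("credential was altered or signed by someone else")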

Mobile Driver’s Licenses

The W3C is not the only standards body working to build specifications to define how digital identity is built and exchanged. The International Organization for Standardization has an as-yet unpublished standard that defines how a Mobile Driver’s License (mDL) application would function on a mobile device. This also follows the trust model discussed above, and extends it to how our phones could be used in these exchanges of verifying our driver’s licenses.

This specification isn’t centered on decentralized identity. Rather, it defines mobile portability of one’s government-issued ID in a mobile application. It is relevant to discuss as one of the digital identification options different governments have tried. The focus on mobile driver’s licenses gives us a practical way to examine the exchange of data, anti-tampering systems, and data privacy. The specification discusses widely available and agreed-upon standards for dealing with session management, encryption, authentication, and storage.

“Digital First” Identities Could Lead to Privacy and Equity Last

These thorough specifications are a significant contribution to the development of digital identification. But the concept of “digital first” raises major concerns around privacy preservation, safety, and the impact on marginalized communities.

Both specifications recommend data minimization, avoiding collection of personally identifiable information (PII), proper auditing, proper consent and choice, and transparency. However, without a comprehensive federal data privacy law, these are just recommendations, not mandates. Our data is not generally protected, and we currently suffer from private companies constantly mismanaging and unethically exchanging data about our everyday lives. Every time a digital ID holder uses their ID, there is an opportunity for the ID issuer and the ID verifier to gather personal data about the ID holder. For example, if a holder uses their digital ID to prove their age to buy a six-pack of beer, the verifier might make a record of the holder’s age status. Even though PII wouldn’t be exchanged in the credential itself, the holder’s payment information may still be associated with the transaction. This collection of personal information might be sold to data brokers, seized by police or immigration officials, stolen by data thieves, or misused by employees. This is why, at a minimum, having a “digital first” identity should be a choice by the citizen, and not a mandate by the government.

Some of these privacy hazards might be diminished with “Zero-Knowledge Proofs”, which cryptographically confirm a particular value without disclosing that value or associated information. For example, such a proof might confirm that a holder received a degree from a university, without revealing the holder’s identity or any other personal data contained in that degree. The W3C and mDL specifications promote such anonymous methodologies. But these specs are dependent on all parties voluntarily doing their part to complete the Trust Model.

That will not always be the case. For example, when a holder presents their digital identification to a law enforcement official, that official will probably use that identification to gather as much information as they can about the holder. This creates special risks for members of our society, including immigrants and people of color, who are already disproportionately vulnerable to abuse by police, border patrol, or other federal agents. Moreover, mandated digitized IDs are a troubling step towards a national ID database, which could centralize in one place all information about how a holder uses their ID.

One could argue that these specifications do not themselves create national ID databases. But in practice, private digital ID companies that utilize biometric technology to confirm people’s identities are very active in these conversations about actual implementation. The W3C’s Verifiable Credentials specification recognizes the privacy concern posed by persistent, long-term identifiers tied to personal information.

There also are privacy concerns in other applications of verifiable credentials. In California, Asm. Ian Calderon and Sen. Bob Hertzberg have proposed a bill (A.B. 2004) that would purport to verify dynamic and volatile information such as COVID-19 testing results, using a loosely interpreted application of the W3C’s Verifiable Credentials. We oppose this bill as a dangerous step towards immunity passports, second-class citizenship based on health status, and national digital identification. In this case, it’s not the specification itself that is the concern, but rather its use to justify creating a document that could cause new privacy hazards and exacerbate existing inequality in society. Whether or not you have been infected is private information in itself, no matter how well thought out and secure the application used to present it may be.

When thinking about verifiable credentials, solutions that make personal information more portable and easier to share should not ignore the current state of data protection, or the lack of access to technology in our society. The ability to decentralize one’s information into one’s own ownership is deeply entangled with, and contextualized by, privilege. Any application a government, company, or individual creates regarding identity will always be political. Therefore, we must use technology in this context to reduce harm, not escalate it.

Some potential uses of digital identification might create fewer privacy risks while helping people at society’s margins. There are ways that digital identifiers can respect privacy recommendations, such as cases where people can use a one-time, static, digital document for confirmation, which is then destroyed after use. This can reduce situations in which people are asked for an excessive amount of documentation just to access a service. This can especially benefit people marginalized by power structures in society. For example, some rental car companies require customers who want to pay with cash (who are disproportionately unbanked or underbanked people) to bring in their utility statements. A one-time digital identifier of home address might facilitate this transaction. Likewise, government officials sometimes require a child’s immunization records to access family benefits like the WIC (Women, Infants, and Children) nutrition program. A one-time digital identifier of immunization status might make this easier. These are examples of how verifiable credentials could improve privacy and address inequality, without culminating in a “true” decentralized identity.
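As a rough sketch of what such a single-use attestation might look like, the example below combines an expiry time, a nonce the verifier remembers having seen, and an issuer signature (Ed25519 via PyNaCl). Neither the W3C nor the mDL specification defines this exact construction; it is a hypothetical design meant only to show how a credential can confirm one fact, expire quickly, and resist replay.

    # Hypothetical single-use, short-lived attestation (e.g., "address confirmed"),
    # meant to be discarded after one presentation. Not drawn from either spec.
    import json, time, secrets
    from nacl.signing import SigningKey

    issuer_key = SigningKey.generate()
    token = {"claim": "address-confirmed",       # no address or other PII included
             "nonce": secrets.token_hex(16),
             "expires": int(time.time()) + 600}  # valid for ten minutes
    signed_token = issuer_key.sign(json.dumps(token, sort_keys=True).encode())

    seen_nonces = set()                          # kept by the verifier

    def accept(signed, verify_key) -> bool:
        payload = json.loads(verify_key.verify(signed).decode())  # raises if forged
        if payload["expires"] < time.time() or payload["nonce"] in seen_nonces:
            return False
        seen_nonces.add(payload["nonce"])        # the token is now spent
        return True

    print(accept(signed_token, issuer_key.verify_key))  # True on first presentation
    print(accept(signed_token, issuer_key.verify_key))  # False on any replay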

The privacy recommendations in the W3C and mDL specs must be treated as a floor and not a ceiling. We implore the digital identity community and technologists to consider the risks to privacy and social equity. It can be exciting for a privileged person to be able to freely carry their information in a way that breaks down bureaucracy and streamlines their life. But if such technology becomes a mandate, it could become a nightmare for many others.

Alexis Hancock

New Federal Court Rulings Find Geofence Warrants Unconstitutional

2 months 4 weeks ago

Two federal magistrate judges in three separate opinions have ruled that a geofence warrant violates the Fourth Amendment’s probable cause and particularity requirements. Two of these rulings, from the federal district court in Chicago, were recently unsealed and provide a detailed constitutional analysis that closely aligns with arguments EFF and others have been making against geofence warrants for the last couple years.

Geofence warrants, also known as reverse location searches, are a relatively new investigative technique used by law enforcement to try to identify a suspect. Unlike ordinary warrants for electronic records that identify the suspect in advance of the search, geofence warrants essentially work backwards by scooping up the location data from every device that happened to be in a geographic area during a specific period of time in the past. The warrants therefore allow the government to examine the data from individuals wholly unconnected to any criminal activity and use their own discretion to try to pinpoint devices that might be connected to the crime. Earlier this summer, EFF filed an amicus brief in People v. Dawes, a case in San Francisco Superior Court, arguing that a geofence warrant used there violates deep-rooted Fourth Amendment law. 

In Chicago, the government applied to a magistrate judge for a geofence warrant as part of an investigation into stolen pharmaceuticals. Warrant applications like these occur before there is a defendant in a case, so they are almost never adversarial (there’s no lawyer representing a defendant’s interest), and we rarely find out about them until well after the fact, which makes these unsealed opinions all the more interesting. 

Here, the government submitted an application to compel Google to disclose unique device identifiers and location information for all devices within designated areas during forty-five minute periods on three different dates. The geofenced areas were in a densely populated city near busy streets with restaurants, commercial establishments, a medical office, and “at least one large residential complex, complete with a swimming pool, workout facilities, and other amenities associated with upscale urban living.” 

As we’ve seen with other geofence warrants, the government’s original application proposed a three-step protocol to obtain the information. At the first step, Google would produce detailed and anonymized location data for devices that reported their location within the geofences for three forty-five minute periods. After that, the government would review that information and produce a list of devices for which it desired additional information. Then at the last step, Google would be required to produce information identifying the Google accounts for the requested devices. 
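The records, field names, and coordinates below are invented, but they sketch the kind of query that first step implies: every device whose stored location reports fall inside the drawn boundary during the time window is swept in, regardless of any connection to the crime.

    # Hypothetical illustration of "step one" of a geofence request: select every
    # device whose location reports fall inside an area during a time window.
    # All identifiers, coordinates, and timestamps are invented.
    from datetime import datetime

    location_reports = [
        # (anonymized_device_id, latitude, longitude, timestamp)
        ("device-a", 41.8790, -87.6359, datetime(2019, 6, 1, 14, 10)),
        ("device-b", 41.8820, -87.6300, datetime(2019, 6, 1, 14, 30)),
        ("device-c", 41.9000, -87.7000, datetime(2019, 6, 1, 14, 20)),  # outside the fence
    ]

    def in_geofence(lat, lon, ts, south, north, west, east, start, end):
        return south <= lat <= north and west <= lon <= east and start <= ts <= end

    swept_in = [r for r in location_reports
                if in_geofence(r[1], r[2], r[3],
                               south=41.878, north=41.883, west=-87.640, east=-87.628,
                               start=datetime(2019, 6, 1, 14, 0),
                               end=datetime(2019, 6, 1, 14, 45))]
    # Every device in swept_in is disclosed, whether or not it has anything to do
    # with the crime -- the overbreadth problem the courts identified.
    print(swept_in)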

On July 8, in the first unsealed opinion, U.S. Magistrate Judge M. David Weisman rejected the government’s request, finding “two obvious constitutional infirmities.” First, the court determined that the warrant was overbroad. While the court agreed that the government had established probable cause that a single cell phone user within the geofence might have committed a crime, the court held there was no probable cause to believe all the other devices in the area were connected to the crime as well. Importantly, the court rejected an argument we’ve seen the government make in the past that the search warrant was narrowly tailored because it covered only limited areas over short time periods. The court noted: 

the geographic scope of [the] request in a congested urban area encompassing individuals’ residences, businesses, and healthcare providers is not ‘narrowly tailored’ when the vast majority of cellular telephones likely to be identified in this geofence will have nothing whatsoever to do with the offenses under investigation. 

Second, the court determined that the warrant application failed to meet the Fourth Amendment’s particularity requirement. The court emphasized that there was nothing in the three-step protocol stopping the government from obtaining the user information for every device within the geofences. 

In response to the court’s order, the government submitted an amended application, slightly narrowing the geographic scope of the geofences. A second magistrate judge, Judge Gabriel Fuentes, rejected that application in an order that remains under seal. 

Then, the government came back to the court yet again. This time, the government proposed eliminating the third step of the protocol. Judge Fuentes, however, was unmoved by the new changes because the government admitted it could just use a separate subpoena to get that detailed user information. In a 42-page decision rejecting the government’s application, Judge Fuentes, in large part, echoed Judge Weisman’s earlier opinion. Notably, the court looked back to the Supreme Court’s decision in Ybarra v. Illinois (1979), a case that famously established that a warrant to search a bar and a bartender didn’t give police the power to search every person who happened to be in the bar. The court then rightly noted the similarities between the government’s unconstitutional conduct in Ybarra and the geofence warrant. It wrote that, similar to Ybarra, the government was seeking “unlimited discretion” to search all users’ devices in a given area—including users who merely walked along the sidewalk next to a business or lived in the residences above it—based on nothing more than their proximity to a suspected crime.

These decisions are good news. Judges too often rubber stamp warrant applications. But here, in careful, well-reasoned opinions that reflect what Judge Fuentes described as “[l]ongstanding Fourth Amendment principles of probable cause and particularity,” both judges stood up to protect constitutional rights in the face of government overreach. 

Nonetheless, as the judges noted at various points in their opinions, geofence warrants are becoming more and more prevalent. Judge Weisman wrote at the end of his opinion: 

[t]he government's undisciplined and overuse of this investigative technique in run-of-the-mill cases that present no urgency or imminent danger poses concerns to our collective sense of privacy and trust in law enforcement officials. 

Indeed, statistics from Google confirm this: “Year over year, Google has observed a 1,500% increase in the number of geofence requests it received in 2018 compared to 2017; and [as of December 2019], the rate [] increased from over 500% from 2018 to 2019.” And news reports have revealed that prosecutors have used geofence warrants across the country. The risk of error and abuse with these warrants isn’t abstract. Last year, NBC News reported on an innocent person who got caught up in a geofence warrant.

That is deeply worrying. Indiscriminate searches like geofence warrants both put innocent people in the government’s crosshairs for no good reason and give law enforcement unlimited discretion that can be deployed arbitrarily and invidiously. But the Framers of the Constitution knew all too well about the dangers of overbroad warrants and they enacted the Fourth Amendment to outlaw them.

Related Cases: Carpenter v. United States
Jennifer Lynch

California’s Assembly May Do Nothing to Help on Broadband—Thanks to Big ISPs

2 months 4 weeks ago

Update: Assembly Speaker Anthony Rendon has moved to table any efforts to close the digital divide this year. The pandemic has exposed how vital high-speed broadband is to the daily lives of all Californians. The Legislature must conclude all business by midnight on August 31. Call your Assemblymember TODAY and tell them to put the needs of Californians working and learning amid the pandemic over the interests of big ISPs.

TAKE ACTION

California: Call On Your Assemblymember To Act on Broadband Now

Original Post: As the final hours of the California legislative session tick down, it appears that the California Assembly may decide to not move forward on S.B. 1130 or any other legislative deal to start addressing the digital divide this year.

There has been broad support for legislation to close the digital divide: from rural and urban representatives in the California Senate, from small businesses and consumer advocates, and from the governor’s office. But big Internet Service Providers (ISPs) oppose any and all such plans, from boosting local communities’ ability to bond-finance fiber, to extending infrastructure financing, paid for through a tiny fee the ISPs already pay, to areas that the major ISPs have ignored. In fact, they have opposed virtually every idea that would challenge their slow, non-broadband Internet monopoly profits just as strongly as every effort to connect the completely unserved. And that opposition from big ISPs appears to have been too much for the California Assembly to ignore.

TAKE ACTION

California: Call On Your AssemblyMember To Act on Broadband Now

This is despite all the suffering people are enduring from a lack of universal access to robust Internet connections during the pandemic. Our Assembly appears unwilling to stand up for all the parents who are trying to work through the pandemic, while also educating their kids remotely.  Instead, they’re bowing to the very companies that have actively caused that pain through systemically underinvesting in neighborhoods across the state while simultaneously reaping billions in profits from your monthly bills. By pressuring the California Assembly to literally do nothing during the crisis, large national ISP lobbyists are on the verge of winning arguably one of the biggest legislative victories in decades.

The people hurting most right now will pay the greatest price. We are regularly seeing photos of children having to do their school work in fast food restaurant parking lots because they can’t get a connection at home. Everyone knows this is a serious problem that warrants a serious response. As former State Senator Kevin de León remarked, this generation deserves better. California’s children deserve better.

Two students sit outside a Taco Bell to use Wi-Fi so they can 'go to school' online.

This is California, home to Silicon Valley...but where the digital divide is as deep as ever.

Where 40% of all Latinos don't have internet access. This generation deserves better. pic.twitter.com/iJPXvcxsLQ

— Kevin de León (@kdeleon) August 28, 2020

There is no question that the major national ISPs have systemically avoided building modern connections to low-income neighborhoods in cities. Their decisions about where to deploy high-speed fiber, dating back more than a decade, are now causing active harm to communities of color. These decisions force children from their homes to fast food restaurant parking lots to pursue their education—because, of course, those same ISPs happily built fiber infrastructure to those fast food mega-corporations. There is no defensible argument that supports this racially discriminatory digital redlining. Yet the California Assembly, facing this evidence, may opt to do nothing about it.

Doing nothing also means choosing to leave millions in rural communities behind. The slow Internet monopolies that make billions selling inferior, obsolete services to rural Californians have denied those communities adequate service for even longer. More than 2 million Californians rely on the now-bankrupt Frontier Communications for access to the Internet. Frontier’s own filings to the government revealed that millions of its customers could have been profitably upgraded to fiber. But, with its slumlord mentality, Frontier opted to pocket those investments for greater profits for as long as possible, until the house of cards collapsed. And it’s not the business executives who profited handsomely from this exploitation who are trapped inside that house. It’s rural Californians.

Shame is the only word that can describe the collective inaction of the California Assembly. It is a shame the Assembly is choosing to leave their fellow Californians behind—despite support for forward-thinking broadband plans from the California Senate, and from the Governor of California. It is a shame that the pandemic has not prompted a deeper realization among the Assembly that people need help. And it is a shame they will not recognize that government policy and money are the means to provide that help.

We have tried the private-only model for decades now, and we are living with the result today. There is no question: it has not worked. If you are in California and you think our legislature shouldn’t close for the year before taking decisive action on broadband access, call them now. They have the solutions in hand, and both the California Senate and Governor are willing to act. The Assembly just has to be willing to say no to Big ISPs and vote yes on a better future.

TAKE ACTION

California: Call On Your AssemblyMember To Act on Broadband Now

Ernesto Falcon

One Database to Rule Them All: The Invisible Content Cartel that Undermines the Freedom of Expression Online

3 months ago

Every year, millions of images, videos and posts that allegedly contain terrorist or violent extremist content are removed from social media platforms like YouTube, Facebook, or Twitter. A key force behind these takedowns is the Global Internet Forum to Counter Terrorism (GIFCT), an industry-led initiative that seeks to “prevent terrorists and violent extremists from exploiting digital platforms.” And unfortunately, GIFCT has the potential to have a massive (and disproportionate) negative impact on the freedom of expression of certain communities.

Social media platforms have long struggled with the problem of extremist or violent content on their platforms. Platforms may have an intrinsic interest in offering their users an online environment free from unpleasant content, which is why most social media platforms’ terms of service contain a variety of speech provisions. During the past decade, however, social media platforms have also come under increasing pressure from governments around the globe to respond to violent and extremist content on their platforms. Spurred by the terrorist attacks in Paris and Brussels in 2015 and 2016, respectively, and guided by the shortsighted belief that censorship is an effective tool against extremism, governments have been turning to content moderation as a means to fix international terrorism.

Commercial content moderation is the process through which platforms—more specifically, human reviewers or, very often, machines—make decisions about what content can and cannot be on their sites, based on their own Terms of Service, “community standards,” or other rules. 

During the coronavirus pandemic, social media companies have been less able to use human content reviewers, and are instead increasingly relying on machine learning algorithms to moderate content as well as flag it. Those algorithms, which are really just sets of instructions for doing something, are fed an initial set of rules and lots of training data in the hope that they will learn to identify similar content. But human speech is a complex social phenomenon and highly context-dependent; inevitably, content moderation algorithms make mistakes. What is worse, because machine-learning algorithms usually operate as black boxes that do not explain how they arrived at a decision, and because companies generally do not share either the basic assumptions underpinning their technology or their training data sets, third parties can do little to prevent those mistakes.
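For a sense of what that pipeline looks like, here is a deliberately tiny, hypothetical sketch of training a text classifier on labeled examples. The training data and labels are invented, and real moderation systems are vastly larger and often multimodal; the point is only that the model learns whatever patterns the labeled data happen to contain, which is exactly how context-dependent speech gets misclassified.

    # A deliberately tiny, hypothetical sketch of training a content classifier.
    # Real moderation systems are far larger and often multimodal; the point is
    # only that the model learns from whatever the labeled training data contain.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    training_texts = [
        "join our fighters and attack the enemy",                # labeled "extremist"
        "we document attacks to hold perpetrators accountable",  # labeled "benign"
        "support the cause and take up arms",                    # labeled "extremist"
        "this footage is evidence of a war crime",               # labeled "benign"
    ]
    training_labels = ["extremist", "benign", "extremist", "benign"]

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(training_texts, training_labels)

    # A documentation post reuses much of the same vocabulary as the "extremist"
    # examples -- exactly the ambiguity that trips up real systems.
    print(model.predict(["footage of fighters carrying out an attack on civilians"]))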

This problem has become more acute with the introduction of hashing databases for tracking and removing extremist content. Hashes are digital "fingerprints" of content that companies use to identify and remove content from their platforms. They are essentially unique, and allow for easy identification of specific content. When an image is identified as “terrorist content,” it is tagged with a hash and entered into a database, allowing any future uploads of the same image to be easily identified.
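At its simplest, the mechanics look something like the sketch below. It uses an ordinary cryptographic hash for clarity; in practice, systems like this rely on perceptual hashes (such as PhotoDNA or PDQ) so that re-encoded or slightly edited copies of an image or video still match.

    # Simplified illustration of a shared hash database. A cryptographic hash is
    # used here for clarity; deployed systems use perceptual hashes so that
    # slightly altered copies still match.
    import hashlib

    shared_hash_database = set()

    def fingerprint(content: bytes) -> str:
        return hashlib.sha256(content).hexdigest()

    # One platform flags an image as "terrorist content" and contributes its hash.
    flagged_image = b"...image bytes..."
    shared_hash_database.add(fingerprint(flagged_image))

    # Every participating platform then checks new uploads against the same set.
    def allowed(upload: bytes) -> bool:
        return fingerprint(upload) not in shared_hash_database

    print(allowed(flagged_image))         # False: blocked on every participating platform
    print(allowed(b"a different image"))  # True
    # A single misclassification propagates to every platform that uses the database.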

This is exactly what the GIFCT initiative aims to do: Share a massive database of alleged ‘terrorist’ content, contributed voluntarily by companies, amongst members of its coalition. The database collects ‘hashes’, or unique fingerprints, of alleged ‘terrorist’, or extremist and violent content, rather than the content itself. GIFCT members can then use the database to check in real time whether content that users want to upload matches material in the database. While that sounds like an efficient approach to the challenging task of correctly identifying and taking down terrorist content, it also means that one single database might be used to determine what is permissible speech, and what is taken down—across the entire Internet. 

Countless examples have proven that it is very difficult for human reviewers—and impossible for algorithms—to consistently get the nuances of activism, counter-speech, and extremist content itself right. The result is that many instances of legitimate speech are falsely categorized as terrorist content and removed from social media platforms. Due to the proliferation of the GIFCT database, any mistaken classification of a video, picture, or post as ‘terrorist’ content echoes across social media platforms, undermining users' right to free expression on several platforms at once. And that, in turn, can have catastrophic effects on the Internet as a space for memory and documentation. Blunt content moderation systems can lead to the deletion of vital information not available elsewhere, such as evidence of human rights violations or war crimes. For example, the Syrian Archive, an NGO dedicated to collecting, sharing, and archiving evidence of atrocities committed during the Syrian war, reports that hundreds of thousands of videos of war atrocities are removed by YouTube annually. The Archive estimates that the takedown rate for videos documenting Syrian human rights violations is around 13%, a number that has almost doubled to 20% in the wake of the coronavirus crisis. As noted, many social media platforms, including YouTube, have been using algorithmic tools for content moderation more heavily than usual, resulting in increased takedowns. If, or when, YouTube contributes hashes of content that depicts Syrian human rights violations but has been tagged as ‘terrorist’ content by YouTube’s algorithms to the GIFCT database, that content could be deleted forever across multiple platforms. 

The GIFCT content cartel not only risks losing valuable human rights documentation, but also has a disproportionately negative effect on some communities. Defining ‘terrorism’ is an inherently political undertaking, and definitions are rarely stable across time and space. Absent international agreement on what exactly constitutes terrorist, or even violent and extremist, content, companies look to the United Nations’ list of designated terrorist organizations or the US State Department’s list of Foreign Terrorist Organizations. But those lists consist mainly of Islamist organizations and are largely blind to, for example, right-wing extremist groups. That means the burden of GIFCT’s misclassifications falls disproportionately on Muslim and Arab communities, and it highlights the fine line between an effective initiative to tackle the worst content online and sweeping censorship.

Ever since the attacks on two mosques in Christchurch in March 2019, GIFCT has been more prominent than ever. In response to the shooting, during which 51 people were killed, French President Emmanuel Macron and New Zealand Prime Minister Jacinda Ardern launched the Christchurch Call. That initiative, which aims to eliminate violent and extremist content online, foresees a prominent role for GIFCT. In the wake of this renewed focus on GIFCT, the initiative announced that it would evolve into an independent organization, including a new Independent Advisory Committee (IAC) to represent the voices of civil society, government, and inter-governmental entities. 

However, the Operating Board, where real power resides, remains in the hands of industry. And the Independent Advisory Committee is already seriously flawed, as a coalition of civil liberties organizations has repeatedly noted. 

For example, governments participating in the IAC are likely to leverage their position to influence companies’ content moderation policies and shape definitions of terrorist content that fit their interests, away from the public eye and therefore without accountability. Including governments in the IAC could also undermine the meaningful participation of civil society organizations, as many are financially dependent on governments or might face threats of reprisal for criticizing government officials in that forum. As long as civil society is treated as an afterthought, GIFCT will never be an effective multi-stakeholder forum. GIFCT’s flaws and their devastating effects on freedom of expression, human rights, and the preservation of evidence of war crimes have been known for years. Civil society organizations have tried to help reform the organization, but GIFCT and its new Executive Director have remained unresponsive. Which leads to the final problem with the IAC: leading NGOs are choosing not to participate at all.  

Where does this leave GIFCT and the millions of Internet users its policies impact? Not in a good place. Without meaningful civil society representation and involvement, full transparency and effective accountability mechanisms, GIFCT risks becoming yet another industry-led forum that promises multi-stakeholderism but delivers little more than government-sanctioned window-dressing. 

Svea Windwehr

Voter Advocacy Orgs Sue Trump Administration for Executive Order Threatening Social Media Censorship

3 months ago
Unconstitutional Ploy Attempts to Coerce Companies to Curate Speech to Favor the President

San Francisco – The Electronic Frontier Foundation (EFF) has joined forces with Protect Democracy and Cooley LLP to represent five advocacy organizations suing President Trump and others in his administration for an unconstitutional executive order threatening their ability to receive accurate information free from government censorship. The plaintiffs are Common Cause, Free Press, Maplight, Rock the Vote, and Voto Latino.

“Our clients want to make sure that voting information found online is accurate, and they want social media companies to take proactive steps against misinformation,” said David Greene, EFF's Civil Liberties Director. “Social media platforms have the right to curate content however they like—whether it is about voting or not—but President Trump’s executive order punishes platforms for doing just that. Misusing an Executive Order to force companies to censor themselves or others is wrong and dangerous in the hands of any president. Here it’s a transparent attempt to retaliate against Twitter for fact-checking the president’s posts, as well as an obvious threat to any other company that might want to do the same.”

Trump signed the “Executive Order on Preventing Online Censorship” in May, after a well-publicized fight with Twitter. First, the president tweeted false claims about the reliability of online voting, and then Twitter decided to append a link to “get the facts about mail-in ballots.” Days later, Trump signed the order, which tasks government agencies with concocting a process for deciding whether any platform’s decision to moderate user-generated content was made in “good faith.” If a platform is deemed to have acted in bad faith, the order calls for it to lose millions of dollars in government advertising, as well as its legal protections under Section 230. Section 230 is the law that allows online services—like Twitter, Facebook, and others—to host and moderate diverse forums of users’ speech without being liable for their users’ content.

In the lawsuit filed in the United States District Court for the Northern District of California today, the plaintiffs argue that the executive order is designed to chill social media companies from moderating the president’s content in a way that he doesn’t like—particularly correcting his false statements about elections. In fact, since the order, Trump has tweeted multiple falsehoods about voting without any flagging by Twitter.

“Voters have a constitutional right to receive accurate information about voting alternatives without government interference, especially from a self-interested president who is lying to gain an advantage in the upcoming election. So when Trump retaliates against private social media companies for fact-checking his lies, it’s not only a First Amendment violation—it’s the kind of behavior you’d expect to see from a dictator,” said Kristy Parker, counsel with Protect Democracy. “In the midst of a global pandemic, when far more voters than usual may opt to vote by mail to protect their personal health, the president’s authoritarian actions are especially egregious.”

“We joined this cause to protect voters’ access to accurate information about voting during the pandemic, free from unconstitutional governmental meddling that is being done to advance a particular political viewpoint,” said Michael Rhodes, who chairs Cooley’s global cyber/data/privacy and Internet practices. “We want all voters to be able to make informed and independent political choices and that requires protecting online platforms’ ability to curate information without fear of reprisal from the federal government.”

For the full complaint in Rock the Vote et al. v. Trump:
https://www.eff.org/document/rock-vote-v-trump

Contact: David Greene, Civil Liberties Director, davidg@eff.org; Aaron Mackey, Staff Attorney, amackey@eff.org
Rebecca Jeschke

Our EU Policy Principles: User Controls

3 months ago

As the EU is gearing up for a major reform of key Internet regulation, we are introducing the principles that will guide our policy work surrounding the Digital Services Act (DSA). We believe the DSA is a key opportunity to change the Internet for the better; to question the paradigm of capturing users’ attention that shapes our online environments so fundamentally, and to restore users’ autonomy and control. In this post, we introduce policy principles that aim to strengthen users' informational self-determination and thereby promote healthier online communities that allow for deliberative discourse.

A Chance to Reinvent Platform Regulation 

In a few months, the European Commission will introduce its much anticipated proposal for the Digital Services Act, the most significant reform of European platform regulation in two decades. The Act, which will modernize the backbone of the EU’s Internet legislation—the e-Commerce Directive—will set out new responsibilities and rules for online platforms. 

EFF supports the Commission’s goal of promoting an inclusive, fair and accessible digital society. We believe that giving users more transparency and autonomy to understand and shape the forces that determine their online experiences is key to achieving this goal. Currently, there is a significant asymmetry between users and powerful gatekeeper platforms that control much of our online environment. With the help of opaque algorithmic tools, platforms distribute and curate content, collect vast amounts of data on their users and flood them with targeted advertisements. While platforms acquire (and monetize) a deep understanding of their users, both on an individual and collective level, users are in the dark about how their data is collected, exploited for commercial purposes and leveraged to shape their online environments. Not only are users not informed about the intricate algorithms that govern their speech and their actions online; platforms also unilaterally formulate and change community guidelines and terms of service, often without even informing users of relevant changes. 

The DSA is a crucial chance to enshrine the importance of user control and to push platforms to be more accountable to the public. But there is also a risk that the Digital Services Act will follow in the footsteps of recent regulatory developments in Germany and France. The German NetzDG and the French Avia bill (which we helped bring down in court) show a worrying trend in the EU to force platforms to police users’ content without counter-balancing such new powers with more user autonomy, choice and control. 

EFF will work with EU institutions to fight for users’ rights, procedural safeguards, and interoperability while preserving the elements that made Europe’s Internet regulation a success: limited liability for online platforms for user-generated content, and a clear ban on filtering and monitoring obligations.

Principle 1: Give Users Control Over Content

Many services like Facebook and Twitter originally presented a strictly chronological list of posts from users’ friends. Over time, most large platforms have traded that chronological presentation for more complex (and opaque) algorithms that order, curate and distribute content, including advertising and other promoted content. These algorithms, determined by the platform, are not necessarily centered on satisfying users’ needs, but usually pursue the sole goal of maximizing the time and attention people spend on a given website. Posts with more “engagement” are prioritised, even if that engagement is driven by strong emotions like anger or despair provoked by the post. While users sometimes can return to the chronological stream, the design of platforms’ interfaces often nudges them to switch back. Interfaces that mislead or manipulate users, including “dark patterns”, often contravene core principles of European data protection laws and should be addressed in the Digital Services Act where appropriate.
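
To make that contrast concrete, here is a minimal illustration, written in Python with invented posts and engagement counts, of the difference between a chronological feed and an engagement-ranked one. It sketches the general pattern described above, not any platform's actual ranking code.

```python
from datetime import datetime

# Illustrative only: a toy feed with made-up posts and engagement counts.
posts = [
    {"text": "Neighborhood bake sale", "time": datetime(2020, 9, 1, 9, 0), "reactions": 12},
    {"text": "Outrage-inducing rumor", "time": datetime(2020, 9, 1, 8, 0), "reactions": 950},
    {"text": "Local election reminder", "time": datetime(2020, 9, 1, 10, 0), "reactions": 45},
]

# A chronological feed simply shows the newest posts first.
chronological = sorted(posts, key=lambda p: p["time"], reverse=True)

# An engagement-ranked feed orders posts by how much interaction they attract,
# so the rumor with 950 reactions jumps to the top regardless of when it was posted.
engagement_ranked = sorted(posts, key=lambda p: p["reactions"], reverse=True)

print([p["text"] for p in chronological])
print([p["text"] for p in engagement_ranked])
```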

Platforms’ algorithmic tools leverage their intimate knowledge of their users, assembled from thousands of seemingly unrelated data points. Many of the inferences drawn from that data feel unexpected to users: platforms have access to data that reaches further back than most users realize, and are able to draw conclusions from both individual and collective behavior. Assumptions about users’ preferences thus often rest on inferences from seemingly unrelated data points. This may shape (and often limit) the ways in which users can interact with content online and can also amplify misinformation and polarization in ways that can undermine the transparent, deliberative exchange of information on which democratic societies are built.

Users do not have to accept this. There are many third-party plugins that re-frame social platforms’ appearance and content according to people’s needs and preferences. But right now, most of these plugins require technical expertise to discover and install, and platforms have a strong incentive to hide and prevent user adoption of such independent tools. The DSA is Europe’s golden opportunity to create a friendlier legal environment to encourage and support this user-oriented market. The regulation should support interoperability and permit competitive compatibility, and should establish explicit, enforceable rules against over-aggressive terms of service that seek to forbid all reverse-engineering and interconnection. Beyond the Digital Services Act, the EU must actively support open source and commercial projects in Europe that offer localised or user-empowering front-ends to platforms, and help foster a vibrant and viable market for these tools.

Giving people—as opposed to platforms—more control over content is a crucial step toward addressing some of the most pervasive problems online that are currently poorly managed through content moderation practices. User controls should not raise the threshold of technological literacy needed to traverse the web safely. Instead, users of social media platforms with significant market power should be empowered to choose content they want to interact with—and filter out content they do not want to see—in a simple and user-friendly manner. Users should also have the option to decide against algorithmically-curated recommendations altogether, or to choose other heuristics to order content. 

Principle 2: Algorithmic Transparency

Besides being given more control over the content with which they interact, users also deserve more transparency from companies to understand why content or search results are shown to them—or hidden from them. Online platforms should provide meaningful information about the algorithmic tools they use in content moderation (e.g., content recommendation systems and tools for flagging content) and content curation (for example, in ranking or downranking content). Platforms should also offer easily accessible explanations that allow users to understand when, for which tasks, and to what extent algorithmic tools are used. To alleviate the burden on individual users to make sense of how algorithms are used, platforms with significant market power should allow independent researchers and relevant regulators to audit their algorithmic tools to make sure they are used as intended.

Principle 3: Accountable Governance

Online platforms govern their users through their terms of service, community guidelines, or standards. These documents contain the fundamental rules that determine what users are allowed to do on a platform, and what behavior is constrained. Platforms regularly update those documents, often in minor but sometimes in major ways—and usually without consulting or notifying their users of the changes. Users of such platforms must be notified whenever the rules that govern them change, must be asked for their consent, and should be informed of the consequences of their choice. They should also be provided with a meaningful explanation of any substantial changes in a language they understand. Additionally, platforms should present their terms of service in machine-readable format and make all previous versions of their terms of service easily accessible to the public.
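
As one illustration of what a machine-readable, versioned terms-of-service record could look like, consider the sketch below. The field names and URLs are our own invention for the sake of example; nothing in the DSA (or in any platform's current practice) prescribes this particular format.

```python
import json
from datetime import date

# Hypothetical structure for a machine-readable, versioned terms-of-service record.
# Field names and URLs are illustrative; no standard format is prescribed.
tos_versions = [
    {
        "version": "2020-03-01",
        "effective_from": str(date(2020, 3, 1)),
        "url": "https://example.com/terms/2020-03-01",
        "changes": ["Initial version"],
    },
    {
        "version": "2020-08-15",
        "effective_from": str(date(2020, 8, 15)),
        "url": "https://example.com/terms/2020-08-15",
        "changes": ["Clarified appeals process for removed content"],
        "users_notified": True,
    },
]

# Publishing the full history lets users and researchers diff the rules that govern them.
print(json.dumps(tos_versions, indent=2))
```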

Principle 4: Right to Anonymity Online

There are countless reasons why individuals may not want to share their identity publicly online. While anonymity used to be common on the Internet, it has become increasingly difficult to remain anonymous online. In their hope of tackling hate speech or “fake news,” policymakers in the EU and beyond have been proposing duties for platforms to enforce the use of legal names.

For many people, however—including members of the LGBTQ+ community, sex workers, and victims of domestic abuse—such rules could have devastating effects and lead to harassment or other forms of retribution. We believe that as a general principle, Member States should respect the will of individuals not to disclose their identities online. The Digital Services Act should affirm users’ informational self-determination in this regard as well, and introduce a European right to anonymity online. Deviating terms of service should be subject to fairness control.

Svea Windwehr

Throwing Out the FTC's Suit Against Qualcomm Moves Antitrust Law in the Wrong Direction

3 months ago

The government bestows temporary monopolies in the form of patents to promote future innovation and economic growth. Antitrust law empowers the government to break up monopolies when their power is so great and their conduct is so corrosive of competition that they can dictate market outcomes without worrying about their rivals. In theory, patent and antitrust law serve the same goals—promoting economic and technological development—but in practice, they often butt heads.

The relationship between antitrust and patent law is especially thorny when it comes to “standards-essential patents” or “SEPs.” These are patents that cover technologies considered “essential” for implementing standards—agreed-upon rules and protocols that allow different manufacturers’ devices to communicate with each other using shared network infrastructure. Some technology standards become standards by achieving widespread adoption through market forces (the QWERTY keyboard layout is one example). But many are the result of extensive deliberation and cooperation among industry players (including competitors), like the MP3 audio compression and 3G wireless communication standards.

Standards can enhance competition and consumer choice, but they also massively inflate the value of patents deemed essential to the standard, and give their owners the power to sue companies that implement the standard for money damages or injunctions to block them from using their SEPs. When standards cover critical features like wireless connectivity, SEP owners wield a huge amount of “hold-up” power because their patents allow them to effectively block access to the standard altogether. That lets them charge unduly large tolls to anyone who wants to implement the standard.

To minimize that risk, standard-setting organizations typically require companies that want their patented technology incorporated into a standard to promise in advance to license their SEPs to others on fair, reasonable, and non-discriminatory (FRAND) terms. But that promise strikes at a key tension between antitrust and patent law: patent owners have no obligation to let anyone use technology their patent covers, but to get those technologies incorporated into standards, patent owners usually have to promise that they will give permission to anyone who wants to implement the standard as long as they pay a reasonable license fee. 

Qualcomm is one of the most important and dominant companies in the history of wireless communication standards. It is a multinational conglomerate that has owned patents on every major wireless communication standard since its first CDMA patent in 1985, and it participates in the standard-setting organizations that define those standards. Qualcomm is unusual in that it not only licenses SEPs, but also supplies the modem chips used by a wide range of devices. These include chips that implement wireless communication standards, which lie at the heart of every mobile computing device.

Although Qualcomm promised to license its SEPs (including patents essential to CDMA, 3G, 4G, and 5G) on FRAND terms, its conduct has to many looked unfair, unreasonable, and highly discriminatory. In particular, Qualcomm has drawn scrutiny for bundling tens of thousands of patents together—including many that are not standard-essential—and offering portfolio-only licenses no matter what licensees actually want or need; refusing to sell modem chips to anyone without a SEP license and threatening to withhold chips from companies trying to negotiate different license terms; refusing to license anyone other than original-equipment manufacturers (OEMs); and insisting on royalties calculated as a percentage of the sale price of a handset sold to end users for hundreds of dollars, despite the minimal contribution of any particular patent to the retail value.

In 2017, the U.S. Federal Trade Commission sued Qualcomm for violating both sections of the Sherman Antitrust Act by engaging in a number of anticompetitive SEP licensing practices. In May 2019, the U.S. District Court for the Northern District of California agreed with the FTC, identifying numerous instances of Qualcomm’s unlawful, anticompetitive conduct in a comprehensive 233-page opinion. We were pleased to see the FTC take action and the district court credit the overwhelming evidence that Qualcomm’s conduct is corrosive to market-based competition and threatens to cement Qualcomm’s dominance for years to come.

But this month, a panel of judges from the Court of Appeals for the Ninth Circuit unanimously overturned the district court’s decision, reasoning that Qualcomm’s conduct was “hypercompetitive” but not “anticompetitive,” and therefore not a violation of antitrust law. To reach that result, the Ninth Circuit made the patent grant more powerful and antitrust law weaker than ever.

According to the Ninth Circuit, patent owners don’t have a duty to let anyone use what their patent covers, and therefore Qualcomm had no duty to license its SEPs to anyone. But that framing requires ignoring the promises Qualcomm made to license its SEPs on reasonable and non-discriminatory terms—promises that courts in this country and around the world have consistently enforced. It also means ignoring antitrust principles like the essential facilities doctrine, which limits the ability of a monopolist with hold-up power over an essential facility (like a port) to shut out rivals. Instead, the Ninth Circuit held rather simplistically that a duty to deal could arise only if the monopolist had provided access, and then reversed its policy.

But even when Qualcomm restricted its licensing policies in critical ways, the Ninth Circuit found reasons to approve those restrictions. For example, Qualcomm stopped licensing its patents to chip manufacturers and started licensing them only to OEMs. This had a major benefit for Qualcomm: it let the company charge a much higher royalty based on the high retail price of the end user devices, like smartphones and tablets, that OEMs make and sell. If Qualcomm had continued to license to chip suppliers, its patents would be “exhausted” once the chips were sold to OEMs, extinguishing Qualcomm’s right to assert its patents and control how the chips were used.
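
A rough, hypothetical illustration shows why the licensing level matters. The figures below are invented for clarity and are not drawn from the case record, but they capture the structure of the problem: the same percentage royalty yields a far larger payment when it is applied to a finished handset rather than to the modem chip that actually embodies the patented technology.

```python
# Hypothetical numbers, chosen only to illustrate the exhaustion point;
# they are not figures from the FTC v. Qualcomm record.
royalty_rate = 0.05            # a 5% royalty rate
chip_price = 20.00             # price of the modem chip sold to an OEM
handset_retail_price = 700.00  # retail price of the finished device

# If the patent were licensed (and exhausted) at the chip level,
# the royalty base is the chip price.
royalty_on_chip = royalty_rate * chip_price            # $1.00 per device

# Licensing only OEMs lets the royalty base become the whole handset.
royalty_on_handset = royalty_rate * handset_retail_price  # $35.00 per device

print(f"Chip-level royalty:    ${royalty_on_chip:.2f}")
print(f"Handset-level royalty: ${royalty_on_handset:.2f}")
```

On these invented numbers, moving from chip-level to handset-level licensing multiplies the per-device royalty thirty-five-fold, without any change in what the patent contributes.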

Patent exhaustion is a century-old doctrine that protects the rights of consumers to use things they buy without getting the patent owner’s permission again and again. Patent exhaustion is important because it prevents price-gouging, but also because it protects space for innovation by letting people use things they buy freely, including to build innovations of their own. The doctrine thus helps patent law serve its underlying goal—promoting economic growth and innovation. In other words, the doctrine of exhaustion is baked into the patent grant; it is not optional. Nevertheless, the Ninth Circuit wholeheartedly approved of Qualcomm’s efforts to avoid exhaustion—even when that meant cutting off access to previous licensees (chip-makers) in ways that let Qualcomm charge far more in licensing fees than its SEPs could possibly have contributed to the retail value of the final product.

It makes no sense that Qualcomm could contract around a fundamental principle like patent exhaustion, but at the same time did not assume any antitrust duty to deal under these circumstances. Worse, it’s harmful for the economy, innovation, and consumers. Unfortunately, the kind of harm that antitrust law recognizes is limited to harm affecting “competition” or the “competitive process.” Antitrust law, at least as the Ninth Circuit interprets it, doesn’t do nearly enough to address the harm downstream consumers experience when they pay inflated prices for high-tech devices, and miss out on innovation that might have developed from fair, reasonable, and non-discriminatory licensing practices.

We hope the FTC sticks to its guns and asks the Ninth Circuit to rehear the case en banc and reconsider this decision. Otherwise, antitrust law will become an even weaker weapon against innovation-stifling conduct in technology markets.

Alex Moss

California: Tell Your Senators That Ill-Conceived “Immunity Passports” Won’t Help Us

3 months ago

Californians should not be forced to present their smartphones to enter public places. But that’s exactly what A.B. 2004 would do, by directing the state to set up a blockchain-based system for “immunity passports”: a verified health credential that shows the results of someone’s last COVID-19 test and uses those results to grant access to public places.

By claiming that blockchain technology is part of a unique solution to the public health crisis we’re in, A.B. 2004 is opportunism at its worst. We are proud to stand with Mozilla and the American Civil Liberties Union’s California Center for Advocacy and Policy in opposing this bill. We encourage you to tell your senator to oppose it, too.

Take Action

Tell Your Senator: Immunity Passports Are a Bad Idea

While the latest version of A.B. 2004 steps back from previous plans to create a pilot program for immunity passports, it’s still written to push a hasty and poorly planned system onto Californians. The bill would empower the California Department of Consumer Affairs (CDCA) to authorize health care providers to issue verifiable health credentials, establish procedures for doing so, and maintain a blockchain registry of such issuers.
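
To see why those omissions matter, consider a minimal sketch of what a verifiable health credential record might contain. The fields below are hypothetical, since A.B. 2004 does not define them, and the gaps in the sketch (no expiration, no revocation mechanism) mirror the gaps in the bill.

```python
from datetime import date, timedelta

# Hypothetical credential record; A.B. 2004 does not specify these fields,
# which is part of the problem described in this post.
credential = {
    "subject": "patient-1234",            # pseudonymous identifier
    "test_type": "SARS-CoV-2 PCR",
    "result": "negative",
    "collected_on": str(date(2020, 8, 1)),
    "issuer": "Example Clinic",           # would need to appear in the state registry of issuers
    # Open questions the bill leaves unanswered:
    "valid_until": None,                  # how long is a negative test meaningful?
    "revoked": False,                     # how is this flipped if the person is later exposed?
}

# A naive validity check; the bill offers no guidance on what the window should be.
def is_current(cred, as_of=date(2020, 8, 10), window_days=14):
    collected = date.fromisoformat(cred["collected_on"])
    return not cred["revoked"] and (as_of - collected) <= timedelta(days=window_days)

print(is_current(credential))
```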

But the bill says nothing about how long a credential should be valid, how it should be updated, or how it can be revoked if you’re exposed or even infected after you receive the passport. It doesn’t say anything about how those procedures should interact with existing medical privacy laws. And while the bill would require CDCA to consult with a working group (also created by the latest version of the bill) that includes civil liberties and privacy representatives, CDCA can ignore those recommendations. The bill doesn’t even limit when or how CDCA may exercise its powers.

And there are many problems with the underlying concept of immunity passports. In the short term, medical experts have warned it’s too early to use them for the COVID-19 pandemic. The World Health Organization in April warned against the idea. The WHO said that the medical community’s understanding of SARS-Cov-2—the virus that causes COVID-19—was not sufficient to certify that those who have antibodies in their system posed no risk to others.

EFF also opposes the very purpose of these credentials which, according to the bill's fact sheet and official analysis, is to identify those who should be excluded from workplaces, travel, and "any other processes." That has ramifications beyond the current pandemic. Handing your phone over to someone—a security guard, a law enforcement officer—creates the significant risk that they may look through other information on the device. You should never have to do that to enter your workplace or your school.

Finally, by proposing to deploy credentials that rely on having a smartphone to control access to public spaces, A.B. 2004 would prevent those without smartphones—often lower-income Californians—from being able to move freely. That ensures that the Californians who are disproportionately hit hardest by COVID-19 are also those hurt most by this bill—now and in the future. Some have suggested that the accessibility issue can be addressed by allowing people to print out a paper version of a credential. That would not address the privacy, security, or safety concerns; in fact, it could make things worse. A paper printout makes it even easier to present a health credential that isn’t current or valid, more difficult to verify the credential, and easier for someone to present another person's immunity passport as their own. 

The latest version of A.B. 2004 doesn’t take any of these concerns seriously. It instead pushes a system, based on unproven science, that would give people false hope of returning to a normal life. As Mozilla wrote in its blog post opposing an earlier version of the bill:

“A better approach would be [to] establish design principles, guardrails, and outcome goals up front…[i]mportantly, the process should build in the possibility that no technical solution is suitable, even if this outcome forces policymakers back to the drawing board.”

A.B. 2004’s authors still aren’t listening. But you can tell your state senator: don’t let those taking advantage of the pandemic push California into making bad policy.

Take Action

Tell Your Senator: Immunity Passports Are a Bad Idea

Hayley Tsukayama

If Privacy Dies in VR, It Dies in Real Life

3 months ago

If you aren’t an enthusiast, chances are you haven’t used a Virtual Reality (VR) or Augmented Reality (AR) headset. The hype around this technology, however, is nearly inescapable. We’re not just talking about dancing with lightsabers; there’s been a lot of talk about how VR/AR will revolutionize entertainment, education, and even activism. EFF has long been interested in the potential of this technology, and has even developed our own VR experience, Spot the Surveillance, which places users on a street corner amidst police spying technologies. 

It’s easy to be swept up in the excitement of a new technology, but utopian visions must not veil the emerging ethical and legal concerns in VR/AR. The devices are new, but the tech giants behind them aren’t. Any VR/AR headset you use today is likely made by a handful of corporate giants—Sony, Microsoft, HTC, and Facebook. As such, this budding industry has inherited a lot of issues from its creators. VR and AR hardware aren’t household devices quite yet, but if they succeed, there’s a chance they will creep into all of our personal and professional lives, guided by the precedents set today.  

A Step Backwards: Requiring Facebook Login for Oculus

This is why Oculus’ announcement last week shocked and infuriated many users. Oculus, acquired by Facebook in 2014, announced that it will require a Facebook account for all users within the next two years. At the time of the acquisition, Oculus offered distressed users an assurance that “[y]ou will not need a Facebook account to use or develop for the Rift [headset].” 

There’s good cause to be alarmed. Eliminating alternative logins can force Oculus users to accept Facebook’s Community Standards, or risk potentially bricking their device. With this lack of choice, users can no longer freely give meaningful consent, and they lose the freedom to be anonymous on their own device. That is because Oculus owners will also need to adopt Facebook’s controversial real name policy. The policy requires users to register what Facebook calls their “authentic identity”—one known by friends and family and found on acceptable documents—in order to use the social network. Without anonymity, Oculus leaves users in sensitive contexts out to dry, from VR activists in Hong Kong to LGBTQ+ users who cannot safely reveal their identity.

Logging into a Facebook account on an Oculus product already shares data with Facebook that is used to inform the ads you see. Facebook already has a vast collection of data, collected from across the web and even your own devices. Combining this with sensitive biometric and environmental data detected by Oculus headsets further tramples user privacy. And Facebook should really know—the company recently agreed to pay $650 million for violating Illinois’ biometric privacy law (BIPA) by collecting user biometric data without consent. However, for companies like Facebook, which are built on capturing your attention and selling it to advertisers, this is a potential gold mine. Having eye-tracking data on users, for example, can cement monopolistic power in online advertising—regardless of how effective it actually is. Facebook merely needs the ad industry to believe it has an advantage. 

Facebook violating the trust of users in its acquired companies (like Instagram and WhatsApp) may not be surprising. After all, it has a long trail of broken promises while paying lip service to privacy concerns. What’s troubling in this instance, however, is the position of Oculus in the VR/AR industry. Facebook is poised to shape the medium as a whole and may normalize mass user surveillance, as Google has already done with smartphones. We must make sure that doesn't happen.

Defending Fundamental Human Rights in All Realities

Strapping these devices to ourselves lets us enter a virtual world, but at a price—these companies enter our lives and have access to intimate details about us through biometric data. How we move and interact with the world offers insight, by proxy, into how we think and feel in the moment. Eye-tracking technology, often seen in cognitive science, is already being developed by Vive, which sets the stage for unprecedented privacy and security risks. If this biometric data is aggregated, those who control it may be able to identify patterns that let them more precisely predict (or cause) certain behavior and even emotions in the virtual world. It may allow companies to exploit users' emotional vulnerabilities through strategies that are difficult for the user to perceive and resist. What makes the collection of this sort of biometric data particularly frightening is that, unlike a credit card or password, it is information about us we cannot change. Once collected, there is little users can do to mitigate the harm done by leaks or by data being monetized or shared with additional parties. 

Threats to our privacy don’t stop there. A VR/AR setup will also be densely packed with cameras, microphones, and myriad other sensors to help us interact with the real world—or at least not crash into it. That means information about your home, your office, or even your community is collected and potentially available to the government. Even if you personally never use this equipment, sharing a space with someone who does puts your privacy at risk. Without meaningful user consent and restrictions on collection, a menacing future may take shape where average people using AR further proliferate precise audio and video surveillance in public and private spaces. It’s not hard to imagine these raw data feeds integrating with new generations of automatic mass surveillance technology such as face recognition.

Companies like Oculus need to do more than “think about privacy.” Industry leaders need to commit to the principles of privacy by design, security, transparency, and data minimization. By default, only data necessary to the core functions of the device or software should be collected; even then, developers should utilize encryption, delete data as soon as reasonably possible, and keep this data on local devices. Any collection or use of information beyond this, particularly when shared with additional parties, must be opt-in with specific, freely given user consent. For consent to be freely given, companies should provide an alternative option so the user has the ability to choose. Effective safeguards must also be in place to ensure companies are honoring their promises to users, and to prevent Cambridge-Analytica-type data scandals involving third-party developers. Companies should, for example, carry out a Data Protection Impact Assessment to help them identify and minimize data protection risks when processing is likely to result in a high risk to individuals. While we encourage these companies to compete on privacy, it seems unlikely most tech giants would do so willingly. Privacy must also be the default on all devices, not a niche or premium feature.  
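
To give a sense of what privacy by default could mean in practice, here is a minimal, hypothetical sketch of telemetry settings for a headset. The names are illustrative and not drawn from any vendor's SDK; the point is simply that data sharing beyond core functions stays off until a user affirmatively opts in.

```python
from dataclasses import dataclass

# Hypothetical telemetry policy for a VR headset; field names are illustrative,
# not taken from any vendor's actual API.
@dataclass
class TelemetryPolicy:
    # Data strictly needed for core functions is collected, but stays local.
    collect_room_mapping: bool = True
    room_mapping_stays_on_device: bool = True
    # Anything beyond core function defaults to off.
    share_eye_tracking_with_advertisers: bool = False
    share_usage_with_third_parties: bool = False
    retention_days: int = 30  # delete data as soon as reasonably possible

    def enable_third_party_sharing(self, consented: bool) -> None:
        # Sharing flips on only with specific, freely given consent.
        if consented:
            self.share_usage_with_third_parties = True

policy = TelemetryPolicy()
print(policy)  # everything beyond core functions is off by default
```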

We all need to keep the pressure on state legislatures and Congress to adopt strong comprehensive consumer privacy laws in the United States to control what big tech can get away with. These new laws must not preempt stronger state laws, they must provide users with a private right of action, and they should not include “data dividends” or pay-for-privacy schemes.

Antitrust enforcers should also take note of yet another broken promise about privacy, and think twice before allowing Facebook to acquire data-rich companies like Oculus in the future. Mergers shouldn’t be allowed based on promises to keep the user data from acquired companies separate from Facebook’s other troves of user data when Facebook has broken such promises so many times before.

The future of privacy in VR/AR will depend on swift action now, while the industry is still budding. Developers need to be critical of the technology and information they utilize, and of how they can make their work more secure and transparent. Enthusiasts and reviewers should prioritize open and privacy-conscious devices while these devices are still mere entertainment accessories. Activists and researchers must create a future where AR and VR work in the best interests of users, and society overall. 

Left unchecked, we fear VR/AR development will follow the trail left by smartphones and IoT. Developers, users, and the government must ensure it does not ride its hype into an inescapable, insecure, proprietary, and privacy-invasive ecosystem. The hardware and software may go a long way towards fulfilling long-promised aspects of technology, but it must not do so while trampling on our human rights.

Rory Mir

Courts Shouldn’t Stifle Patent Troll Victims’ Speech

3 months ago

In the U.S., we don’t expect or allow government officials, including judges, to be speech police. Courts are allowed to restrain speech only in the rarest circumstances, subject to strict limitations. So we were troubled to learn that a judge in Missouri has issued an order stifling the speech of a small company that’s chosen to speak out about a patent troll lawsuit that was filed against it.

Mycroft AI, a company with nine employees that makes open-source voice technology, published a blog post on February 5 describing how it had been threatened by a patent troll called Voice Tech Corporation. Like all patent trolls, Voice Tech doesn’t offer any services or products. It simply owns patents, which it acquired through more than a decade of one-party argumentation with the U.S. Patent Office.  

Voice Tech’s two patents describe nothing more than using voice commands, together with a mobile device, to perform computer commands. It’s the bare statement of an idea, without any executable instructions, that has appeared in science fiction for more than 50 years. (In fact, Mycroft is named after a supercomputer in the Robert Heinlein novel The Moon Is a Harsh Mistress.) When Voice Tech used these patents to threaten and then sue [PDF] Mycroft AI, the company’s leadership decided not to pay the $30,000 that was demanded for these ridiculous patents. Instead, they fought back—and they asked their community for help.  

“Math isn’t patentable and software shouldn’t be either,” wrote Mycroft First Officer Joshua Montgomery in the blog.  “I don’t often ask this, but I’d like for everyone in our community who believes that patent trolls are bad for open source to re-post, link, tweet and share this post.  Please help us get the word out by sharing this post on Facebook, LinkedIn, Twitter, or email.” 

Montgomery also said that he’d “always wanted to be a troll hunter,” and that in his opinion, when confronted with matters like this, “it’s better to be aggressive and ‘stab, shoot and hang’ them, then dissolve them in acid.” He included a link to a piece of state legislation he opposed last year, where he’d used the same quote.  

That tough language got attention, and the story went viral on forums like reddit and Hacker News. The lawsuit, and the post, were also covered by tech publications like The Register and Techdirt. According to Mycroft, it led to an outpouring of much-needed support. 

The Court Steps In

According to Voice Tech, however, it led to harassment.  The company responded by asking the judge overseeing the case, U.S. District Judge Roseann Ketchmark of the Western District of Missouri, to intervene. Voice Tech suggested the post had led to both harassment of its counsel and a hacking attempt. Mycroft strenuously denied any harassment or hacking, and said it would “admonish and deny” any personal attacks.

Unfortunately, Judge Ketchmark not only accepted Voice Tech’s argument about harassment, but ordered Mycroft to delete portions of the blog post. What is worse, she ordered Mycroft to stop reaching out to its own open source community for support. Mycroft was specifically told to delete the request that “everyone in our community who believes that patent trolls are bad for open source” re-post and spread the news.

To be clear, if the allegations are true, Voice Tech’s counsel has a right to respond to those who are actually harassing him. This ruling, however, is deeply troubling. It does not appear as though there was sufficient evidence for the court to find that Mycroft’s colorful post led directly to the harassment—an essential (though not sufficient) requirement before prohibiting a party from sharing their opinions about a case.

But the public has a right to know what is happening in this case, and Mycroft has a right to share that information – even couched in colorful language. The fact that some members of the public may have responded negatively to the post, or even attempted to hack Voice Tech, doesn’t justify overriding that right without strong evidence showing a direct connection between Mycroft’s post and the harassment of Voice Tech’s counsel. 

Patent Trolls and Speech Police 

It gets worse. Apparently emboldened by its initial success, Voice Tech continues to press for more censorship.

In June, Mycroft published an update on its Mark II product. While the company anticipates delivery in 2021, Montgomery wrote that “progress is dependent on staffing and distractions like patent trolls,” and linked to a recent Techdirt article. Voice Tech quickly kicked into overdrive and wrote a note to Mycroft demanding the removal of a link, and a redaction:  

Voice Tech demands that Mycroft remove the link to the TECHDIRT article and redact the original article on the Mycroft Community Forum by no later than the close of business on Wednesday, July 22, 2020. If Mycroft fails to comply, Voice Tech will have no option but to file a motion for contempt with the Court.

Mycroft has removed the link. Voice Tech has also sought to censor third-party journalism about the case, like that published in Techdirt. 

It’s bad enough when small companies like Mycroft AI are subject to threats and litigation over patents that seem to be little more than science-fiction documents issued by a broken bureaucracy. But it’s even more outrageous when they can’t talk about it freely. No company should have to suffer in silence about the damage that patent trolls do to their businesses, to their communities, and to the public at large. We hope Judge Ketchmark quickly reconsiders and rescinds her troubling gag order. And we’re glad to see that Mycroft AI has been willing to put up a legal fight against these clearly invalid patents. 

Joe Mullin

EFF Sues Texas A&M University Once Again to End Censorship Against PETA on Facebook and YouTube

3 months 1 week ago

This week, EFF filed suit to stop Texas A&M University from censoring comments by PETA on the university’s Facebook and YouTube pages.

In light of the COVID-19 pandemic, Texas A&M held its spring commencement ceremonies online, with broadcasts over Facebook and YouTube. Both the Facebook and YouTube pages had comment sections open to any member of the public—but administrators deleted comments that were associated with PETA’s high-profile campaign against the university’s muscular dystrophy experiments on golden retrievers and other dogs.

Where government entities such as Texas A&M open online forums to the public, the First Amendment prohibits them from censoring comments merely because they don’t like the content of the message or the viewpoint expressed. On top of that, censoring comments based on their message or viewpoint also violates the public’s First Amendment right to petition the government for redress of grievances.

Texas A&M knows this well, because this is not the first time we’ve sued them for censoring comments online. Back in 2018, EFF brought another First Amendment lawsuit against Texas A&M for deleting comments by PETA and its supporters about the university’s dog labs from the Texas A&M Facebook page. This year, in a big win for free speech, the school settled with PETA and agreed to stop deleting comments from its social media pages based on the comments’ messages.   

We are disappointed that Texas A&M has continued to censor comments by PETA’s employees and supporters without regard for the legally binding settlement agreement that it signed just six months ago, and hope that the federal court will make clear to the university once and for all that its censorship cannot stand.  

EFF is joined by co-counsel PETA Foundation and Rothfelder Falick LLP of Houston.

Related Cases: PETA v. Texas A&M
Naomi Gilens

Proctoring Apps Subject Students to Unnecessary Surveillance

3 months 1 week ago

With COVID-19 forcing millions of teachers and students to rethink in-person schooling, this moment is ripe for innovation in learning. Unfortunately, many schools have simply substituted surveillance technology for real transformation. The use of proctoring apps—privacy-invasive software products that “watch” students as they take tests or complete schoolwork—has skyrocketed. These apps make a seductive promise: that schools can still rely on high-stakes tests, where they have complete control of a student's environment, even during remote learning. But that promise comes with a huge catch—these apps violate student privacy, negatively impact some populations, and will likely never fully stop creative students from outsmarting the system.

Through a series of privacy-invasive monitoring techniques, proctoring apps purport to determine whether a student is cheating. Recorded patterns of keystrokes and facial recognition supposedly confirm whether the student signing up for a test is the one taking it; gaze-monitoring or eye-tracking is meant to ensure that students don’t look off-screen too long, where they might have answers written down; microphones and cameras record students’ surroundings, broadcasting them to a proctor, who must ensure that no one else is in the room. Even if these features were successful at rooting out all cheating, which is extremely unlikely, what these tools amount to is compelled mass biometric surveillance of potentially millions of students, whose success will be determined not by correct answers, but by algorithms that decide whether or not their “suspicion” score is too high.
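
To make concrete what an algorithmic “suspicion” score can amount to, here is a purely illustrative toy heuristic that combines the kinds of signals described above. It is not any proctoring vendor's actual algorithm; the arbitrary weights and threshold are the point, since numbers like these, rather than correct answers, decide whether a student is flagged.

```python
# Purely illustrative toy heuristic; not any proctoring vendor's actual algorithm.
# The weights and threshold are arbitrary, which is exactly the problem:
# a student's fate hinges on numbers like these rather than on their answers.
def suspicion_score(seconds_gaze_off_screen, face_match_confidence,
                    keystroke_anomaly, other_voices_detected):
    score = 0.0
    score += 0.5 * (seconds_gaze_off_screen / 60)   # looking away from the screen
    score += 2.0 * (1.0 - face_match_confidence)    # weak facial-recognition match
    score += 1.5 * keystroke_anomaly                # typing rhythm differs from baseline
    score += 3.0 if other_voices_detected else 0.0  # microphone picked up another voice
    return score

FLAG_THRESHOLD = 2.5  # arbitrary cutoff

# A nervous student who glances away and has a sibling talking in the next room:
print(suspicion_score(90, 0.8, 0.3, True) > FLAG_THRESHOLD)  # flagged
```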

Much of this technology is effectively indistinguishable from spyware, which is malware that is commonly used to track unsuspecting users’ actions on their devices and across the Internet. It also has much in common with “bossware,” the invasive time-tracking and worker “productivity” software that has grown in popularity during the pandemic. EFF has campaigned against the pervasive use of both of these tools, demanding anti-virus companies recognize spyware more explicitly, and pushing employers to minimize their use of bossware. 

In addition to the invasive gathering of biometric data, proctoring services gather and retain personally identifiable information (PII) on students—sometimes through their schools, or by requiring students to input this data in order to register for an account. This can include full name, date of birth, address, phone number, scans of government-issued identity documents, educational institution affiliation, and student ID numbers. Proctoring companies also automatically gather data on student devices, regardless of whether they are school-issued devices or not. These collected logs can include records of operating systems, make and model of the device, as well as device identification numbers, IP addresses, browser type and language settings, software on the device and their versions, ISP, records of URLs visited, and how long students remain on a particular site or webpage. 

The companies retain much of what they gather, too—whether that’s documentation or video of bedroom scans. Some companies, like ProctorU, have no time limits on retention. Some of this information they share with third parties. And when student data is provided to the proctoring company by an educational institution, students are often left without a clear way to request that their data be deleted because they aren’t considered the data’s “owner.”

The leveraging of student data for commercial purposes isn’t the only risk to student privacy—as we’ve noted time and time again, gathering vast amounts of data on people is unwise given frequent breaches and subsequent data dumps. ProctorU found that out recently, when over 440,000 user records for their proctoring service were leaked on a hacker forum last month, including “email addresses, full names, addresses, phone numbers, hashed passwords, the affiliated organization, and other information.”

Aside from privacy concerns, these tools could easily penalize students who don’t have control over their surroundings, or those with less functional hardware or low-speed Internet. Students who don’t have home Internet access at all are locked out of testing altogether. These tools could also cause havoc for students who already have trouble focusing during tests, either because they have difficulty maintaining “eye contact” with their device, or simply because tests make them nervous. Software that assumes all students take tests the same way—in rooms that they can control, their eyes straight ahead, fingers typing at a routine pace—is undoubtedly leaving some students out. 

No student should be forced to make the choice to either hand over their biometric data and be surveilled continuously or to fail their class. A solution that requires students to surrender the security of their personal biometric information and give over video of their private spaces is no solution at all. 

Technology has opened up unprecedented opportunities for learning at a distance, and COVID-19 has forced us to use that technology on a scale never seen before. Yet schools must accept that they cannot have complete control of a student's environment when they are at home, nor should they want to. Proctoring apps fall short on multiple fronts: they invade students’ privacy, exacerbate existing inequities in educational outcomes, and can never fully match the control schools are used to enforcing in the test hall.

Educational institutions will need to adapt fundamentally to distance learning. New technologies and new teaching methods will be a part of that. Perhaps schools will need to reevaluate the need for closed book exams, or use fewer tests overall as compared to project-based assessments. Regardless, they should not rely on invasive proctoring apps to attempt to replace methods that only work in person. Surveillance tech has already crept into many areas of education, with some schools tracking students’ social media activity, others requiring students to use technology that collects and shares private data with third-party companies, and others implementing flawed facial recognition technology in the name of safety. While there are ways to fight back against some common school surveillance, it becomes increasingly difficult when that surveillance is directly tied to students’ evaluations and ultimate success. Teachers, parents, and students must not allow remote learning to become remote surveillance.

If you currently have or previously had a user account for ProctorU, check if your account was compromised in this breach at have i been pwned? and update your password.

Jason Kelley

EFF Calls on California Gov. Newsom To Mandate Data Privacy Protections for Californians Who Participate in COVID-19 Contact Tracing Programs

3 months 1 week ago
State, Private Companies Should Limit Data Collection and Retention

San Francisco—The Electronic Frontier Foundation (EFF) called on California Gov. Gavin Newsom and state lawmakers to ensure that all COVID-19 contact tracing programs include enforceable privacy protections that strictly limit how much and what kinds of data can be collected from Californians and prohibit using that data for anything other than reining in the pandemic.

More Californians will feel safe participating in efforts to trace transmission of the novel coronavirus if they know their information won’t be used to deport them or build data-rich profiles for data brokers and advertisers, EFF said this week in letters to Newsom and lawmakers. ACLU of California, Oakland Privacy, Media Alliance, Privacy Rights Clearinghouse, and Consumer Reports joined EFF in signing the letters.

“The success of contact tracing programs depends on participation by the public,” said EFF Legislative Activist Hayley Tsukayama. “Trust has been an issue—people are demanding protection over their private information. As a national leader in privacy and coronavirus policy-making, California should implement guardrails to prevent unwarranted privacy invasions and engender people’s trust that it’s OK to take part in contact tracing programs.”

EFF and its partners urged Newsom and lawmakers to bar the state, and the private companies and contractors it works with to develop and implement manual and digital contact tracing programs, from collecting, retaining, using, or disclosing data except as necessary and proportionate to control the spread of COVID-19.

All contracts and agreements with outside companies should contain language that blocks them from using data for targeted advertising or other commercial purposes and combining participant data with any other data the companies may have. Data should be retained for no more than 30 days.

Contact-tracing programs should also be prohibited from discriminating against people on the basis of participation or nonparticipation. No one should be kept out of a workplace, school, or restaurant because they declined to participate in a contact-tracing program, privacy advocates said in the letters.

The state is stepping up contact tracing programs, announcing last week that Kaiser Permanente will donate $63 million to support the state’s work. Under its current contact tracing program, California Connected, public health workers will reach out to people who tested positive for COVID-19 via texts, phone calls, and emails. Those contacted are asked to give their names, ages, places they’ve been, and people they have been in contact with. The program pledges to keep the information confidential.

“Two bills currently before the California legislature—A.B. 1782 and A.B. 660—contain the important privacy protections we’re calling for,” said Tsukayama. “Ensuring people’s privacy at this time of uncertainty about our health and safety is the right thing to do. We urge Gov. Newsom to take the necessary steps to make our COVID-19 privacy protections a model for the rest of the country.”

For the letter:
https://www.eff.org/document/2020-08-coalition-letter-governorlegislature-contact-tracing-and-privacy

For more about contact tracing and privacy:
https://www.eff.org/deeplinks/2020/08/california-must-recognize-privacy-vital-public-health-efforts

Contact: Hayley Tsukayama, Legislative Activist, hayleyt@eff.org
Karen Gullo

California Must Recognize That Privacy is Vital to Public Health Efforts

3 months 1 week ago

Californians have a constitutional right to privacy. There is no more important time to protect that right to privacy than during a crisis, such as the current pandemic. That is why EFF, along with the American Civil Liberties Union of California, Media Alliance, Oakland Privacy, Privacy Rights Clearinghouse, and Consumer Reports, has called on the state’s political leaders to ensure that any program that asks Californians to share contact tracing information has strong privacy guardrails.

Being upfront and honest about what information contact tracing programs collect, how that information is used, and acting from the start to protect against abuses of that information can protect Californians at a vulnerable time. It can also increase trust in public health programs. The evidence is mounting that people don’t trust—and therefore do not wish to participate in—programs that have not respected privacy from the start. Our groups call on Governor Gavin Newsom, Senate President pro Tempore Toni Atkins, Assembly Speaker Anthony Rendon, and all members of the California Assembly and Senate to recognize that privacy protections are necessary to public health efforts.

Our coalition asks for the following four common-sense protections:

  • A data minimization rule that ensures that the information a public or private entity collects actually serves a public health purpose.
  • A guarantee that any private entity working on a contact tracing program does not use the information for any other purpose—including, but not limited to, commercial purposes.
  • A prohibition from discriminating against people based on their participation—or nonparticipation—in a contact-tracing program, to protect those who cannot or do not want to participate in a data collection program, and to avoid programs with compulsory participation, which also risks declines in the quality of data.
  • A strong requirement to purge data from such programs when it is no longer useful—we are asking for a 30-day retention period. We would not, however, object to a narrowly-crafted exception from this data purge rule for a limited amount of aggregated and de-identified demographic data for public health purposes—for the sole purpose of tracking inequities in public health response to the crisis.

EFF also believes that the following additional guardrails are necessary for manual and automated contact tracing programs:

  • A ban on location tracing as a part of Tech-Assisted Contact Tracing. Location data (such as GPS and cell site location) is not sufficiently granular to identify whether two people were close enough together to transmit COVID-19. But it is sufficiently precise to show whether a person attended a protest, a worship service, or a hospital appointment. Thus, location tracking invades privacy without advancing public health. It might be possible to use Bluetooth-based proximity data to provide automated exposure notification in a privacy-preserving manner. But such systems must not use location data.
  • A prohibition against contact tracing by state and local law enforcement. Many people will share less of their personal information if they fear the government will use it against them. This would frustrate containment of the outbreak.
  • Effective enforcement of these privacy rights with a private right of action. Every person should be able to act as their own privacy enforcer. Private rights of action are a standard feature of legislation that protects people from governmental and corporate wrongdoing. Violations of privacy regarding contact tracing information should be no different.

EFF, along with many other privacy groups, strongly supports two bills currently in the California legislature—A.B. 1782 (Chau/Wicks) and A.B. 660 (Levine)—that include these and other important protections. We thank those authors for their work, and will continue to work to pass those bills in the legislature.

Respecting privacy can help establish much-needed trust in these programs, which will in turn increase their efficacy in addressing the current public health crisis. It is also simply the right thing to do.

Hayley Tsukayama