EFF, CDT Sue Government To Obtain Records About Federal Agencies Pulling Advertising From Platforms

2 months ago
Reducing Federal Dollars Based on Recipients’ Perceived Viewpoints Violates First Amendment

Washington, D.C.—The Electronic Frontier Foundation (EFF) and the Center for Democracy and Technology (CDT) today filed a Freedom of Information Act (FOIA) lawsuit against the government to obtain records showing whether federal agencies have cut their advertising on social media as part of President Trump’s broad attack on speech-hosting websites he doesn’t like.

In May Trump issued an executive order in retaliation against platforms that exercise their constitutionally-protected rights to moderate his and other users’ posts. The order gave executive departments and agencies 30 days to report their online ad spending to the Office of Management and Budget (OMB). The Department of Justice (DOJ) was charged with assessing the reports for alleged “viewpoint-based speech restrictions” that would make them “problematic vehicles” for government ads.

EFF and CDT seek records, reports, and communications about the spending review to shine a light on whether agencies have stopped or reduced online ad spending, or are being required to do so, based on a platform’s editorial decisions to flag or remove posts. Although the government has broad discretion to decide where it spends its advertising dollars, reducing or denying federal dollars they previously planned to spend on certain platforms based on officials’ perception of those entities' political viewpoints violates the First Amendment.

“The government can’t abuse its purse power to coerce online platforms to adopt the president’s view of what it means to be ‘neutral,’” said EFF Staff Attorney Aaron Mackey. “The public has a right to know how the government is carrying out the executive order and if there’s evidence that the president is retaliating against platforms by reducing ad spending.”

On top of being unconstitutional, the ad spending reduction is also dangerous. Federal agencies advertise on social media to communicate important messages to the public, such as ads encouraging the use of masks to fight the spread of COVID-19 and warning of product recalls. Pulling online ads could have a potentially serious impact on the public’s ability to receive government messaging.

“The President continues his attempts to bully social media companies when he disagrees with their editorial choices. These threats are not only unconstitutional, but have real-life implications for internet users and voters alike,” said Avery Gardiner, CDT’s General Counsel. “CDT will continue to push for these documents to ensure the U.S. government isn’t using the power of its advertising purse to deter social media companies from fighting misinformation, voter suppression, and the stoking of violence on their platforms.”

EFF and CDT filed FOIA requests with OMB and DOJ in July for records; neither agency has responded or released responsive documents.

Trump issued the executive order aimed at online speech platforms a few days after Twitter marked his tweets for fact-checking because they were “potentially misleading.”

Private entities like social media companies, newspapers, and other digital media have a First Amendment right to edit and curate user-generated content on their sites. Trump’s order, which contains several provisions, including the one at issue in this lawsuit, would give the government power to punish platforms for their editorial decisions. It would strip them of protections provided by 47 U.S.C. § 230, often called Section 230, which grants online intermediaries broad immunity from liability arising from hosting users’ speech.

For the complaint:

For more on the Executive Order:

Contact: Aaron Mackey, Staff Attorney, amackey@eff.org; Avery Gardiner, General Counsel, Center for Democracy and Technology, agardiner@cdt.org
Karen Gullo

Exposing Your Face Isn't a More Hygienic Way to Pay

2 months ago

A company called PopID has created an identity-management system that uses face recognition. Its first use case is in-store, point-of-sale payments that use face recognition to authorize payment.

They are promoting it as a tool for restaurants, claiming that it is pandemic-friendly because it is contactless.

However, the PopID payment system is less secure than alternatives, unfriendly to privacy, and likely riskier than other payment options for anyone concerned about catching COVID-19. On top of these issues, PopID is pitching it as a screening tool for COVID-19 infection, another task for which it's completely unsuited.

Equities issues

It's important that payment systems not disadvantage cash payments, which have the best social equity. Many people are under-banked, and in hard times such as these, many use cash as a way to manage their budgets and spending. Cash is also the most privacy-friendly way to pay. As convenient as other systems are, and despite cash not being contactless, we need to protect people's ability to use cash.[1]

PopID is a charge-up-and-spend system. To lower its costs, PopID has users charge up an account with it using a credit or debit card, and payments are deducted from that balance. Charge-and-spend systems are good for the store and less good for the person using them; they amount to an interest-free loan that the consumer gives the merchant. This is no small thing: Starbucks, PayPal, and Walmart all hold billions in interest-free loans from their customers. This further disadvantages people on tight budgets, as it requires them to give PopID money before it is spent and to keep a balance in the system in anticipation of spending it.

PopID also requires their customers to have a smartphone for enrollment-by-selfie, which disadvantages those who don't have one.

To be fair, these issues are largely fixable. PopID could allow someone to enroll without a phone at any payment station. It could allow charge-up with cash, and it could allow direct charge.[2] But for now, the company does not offer these easy solutions.

Fitness to task

Looking beyond its potentially fixable perpetuation of systemic inequalities, it's important that a system actually do what it's intended to do. PopID pitches its system as pandemic-friendly, providing both contactless payments and COVID-19 screening, using the camera as a temperature sensor. Neither of these is a good idea.

Temperature scanning with commodity cameras won't work

PopID promotes their system as a temperature scanning device for employees and customers alike. Temperature screening itself has limited benefit, as around half the people who have COVID-19 are asymptomatic.

Moreover, accurate temperature screening is expensive and hard. PopID is not the only organization to promote cheap face recognition with COVID-19 screening as the excuse. In reality, the cheap camera in a point-of-sale terminal is both inaccurate and intrusive as Jay Stanley of the ACLU describes in detail.

There's a wide range in the accuracy of temperature-scanning cameras, in normal human body temperature across a population, and even in an individual's temperature depending on the time of day and their physical activity. Even the best cameras are finicky: they don't work accurately if people are wearing hats, glasses, or masks, and they require the camera to view only one subject at a time.

Speeding up a sandwich shop line does help prevent COVID-19, because we know that spending too much time too close to other people is the primary mode of transmission. But temperature scanning along with payment doesn't help people space themselves out or shorten their contact.

Face recognition raises COVID-19 risks

PopID pitches their system as good during the pandemic because it is contactless. Yet it is worse than payment alternatives.

PopID's web site shows a picture of a payment terminal, with options to use contactless payment systems such as Apple Pay, Google Pay, and Samsung Pay. Presumably, any contactless credit card could be used. Additionally, a barcode system like the one Starbucks uses is contactless.

PopID's point of sale terminal

Any of these contactless payment alternatives are much better than PopID from a public health standpoint because they don't require someone to remove their mask. The LA Times article comments parenthetically, "(The software struggles at recognizing faces with masks.)"

Indeed, any contactless payment system has less contact than using cash, yet even cash is low-risk. Almost all COVID-19 transmission is through breathing in virus particles in droplets or aerosols, not from fomites that we touch. Moreover, cash is easy to wash in soapy water.

This is a big deal for a supposedly pandemic-friendly system. The most recent restaurant-based superspreading event in the news is particularly relevant. A person in South Korea sat for two hours in a coffee shop under the air conditioning, and spread the disease to twenty-seven other people, who in turn spread it to twenty-nine other people, for a total of fifty-six people. And yet, none of the mask-wearing employees got the virus.

This is particularly relevant to PopID: a contactless system that makes someone take off a mask endangers the other customers. Ironically, if a customer sees a store using PopID, they had better be wearing a mask, because PopID will require it to come off momentarily. Or they could just shop somewhere else.

Security issues
PopID brings in new security risks that do not exist in other systems. The company has the user's payment information (for charging up their account), their phone number (it's part of registration), their name, and of course the selfie that's used for face recognition. There's no reason to suppose PopID is any worse than the cloud services that inevitably lose people's information, but no reason to think it's better. Thus, we should assume that eventually a hacker is going to get all that information.

However, being a payment system, there is the obvious additional risk of fraud. PopID says, "Your Face now becomes your singular, ultra-secure 'digital token' across all PopID transactions and devices," yet that can't possibly be so.

Face recognition systems are well known to be inaccurate, as NIST recently showed, particularly for Black, Indigenous, Asian, and other People of Color, for women, and for trans and nonbinary people. False positives are common, and in a payment system, a false positive means a false charge. PopID says it will confirm any match by asking the person their name. To be fair, this is not a bad secondary check, but it is hardly "ultra-secure." Moreover, it requires every PopID customer to tell the whole store their name (or use a PopID pseudonym).
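The base-rate arithmetic behind this concern can be sketched with a quick, illustrative calculation. The per-comparison false-match rate and enrollment size below are assumptions chosen for the example, not measured figures from PopID or NIST:

```python
# Illustrative only: in one-to-many face identification, each lookup
# compares one face against every enrolled face, so the odds that some
# stranger falsely matches grow with the size of the enrolled database.

def p_false_match(fmr_per_comparison: float, enrolled: int) -> float:
    """Probability that at least one enrolled person falsely matches."""
    return 1 - (1 - fmr_per_comparison) ** enrolled

# Even an excellent per-comparison false-match rate of 1 in 100,000
# becomes a real problem once 100,000 users are enrolled:
print(round(p_false_match(1e-5, 100_000), 3))  # ~0.632
```

This is why a secondary check (like asking a name) is necessary at all, and also why the secondary check, not the face, is doing much of the security work.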

Lastly, PopID doesn't say how they'll permit someone to dispute charges, an important factor since the credit card industry is regulated with excellent consumer protection. In the event of fraud, it's much easier to be issued a new credit card than a new face.

The end result is that PopID's pay-by-face is less secure than using a contactless card, and less secure than cash.

Privacy issues
PopID is an incipient privacy nightmare. The obvious privacy issues of an unregulated payment system that knows where your face has been are only the start of the problem. The LA Times writes:

But [CEO of PopID, John] Miller’s vision for a face-based network goes beyond paying for lunch or checking in to work. After users register for the service, he wants to build a world where they can “use it for everything: at work in the morning to unlock the door, at a restaurant to pay for tacos, then use it to sign in at the gym, for your ticket at the Lakers game that night, and even use it to authenticate your age to buy beers after.”

“You can imagine lots of things that you can do when you have a big database of faces that people trust,” Miller said.

Nothing more needs to be said. PopID as a payment system is a stalking horse for a face-surveillance panopticon and salable database of trusted faces.

Conclusion
PopID is less secure and less private than alternative forms of payment, contactless or not. It brings with it a host of social equity issues that negatively impact marginalized communities. Moreover, any store using PopID, and thus requiring people to remove their masks to pay, is exposing you to COVID-19 risk you would not otherwise face.

Most alarmingly, it is also an insecure for-profit surveillance system building a database of you, your face, your purchases, your movements, and your habits.

  1. This is a complex issue in that we all intuitively think of money as dirty, and in pandemic times, this is even more on everyone's mind. However, the evidence at this writing (September, 2020) is that exposure through touch is possible but not common, while transmission through breath is the way almost all transmission occurs. ↩︎

  2. Yes, these potential fixes are in tension with each other. If someone using the system wants to be cash-only, they're going to have to keep a pre-paid balance in the system. In the other direction, a direct-charge system has higher costs to PopID, but that's the company's business issue, not the customer's. ↩︎

Jon Callas

A Look-Back and Ahead on Data Protection in Latin America and Spain

2 months 1 week ago

We're proud to announce a new updated version of The State of Communications Privacy Laws in eight Latin American countries and Spain. For over a year, EFF has worked with partner organizations to develop detailed questions and answers (FAQs) around communications privacy laws. Our work builds upon previous and ongoing research of such developments in Argentina, Brazil, Chile, Colombia, Mexico, Paraguay, Panama, Peru, and Spain. We aim to understand each country’s legal challenges, in order to help us spot trends, identify the best and worst standards, and provide recommendations to look ahead. This post about data protection developments in the region is one of a series of posts on the current State of Communications Privacy Laws in Latin America and Spain. 

As we look back at the past ten years in data protection, we have seen considerable legal progress in granting users control over their personal lives. Since 2010, sixty-two more countries have enacted data protection laws, bringing the worldwide total to 142. In Latin America, Chile was the first country to adopt such a law in 1999, followed by Argentina in 2000. Several countries have now followed suit: Uruguay (2008), Mexico (2010), Peru (2011), Colombia (2012), Brazil (2018), Barbados (2019), and Panama (2019). While there are still different privacy approaches, data protection laws are no longer a purely European phenomenon.

Yet, contemporary developments in European data protection law continue to have an enormous influence in the region—in particular, the EU's 2018 General Data Protection Regulation (GDPR). Since 2018, several countries, including Barbados and Panama, have led the way in adopting GDPR-inspired laws in the region, promising the beginning of a new generation of data protection legislation. In fact, the privacy protections of Brazil’s new GDPR-inspired law took effect last week, on September 18, after the Senate pushed back on a delaying order from President Jair Bolsonaro.

But when it comes to data protection in the law enforcement context, few countries have followed the European Union's latest steps. The EU Police Directive, a law on the processing of personal data by police forces, has not yet become a Latin American phenomenon; Mexico is the only country with a specific data protection regulation for the public sector. As a result, countries in the Americas are missing a crucial opportunity to strengthen their communications privacy safeguards with rights and principles common to the global data protection toolkit.

New GDPR-Inspired Data Protection Laws
Brazil, Barbados, and Panama have been the first countries in the region to adopt GDPR-inspired data protection laws. Panama’s law, approved in 2019, will enter into force in March 2021.

Brazil’s law has faced an uphill battle. The provisions creating the oversight authority came into force in December 2018, but it took the government a year and a half to introduce a decree implementing its structure. The decree, however, will only have legal force when the President of the Board is officially appointed and approved by the Senate; no appointment has been made as of this post's publication. The rest of the law was originally due to enter into force in February 2020. That deadline was later changed to August 2020, and the law was then further delayed to May 2021 through an Executive Act issued by President Bolsonaro. Yet, in a surprising positive twist, Brazil's Senate stopped the deferral in August. That means the law is now in effect, except for the penalties section, which has been deferred again, to August 2021.

Definition of Personal Data 
Like the GDPR, Brazil and Panama's laws include a comprehensive definition of personal data. It includes any information concerning an identified or identifiable person. The definition of personal data in Barbados’s law has certain limitations. It only protects data which relates to an individual who can be identified “from that data; or from that data together with other information which is in the possession of or is likely to come into the possession of the provider.” Anonymized data in Brazil, Panama, and Barbados falls outside the scope of the law. There are also variations in how these countries define anonymized data.

Panama defines it as data that cannot be re-identified by reasonable means, though the law doesn't set explicit parameters to guide this assessment. Brazil’s law makes clear that anonymized data will be considered personal data if the anonymization process is reversed using exclusively the provider’s own means, or if it can be reversed with reasonable efforts. The Brazilian law defines objective factors for determining what’s reasonable, such as the cost and time necessary to reverse the anonymization process given the technologies available, and the exclusive use of the provider's own means. These parameters affect big tech companies with extensive computational power and large collections of data, which will need to determine whether their own resources could be used to re-identify anonymized data. This provision should not be interpreted in a way that ignores scenarios where sharing or linking anonymized data with other data sets, or with publicly available information, leads to re-identification.

Right to Portability 
The three countries grant users the right to portability—a right to take their data from a service provider and transfer it elsewhere. Portability adds to the so-called ARCO (Access, Rectification, Cancellation, and Opposition) rights—a set of users’ rights that allow them to exercise control over their own personal data.

Enforcers of portability laws will need to make careful decisions about what happens when one person wants to port away data that relates both to them and another person, such as their social graph of contacts and contact information like phone numbers. This implicates the privacy and other rights of more than one person. Also, while portability helps users leave a platform, it doesn’t help them communicate with others who still use the previous one. Network effects can prevent upstart competitors from taking off. This is why we also need interoperability to enable users to interact with one another across the boundaries of large platforms. 

Again, different countries have different approaches. The Brazilian law tries to solve the multi-person data and interoperability issues by not limiting the "ported data" to data the user has given to a provider. It also doesn't specify the format to be adopted; instead, the data protection authority can set standards for, among other things, interoperability, security, retention periods, and transparency. In Panama, portability is both a right and a principle: it is one of the general data protection principles that guide the interpretation and implementation of the country's overarching data protection law. As a right, it resembles the GDPR model. The user has the right to receive a copy of their personal data in a structured, commonly used, and machine-readable format. The right applies only when the user has provided their data directly to the service provider and has given their consent, or when the data is needed for the execution of a contract. Panama’s law expressly states that portability is “an irrevocable right” that can be requested at any moment.

Portability rights in Barbados are similar to those in Panama. But, like the GDPR, there are some limitations: users can only exercise the right to port their data directly from one provider to another when technically feasible. And as in Panama, users can port only data they have provided themselves to the providers, not data about themselves that other users have shared.

Automated Decision-Making About Individuals
Automated decision-making systems are making continuous decisions about our lives to aid or replace human decision making. So there is an emerging GDPR-inspired right not to be subjected to solely automated decision-making processes that can produce legal or similarly significant effects on the individual. This new right would apply, for example, to automated decision-making systems that use “profiles” to predict aspects of our personality, behavior, interests, locations, movements, and habits. With this new right, the user can contest the decisions made about them, and/or obtain an explanation about the logic of the decision. Here, too, there are a few variations among countries. 

Brazilian law establishes that the user has a right to review of decisions affecting them that are based solely on the automated processing of personal data. These include decisions intended to define personal, professional, consumer, or credit profiles, or other traits of someone’s personality. Unfortunately, President Bolsonaro vetoed a provision requiring human review in this automated decision-making. On the upside, the user has the right to ask the provider to disclose the criteria and procedures adopted for automated decision-making, though unfortunately there is an exception for trade and industrial secrets.

In Barbados, the user has the right to know, upon request to the provider, about the existence of decisions based on the automated processing of personal data, including profiling. As in other countries, this includes access to information about the logic involved and the envisaged consequences for them. Barbados users also have the right not to be subject to automated decision-making without human involvement where the decision would produce legal or similarly significant effects on them, including profiling. There are exceptions when automated decisions are necessary for entering into or performing a contract between the user and the provider, authorized by law, or based on user consent. Barbados defines consent similarly to the GDPR: there must be a freely given, specific, informed, and unambiguous indication of the user's wishes regarding the processing of their personal data, and the user retains the ability to change their mind.

Panama's law also grants users the right not to be subject to a decision based solely on the automated processing of their personal data, without human involvement, but this right only applies when the process produces negative legal effects on the user or detrimentally affects their rights. As in Barbados, Panama allows automated decisions that are necessary for entering into or performing a contract, based on the user’s consent, or permitted by law. But Panama defines “consent” in a less user-protective manner: a person need only provide a “manifestation” of their will.

Legal Basis for the Processing of Personal Data
It is important for data privacy laws to require service providers to have a valid lawful basis in order to process personal data, and to document that basis before the processing starts; otherwise, the processing is unlawful. Data protection regimes, including all principles and users' rights, must apply regardless of whether consent is required.

Panama’s new law allows three legal bases other than consent: compliance with a contractual obligation, compliance with a legal obligation, or authorization by a particular law. Brazil and Barbados set out ten legal bases for personal data processing (four more than the GDPR), with consent as only one of them. Brazil's and Barbados's laws seek to balance this approach by providing users with clear and concise information about what providers do with their personal data. They also grant users the right to object to the processing of their data, which allows users to stop or prevent processing.

Data Protection in the Law Enforcement Context
Latin America lags in adopting a comprehensive data protection regime that applies not just to corporations, but also to public authorities processing personal data for law enforcement purposes. The EU, on the other hand, has adopted not just the GDPR but also the EU Police Directive, a law that regulates the processing of personal data by police forces. Most Latam data protection laws exempt law enforcement and intelligence activities from the application of the law. However, in Colombia, some data protection rules apply to the public sector. That nation's general data protection law applies to the public sector, with exceptions for national security, defense, anti-money-laundering regulations, and intelligence. The Constitutional Court has stated that these exceptions are not absolute exclusions from the law’s application, but exemptions from only some provisions. Complementary statutory law should regulate them, subject to the proportionality principle.

Spain has not yet implemented the EU’s Police Directive. As a result, personal data processing for law enforcement activities remains held to the standards of the country's previous data protection law. Argentina's and Chile's laws do apply to law enforcement agencies, and Mexico has a specific data protection regulation for the public sector. But Peru and Panama exclude law enforcement agencies from the scope of their data protection laws. Brazil's law creates an exception for personal data processing carried out solely for public safety, national security, and criminal investigations. Still, it provides that specific legislation must be approved to regulate these activities.

Recommendations and Looking Ahead
Communications privacy has much to gain from the intersection of its traditional inviolability safeguards and the data protection toolkit. That intersection helps entrench international human rights standards applicable to law enforcement access to communications data. The principles of data minimization and purpose limitation in the data protection world correlate with the necessity, adequacy, and proportionality principles under international human rights law. They are necessary to curb massive data retention and dragnet government access to data. The idea that any personal data processing requires a legitimate basis upholds the basic tenets of legality and legitimate aim in placing limitations on fundamental rights. Law enforcement access to communications data must be clearly and precisely prescribed by law; no legitimate basis other than compliance with a legal obligation is acceptable in this context.

Data protection transparency and information safeguards reinforce a user’s right to a notification when government authorities have requested their data. European courts have asserted this right stems from privacy and data protection safeguards. In the Tele2 Sverige AB and Watson cases, the EU Court of Justice (CJEU) held that "national authorities to whom access to the retained data has been granted must notify the persons affected . . . as soon as that notification is no longer liable to jeopardize the investigations being undertaken by those authorities." Before that, in Szabó and Vissy v. Hungary, the European Court of Human Rights (ECHR) had declared that notifying users of surveillance measures is also inextricably linked to the right to an effective remedy against the abuse of monitoring powers.

Data protection transparency and information safeguards can also play a key role in fostering greater insight into companies' and governments' practices when it comes to requesting and handing over users' communications data. In collaboration with EFF, many Latin American NGOs have been pushing Internet Service Providers to publish their law enforcement guidelines and aggregate information on government data requests. We've made progress over the years, but there's still plenty of room for improvement. When it comes to public oversight, data protection authorities should have the legal mandate to supervise personal data processing by public entities, including law enforcement agencies. They should be impartial and independent authorities, conversant in data protection and technology, with adequate resources to exercise the functions assigned to them.

There are already many essential safeguards in the Latam region. Most countries’ constitutions explicitly recognize privacy as a fundamental right, and most countries have adopted data protection laws. Each constitution recognizes a general right to private life or intimacy, or a set of multiple, specific rights: a right to the inviolability of communications; an explicit data protection right (Chile, Mexico, Spain); or “habeas data” (Argentina, Peru, Brazil) as either a right or a legal remedy. (In general, habeas data protects the right of any person to find out what data is held about them.) And, most recently, a landmark ruling of Brazil’s Supreme Court recognized data protection as a fundamental right drawn from the country’s Constitution.

Across our work in the region, our FAQs help to spot loopholes, flag concerning standards, and highlight pivotal safeguards (or their absence). It's clear that the rise of data protection laws has helped secure user privacy across the region, but more needs to be done. Strong data protection rules that apply to law enforcement activities would enhance communications privacy protections in the region. More transparency is urgently needed, both in how the regulations will be implemented and in what additional steps private companies and the public sector are taking to proactively protect user data.

We invite everyone to read these reports and reflect on what work we should champion and defend in the days ahead, and what still needs to be done.

Katitza Rodriguez

Three Interactive Tools for Understanding Police Surveillance

2 months 1 week ago

This post was written by Summer 2020 Intern Jessica Romo, a student at the Reynolds School of Journalism at University of Nevada, Reno. 

As law enforcement and government surveillance technology continues to become more advanced, it has also become harder for everyday people to avoid. Law enforcement agencies all over the United States are using body-worn cameras, automated license plate readers, drones, and much more—all of which threaten people's right to privacy. But it's often difficult for people even to learn what technology is being used where they live.

The Electronic Frontier Foundation has three interactive tools that help you learn about the new technologies being deployed around the United States and how they impact you: the Atlas of Surveillance, Spot the Surveillance, and Who Has Your Face?

The Atlas of Surveillance

The Atlas of Surveillance is a database and map that will help you understand the magnitude of surveillance at the national level, as well as what kind of technology is used locally where you live.   

Developed in partnership with the University of Nevada, Reno's Reynolds School of Journalism, the Atlas of Surveillance is a dataset with more than 5,500 points of information on surveillance technology used by law enforcement agencies across the United States. Journalism students and EFF volunteers gathered online research, such as news articles and government records, on 10 common surveillance technologies and two different types of surveillance command centers. 

By clicking any point on the map, you will get the name of an agency and a description of the technology. If you toggle the interactive legend, you can see how each technology is spreading across the country. You can also search a simple-to-use text version of the database of all the research, including links to news articles or documents that confirm the existence of the technology in that region. 

Who Has Your Face?

Half of all adults in the United States likely have their image in a law enforcement facial recognition database, according to a 2016 report from the Center on Privacy & Technology at Georgetown Law. Today, that number is probably higher. But what about your face? 

Face recognition is a form of biometric surveillance that uses software to automatically identify or track someone based on their physical characteristics. People are subjected to face recognition in hundreds of cities around the country. The government has a number of uses for the technology, from screening passengers at airports to identifying protesters caught on camera. 

Who Has Your Face? is a short quiz that allows a user to see which government agencies can access their official photographs (such as a driver's license or a mugshot) and whether investigators can apply face recognition technology to those photos.

The site doesn't collect personal information, but it does ask five basic questions, such as whether you have a driver's license and, if so, which state issued it. Based on your answers, the system will automatically generate a list of agencies that could potentially access your images. 

It also includes a resource page listing what each state’s DMV and other agencies can access. 

Spot the Surveillance

If you drove past an automated license plate reader, would you recognize it? Have you ever looked closely at the electronic devices carried by police officers? Most of the time, people might not even notice when they've walked into the frame of a surveillance device. 

Spot the Surveillance is a virtual reality experience where you will learn how to identify surveillance technology that local law enforcement agencies use.

The experience takes place in a San Francisco neighborhood, where a resident is having an interaction with two police officers. You'll look in every direction to find seven technologies being deployed. After you find each technology, you’ll learn more about how it operates and how it’s used by police. Afterwards, you can use your new skills to identify these technologies in real life in your hometown.  

Spot the Surveillance works with most VR headsets, but is also available to use on a regular web browser. There’s also a Spanish version.

Get To Know The Surveillance That’s Getting To Know You

EFF has fought back against surveillance for decades, but we need your help. What other interactive tools would you like to see? Let us know on social media or by emailing info@eff.org, so we can continue to help you protect your privacy. 

Dave Maass

Plaintiffs Continue Effort to Overturn FOSTA, One of the Broadest Internet Censorship Laws

2 months 1 week ago

Special thanks to legal intern Ross Ufberg, who was lead author of this post.

A group of organizations and individuals are continuing their fight to overturn the Allow States and Victims to Fight Online Sex Trafficking Act, known as FOSTA, arguing that the law violates the Constitution in multiple respects.

In legal briefs filed in federal court recently, plaintiffs Woodhull Freedom Foundation, Human Rights Watch, the Internet Archive, Alex Andrews, and Eric Koszyk argued that the law violates the First and Fifth Amendments, and the Constitution’s prohibition against ex post facto laws. EFF, together with Daphne Keller at the Stanford Cyber Law Center, as well as lawyers from Davis Wright Tremaine and Walters Law Group, represent the plaintiffs.

How FOSTA Censored the Internet

FOSTA led to widespread Internet censorship, as websites and other online services either prohibited users from speaking or shut down entirely. FOSTA accomplished this comprehensive censorship by making three major changes in law:

First, FOSTA creates a new federal crime for any website owner to “promote” or “facilitate” prostitution, without defining what those words mean. Organizations doing educational, health, and safety-related work, such as the Woodhull Freedom Foundation and one of the leaders of the Sex Workers Outreach Project USA (SWOP USA), fear that prosecutors may interpret advocacy on behalf of sex workers as the “promotion” of prostitution. Prosecutors may view the creation of an app that makes fieldwork safer for sex workers the same way. Now, these organizations and individuals—the plaintiffs in the lawsuit—are reluctant to exercise their First Amendment rights for fear of being prosecuted or sued.

Second, FOSTA expands potential liability for federal sex trafficking offenses by adding vague definitions and expanding the pool of enforcers. In addition to federal prosecution, website operators and nonprofits now must fear prosecution from thousands of state and local prosecutors, as well as private parties. The cost of litigation is so high that many nonprofits will simply cease exercising their free speech, rather than risk a lawsuit where costs can run into the millions, even if they win.

Third, FOSTA limits the federal immunity provided to online intermediaries that host third-party speech under 47 U.S.C. § 230 (“Section 230”). This immunity has allowed for the proliferation of online services that host user-generated content, such as Craigslist, Reddit, YouTube, and Facebook. Section 230 helps ensure that the Internet supports diverse and divergent viewpoints, voices, and robust debate, without every website owner needing to worry about being sued for their users’ speech. The removal of Section 230 protections resulted in intermediaries shutting down entire sections or discussion boards for fear of being subject to criminal prosecution or civil suits under FOSTA.

How FOSTA Impacted the Plaintiffs

In their filings asking a federal district court in Washington, D.C. to rule that FOSTA is unconstitutional, the plaintiffs describe how FOSTA has impacted them and a broad swath of other Internet users. Some of those impacts have been small and subtle, while others have been devastating.

Eric Koszyk is a licensed massage therapist who heavily relied on Craigslist’s advertising platform to find new clients and schedule appointments. Since April 2018, it’s been hard for Koszyk to supplement his family’s income with his massage business. After Congress passed FOSTA, Craigslist shut down the Therapeutic Services section of its website, where Koszyk had been most successful at advertising his services. Craigslist further prohibited him from posting his ads anywhere else on its site, despite the fact that his massage business is entirely legal. In a post about FOSTA, Craigslist said that it shut down portions of its site because the new law created too much risk. In the two years since Craigslist removed its Therapeutic Services section, Koszyk still hasn’t found a way to reach the same customer base through other outlets. His income is less than half of what it was before FOSTA.

Alex Andrews, a national leader in fighting for sex worker rights and safety, has had her activism curtailed by FOSTA. As a board member of SWOP USA, Andrews helped lead its efforts to develop a mobile app and website that would have allowed sex workers to report violence and harassment. The app would have included a database of reported clients that workers could query before engaging with a potential client, and would notify others nearby when a sex worker reported being in trouble. When Congress passed FOSTA, Alex and SWOP USA abandoned their plans to build this app. SWOP USA, a nonprofit, simply couldn’t risk facing prosecution under the new law.

FOSTA has also impacted a website that Andrews helped to create. The website Rate That Rescue is “a sex worker-led, public, free, community effort to help everyone share information” about organizations which aim to help sex workers leave their field or otherwise assist them. The website hosts ratings and reviews. But without the protections of Section 230, in Andrews’ words, the website “would not be able to function” because of the “incredible liability for the content of users’ speech.” It’s also likely that Rate That Rescue’s creators face criminal liability under FOSTA’s new criminal provisions because the website aims to make sex workers’ lives and work safer and easier. This could be considered to violate FOSTA’s provisions that make it a crime to promote or facilitate prostitution.

Woodhull Freedom Foundation advocates for sexual freedom as a human right, which includes supporting the health, safety, and protection of sex workers. Each year, Woodhull organizes a Sexual Freedom Summit in Washington, D.C., with the purpose of bringing together educators, therapists, legal and medical professionals, and advocacy leaders to strategize on ways to protect sexual freedom and health. There are workshops devoted to issues affecting sex workers, including harm reduction, disability, age, health, and personal safety. This year, COVID-19 has made an in-person meeting impossible, so Woodhull is livestreaming some of the events. Woodhull has had to censor its ads on Facebook and modify its programming on YouTube just to get past those companies’ heightened moderation policies in the wake of FOSTA.

The Internet Archive, a nonprofit library that seeks to preserve digital materials, faces increased risk because FOSTA has dramatically increased the possibility that a prosecutor or private citizen might sue it simply for archiving newly illegal web pages. Such a lawsuit would be a real threat for the Archive, which is the Internet’s largest digital library.

FOSTA puts Human Rights Watch in danger, as well. Because the organization advocates for the decriminalization of sex work, they could easily face prosecution for “promoting” prostitution.

Where the Legal Fight Against FOSTA Stands Now

With the case now back in district court after the D.C. Circuit Court of Appeals reversed the lower court’s decision to dismiss the suit, both sides have filed motions for summary judgment. In their filings, the plaintiffs make several arguments for why FOSTA is unconstitutional.

First, they argue that FOSTA is vague and overbroad. The Supreme Court has said that if a law “fails to give ordinary people fair notice of the conduct it prohibits,” it is unconstitutional. That is especially true when the vagueness of the law raises special First Amendment concerns.

FOSTA does just that. The law makes it illegal to “facilitate” or “promote” prostitution without defining what that means. This has led to, and will continue to lead to, the censorship of speech that is protected by the First Amendment. Organizations like Woodhull, and individuals like Andrews, are already curbing their own speech. They fear their advocacy on behalf of sex workers may constitute “promotion” or “facilitation” of prostitution.

The government argues that the likelihood of anyone misconstruing these words is remote. But some courts interpret “facilitate” to simply mean make something easier. By this logic, anything that plaintiffs like Andrews or Woodhull do to make sex work safer, or make sex workers’ lives easier, could be considered illegal under FOSTA.

Second, the plaintiffs argue that FOSTA’s Section 230 carveouts violate the First Amendment. A provision of FOSTA eliminates some Section 230 immunity for intermediaries on the Web, which means anybody who hosts a blog where third parties can comment, or any company like Craigslist or Reddit, can be held liable for what other people say.

As the plaintiffs show, all the removal of Section 230 immunity really does is squelch free speech. Without the assurance that a host won’t be sued for what a commentator or poster says, those hosts simply won’t allow others to express their opinions. As discussed above, this is precisely what happened once FOSTA passed.

Third, the plaintiffs argue that FOSTA is not narrowly tailored to the government’s interest in stopping sex trafficking. Government lawyers say that Congress passed FOSTA because it was concerned about sex trafficking. The intent was to roll back Section 230 in order to make it easier for victims of trafficking to sue certain websites, such as Backpage.com. The plaintiffs agree with Congress that there is a strong public interest in stopping sex trafficking. But FOSTA doesn’t accomplish those goals—and instead, it sweeps up a host of speech and advocacy protected by the First Amendment.

There’s no evidence the law has reduced sex trafficking. The effect of FOSTA is that traffickers who once posted to legitimate online platforms will go even deeper underground—and law enforcement will have to look harder to find them and combat their illegal activity.

Finally, FOSTA violates the Constitution’s prohibition on criminalizing past conduct that was not previously illegal. It’s what is known as an “ex post facto” law. FOSTA creates new retroactive liability for conduct that occurred before Congress passed the law. During the debate over the bill, the U.S. Department of Justice even admitted this problem to Congress—but the DOJ later promised to “pursu[e] only newly prosecutable criminal conduct that takes place after the bill is enacted.” The government, in essence, is saying to the courts, “We promise to do what we say the law means, not what the law clearly says.” But the Department of Justice cannot control the actions of thousands of local and state prosecutors—much less private citizens who sue under FOSTA based on conduct that occurred long before it became law.

* * *

FOSTA sets out to tackle the genuine problem of sex trafficking. Unfortunately, the way the law is written achieves the opposite effect: it makes it harder for law enforcement to actually locate victims, and it punishes organizations and individuals doing important work. In the process, it does irreparable harm to the freedom of speech guaranteed by the First Amendment. FOSTA silences diverse viewpoints, makes the Internet less open, and makes critics and advocates more circumspect. The Internet should remain a place where robust debate occurs, without the fear of lawsuits or jail time.

Related Cases: Woodhull Freedom Foundation et al. v. United States
Aaron Mackey

EFF Joins Coalition Urging Senators to Reject the EARN IT Act

2 months 1 week ago

Recently, EFF joined the Center for Democracy and Technology (CDT) and 26 other organizations in sending a letter to the Senate opposing the EARN IT Act (S. 3398), asking the Senate to oppose fast-tracking the bill and to vote NO on its passage.

As we have written many times before, if passed, the EARN IT Act would threaten free expression, harm innovation, and jeopardize important security protocols. We were pleased to join with other organizations that share our concerns about this harmful bill.

We sent our letter, but your Senators need to hear from you. Contact your Senators and tell them to oppose the EARN IT Act.



India McKinney

What the *, Nintendo? This in-game censorship is * terrible.

2 months 1 week ago

While many are staying at home and escaping into virtual worlds, it's natural to discuss what's going on in the physical world. But Nintendo is shutting down those conversations with its latest Switch system update (Sep. 14, 2020) by adding new terms like COVID, coronavirus and ACAB to its censorship list for usernames, in-game messages, and search terms for in-game custom designs (but not the designs themselves).

While we understand the urge to prevent abuse and misinformation about COVID-19, censoring certain strings of characters is a blunderbuss approach that is unlikely to substantially improve the conversation. As an initial matter, it is easily circumvented: our testing, shown above, confirmed that Nintendo censors coronavirus, COVID, and ACAB, but does not restrict substitutes like c0vid or a.c.a.b., nor corona and virus when written individually.
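To see why this kind of filtering is so easy to defeat, consider a minimal sketch of a substring blocklist. This is an illustration of the general technique, not Nintendo's actual implementation, and the blocklist contents here are assumptions based on the terms EFF tested:

```python
# A naive substring blocklist, similar in spirit to the filtering
# described above. (Hypothetical sketch; Nintendo's real code is unknown.)
BLOCKLIST = {"covid", "coronavirus", "acab"}

def is_blocked(message: str) -> bool:
    """Flag a message if it contains any blocklisted term as a substring."""
    lowered = message.lower()
    return any(term in lowered for term in BLOCKLIST)

# The exact terms are caught, regardless of case...
print(is_blocked("Stay safe during COVID"))  # True
# ...but trivial substitutions slip right through, because "c0vid" and
# "a.c.a.b." never contain a blocklisted substring verbatim.
print(is_blocked("Stay safe during c0vid"))  # False
print(is_blocked("a.c.a.b."))                # False
```

Because the filter matches only literal character sequences, every homoglyph, punctuation trick, or word split creates a new bypass, which is why blocklists like this tend to inconvenience ordinary speakers more than determined abusers.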

More importantly, it’s a bad idea, because these terms can be part of important conversations about politics or public health. Video games are not just for gaming and escapism, but are part of the fabric of our lives as a platform for political speech and expression.  As the world went into pandemic lockdown, Hong Kong democracy activists took to Nintendo’s hit Animal Crossing to keep their pro-democracy protest going online (and Animal Crossing was banned in China shortly after). Just as many Black Lives Matter protests took to the streets, other protesters voiced their support in-game.  Earlier this month, the Biden campaign introduced Animal Crossing yard signs which other players can download and place in front of their in-game home. EFF is part of this too—you can show your support for EFF with in-game hoodies and hats. 

Nevertheless, Nintendo seems uncomfortable with political speech on its platform. The Japanese Terms of Use prohibit in-game “political advocacy” (政治的な主張 or seijitekina shuchou), which led to a candidate for Japan’s Prime Minister canceling an in-game campaign event. But it has not expanded this blanket ban to the Terms for Nintendo of America or Nintendo of Europe.

Nintendo has the right to host its platform as it sees fit. But just because it can do this doesn’t mean it should. Nintendo also needs to recognize that it has provided a platform for political and social expression, and allow people to use words that are part of important conversations about our world, whether about the pandemic, protests against police violence, or democracy in Hong Kong.

Kurt Opsahl

Trump’s Ban on TikTok Violates First Amendment by Eliminating Unique Platform for Political Speech, Activism of Millions of Users, EFF Tells Court

2 months 1 week ago

We filed a friend-of-the-court brief—primarily written by the First Amendment Clinic at the Sandra Day O’Connor College of Law—in support of a TikTok employee who is challenging President Donald Trump’s ban on TikTok and sought a temporary restraining order (TRO). The employee contends that Trump's executive order infringes the Fifth Amendment rights of TikTok's U.S.-based employees. Our brief, which is joined by two prominent TikTok users, urges the court to consider the First Amendment rights of millions of TikTok users when it evaluates the plaintiff’s claims.

Notwithstanding its simple premise, TikTok has grown to have an important influence in American political discourse and organizing. Unlike other platforms, users on TikTok do not need to “follow” other users to see what they post. TikTok thus uniquely allows its users to reach wide and diverse audiences. That’s why the two TikTok users who joined our brief use the platform. Lillith Ashworth, whose critiques of Democratic presidential candidates went viral last year, uses TikTok to talk about U.S. politics and geopolitics. The other user, Jynx, maintains an 18+ adult-only account, where they post content that centers on radical leftist liberation, feminism, and decolonial politics, as well as the labor rights of exotic dancers.

Our brief argues that in evaluating the plaintiff’s claims, the court must consider the ban’s First Amendment implications. The Supreme Court has established that the rights set forth in the Bill of Rights work together; as a result, the plaintiff's Fifth Amendment claims are enhanced by First Amendment considerations. We say in our brief:

A ban on TikTok violates fundamental First Amendment principles by eliminating a specific type of speaking, the unique expression of a TikTok user communicating with others through that platform, without sufficient considerations for the users’ speech. Even though the order facially targets the platform, its censorial effects are felt most directly by the users, and thus their First Amendment rights must be considered in analyzing its legality.

EFF, the First Amendment Clinic, and the individual amici urge the court to adopt a higher standard of scrutiny when reviewing the plaintiff’s claims against the president. Not only are the plaintiff’s Fifth Amendment liberties at stake, but millions of TikTok users have First Amendment freedoms at stake. The Fifth Amendment and the First Amendment are each critical in securing life, liberty, and due process of law. When these amendments are examined separately, they each deserve careful analysis; but when the interests protected by these amendments come together, a court should apply an even higher standard of scrutiny.

The hearing on the TRO scheduled for tomorrow was canceled after the government promised the court that it did not intend to include the payment of wages and salaries within the executive order's definition of prohibited transactions, thus addressing the plaintiff's most urgent claims.





Nathaniel Sobel

Things to Know Before Your Neighborhood Installs an Automated License Plate Reader

2 months 2 weeks ago

Every week EFF receives emails from members of homeowners’ associations wondering if their Homeowners’ Association (HOA) or Neighborhood Association is making a smart choice by installing automated license plate readers (ALPRs). Local groups often turn to license plate readers thinking that they will protect their community from crime. But the truth is, these cameras—which record every license plate coming in and out of the neighborhood—may create more problems than they solve. 

The False Promise of ALPRs

Some members of a community think that, whether or not they’ve experienced crime in their neighborhood, increased surveillance is needed to keep it safe. This is part of a larger nationwide trend: people’s fear of crime is incredibly high and rising, despite the fact that crime rates in the United States are low by historical standards. 

People imagine that if a crime is committed, an association member can hand over to police the license plate numbers of everyone who drove past a camera around the time the crime is believed to have been committed. But this will lead to innocent people becoming suspects simply because they happened to drive through a specific neighborhood. For some communities, this might mean hundreds of cars end up under suspicion. 

Also, despite what ALPR vendors like Flock Safety and Vigilant Solutions claim, there is no real evidence that ALPRs reduce crime. ALPR vendors, like other surveillance salespeople, operate on the assumption that surveillance will reduce crime, either by making would-be criminals aware of the surveillance in hopes it will be a deterrent, or by using the technology to secure convictions of people who have allegedly committed crimes in the neighborhood. However, there is little empirical evidence that such surveillance reduces crime. 


ALPRs do, however, present a host of other potential problems for people who live, work, or commute in a surveilled area. 

The Danger ALPRs Present To Your Neighborhood

ALPRs are billed as neighborhood watch tools that allow a community to record which cars enter and leave, and when. They essentially turn any neighborhood into a gated community by casting suspicion on everyone who comes and goes. And some of these ALPR systems (including Flock’s) can be programmed to allow all neighbors to have access to the records of vehicle comings and goings. But driving through a neighborhood should not lead to suspicion. There are thousands of reasons why a person might be passing through a community, but ALPRs allow anyone in the neighborhood to decide who belongs and who doesn’t. Whatever motivates that individual, whether racial bias, frustration with another neighbor, or even a disagreement among family members, could be used in conjunction with ALPR records to implicate someone in a crime, or in any variety of other legal-but-uncomfortable situations. 

The fact that your car passes a certain stop sign at a particular time of day may not seem like invasive information. But you can actually tell a lot of personal information about a person by learning their daily routines—and when they deviate from those routines. If a person’s car stops leaving in the morning, a nosy neighbor at the neighborhood association could infer that they may have lost their job. If a married couple’s cars are never at the house at the same time, neighbors could infer relationship discord. These ALPR cameras also give law enforcement the ability to learn the comings and goings of every car, effectively making it impossible for drivers to protect their privacy. 

These dangers are only made worse by the broad dissemination of this sensitive information. It goes not just to neighbors, but also to Flock employees, and even your local police. It might also go to hundreds of other police departments around the country through Flock’s new and aptly-named TALON program, which links ALPRs around the country. 

ALPR Devices Lack Oversight

HOAs and Neighborhood Associations are rarely equipped or trained to make responsible decisions when it comes to invasive surveillance technology. After all, these people are not bound by the oversight that sometimes accompanies government use of technology; they’re your neighbors. While police are subject to legally binding privacy rules (like the Fourth Amendment), HOA members are not. Neighbors could, for instance, use ALPRs to see when a neighbor comes home from work every day. They could see if a house has a regular visitor and what time that person arrives and leaves. In San Antonio, one HOA board member was asked what could be done to prevent someone with access to the technology from obsessively following the movements of specific neighbors. She had never considered that possibility: "Asked whether board members had established rules to keep track of who searches for what and how often, Cronenberger said it hadn’t dawned on her that someone might use the system to track her neighbors’ movements.” 

Machine Error Endangers Black Lives

Like all machines, ALPRs make mistakes. And these mistakes can endanger people’s lives and physical safety. For example, an ALPR might erroneously conclude that a passing car’s license plate matches the plate of a car on a hotlist of stolen cars. This can lead police to stop the car and detain the motorists. As we know, these encounters can turn violent or even deadly, especially when the misidentified cars are driven by Black motorists. 

This isn’t a hypothetical scenario. Just last month, a false alert from an ALPR led police to stop a Black family, point guns at them, and force them to lie on their bellies in a parking lot—including their children, aged six and eight. Tragically, this is not the first time that police have aimed a gun at a Black motorist because of a false ALPR hit.

Automated License Plate Reader Abuses by Police Foreshadow Abuses by Neighborhoods 

Though police have used these tools for decades, communities have only recently had the ability to install their own ALPR systems. In that time, EFF and many others have criticized both ALPR vendors and law enforcement for their egregious abuses of the data collected. 


A February 2020 California State Auditor’s report on four jurisdictions’ use of this tech raised several significant concerns. Most of the data collected is not related to individuals suspected of crimes. Many agencies did not implement privacy-protective oversight measures, despite laws requiring them. Several agencies did not have documented usage or retention policies. Many lack guarantees that the stored data is appropriately secure. Several did not adequately confirm that entities they shared data with had a right to receive that information. And many did not have appropriate safeguards for users accessing the data. 

California agencies aren’t unique: a state audit in Vermont found that 11% of ALPR searches violated state restrictions on when cops can and can't look at the data. Simply put: police abuse this technology regularly. And unfortunately, neighborhood users will likely do the same. 

In fact, the ease with which this data can be shared is only increasing. Vigilant Solutions, a popular vendor of police ALPR tech, shares this data among thousands of departments via its LEARN database. Flock, a vendor that aims to offer this technology to neighborhoods, has just announced a new nationwide partnership that allows communities to share footage and data with law enforcement anywhere in the country, vastly expanding its reach. While Flock does include several safeguards that Vigilant Solutions does not, such as encrypted video and 30-day deletion policies, many potential abuses remain.

Additionally, some ALPR systems can automatically flag cars that don’t look a certain way—from rusted vehicles to cars with dents or poor paint jobs—endangering anyone who might not feel the need (or have the income required) to keep their car in perfect shape. These “vehicle fingerprints” might flag not just a particular license plate, but “a blue Honda CRV with damage on the passenger side door and a GA license plate from Fulton County.” Rather than monitoring specific vehicles that come in and out of a neighborhood via their license plates, “vehicle fingerprint” features could create a troubling dragnet style of monitoring. Just because a person is driving a car damaged in an accident, or a long winter has left a person’s car rusty, does not mean they are worthy of suspicion or undue police or community harassment.  

Some ALPRs are even designed to search for certain bumper stickers, which could reveal information on the political or social views of the driver. While they aren’t in every ALPR system, and some are just planned, all of these features taken together increase the potential for abuse far beyond the dangers of collecting license plate numbers alone. 

What You Can Tell Your Neighbors if You’re Concerned 

Unfortunately, ALPR devices are not the first piece of technology to exploit irrational fear of crime in order to expand police surveillance and spy on neighbors and passersby. Amazon’s surveillance doorbell Ring currently has over 1,300 partnerships with individual police departments, which allow departments to directly request footage from an individual’s personal surveillance camera without presenting a warrant. ALPRs are at least as dangerous: they track our comings and goings; the data can indicate common travel patterns (or unique ones); and because license plates are required by law, there is no obvious way to protect yourself.

If your neighborhood is considering this technology, you have options. Remind your neighbors that it collects data on everyone, regardless of suspicion. They may think that only people with something to hide need to worry—but hide what? And from whom? You may not want your neighbor knowing what time you leave your neighborhood in the morning and get back at night. You may also not want the police to know who visits your home and for how long. While the intention is to protect the neighborhood from crime, introducing this kind of surveillance may also end up incriminating your neighbors and friends for reasons you know nothing about. 

You can also point out that ALPRs have not been shown to reduce crime. Likewise, consider sending around the California State Auditor’s report on abuses by law enforcement. And if the technology is installed, you can (and should) limit the amount of data that’s shared with police, automatically or manually. Remind people of the type of information ALPRs collect and what your neighbors can infer about your private life. 

If you drive a car, you’re likely being tracked by ALPRs, at least sometimes. But that doesn’t mean your neighborhood should contribute to the surveillance state. Everyone ought to have a right to pass through a community without being tracked, and without accidentally revealing personal details about how they spend their day. Automatic license plate readers installed in neighborhoods are a step in the wrong direction. 

Jason Kelley

Researchers Targeting AI Bias, Sex Worker Advocate, and Global Internet Freedom Community Honored at EFF’s Pioneer Award Ceremony

2 months 2 weeks ago
Virtual Ceremony October 15 to Honor Joy Buolamwini, Dr. Timnit Gebru, and Deborah Raji; Danielle Blunt; and the Open Technology Fund (OTF) Community

San Francisco – The Electronic Frontier Foundation (EFF) is honored to announce the 2020 Barlow recipients at its Pioneer Award Ceremony: artificial intelligence and racial bias experts Joy Buolamwini, Dr. Timnit Gebru, and Deborah Raji; sex worker activist and tech policy and content moderation researcher Danielle Blunt; and the global Internet freedom organization Open Technology Fund (OTF) and its community.

The virtual ceremony will be held October 15 from 5:30 pm to 7 pm PT. The keynote speaker for this year’s ceremony will be Cyrus Farivar, a longtime technology investigative reporter, author, and radio producer. The event will stream live and free on Twitch, YouTube, Facebook, and Twitter, and audience members are encouraged to give a $10 suggested donation. EFF is supported by small donors around the world and you can become an official member at https://eff.org/PAC-join.

Joy Buolamwini, Dr. Timnit Gebru, and Deborah Raji’s trailblazing academic research on race and gender bias in facial analysis technology laid the groundwork for a national movement—and a growing number of legislative victories—aimed at banning law enforcement’s use of flawed and overbroad face surveillance in American cities. The trio collaborated on the Gender Shades series of papers based on Buolamwini’s MIT thesis, revealing alarming bias in AI services from companies like Microsoft, IBM, and Amazon. Their subsequent internal and external advocacy spans Stanford, University of Toronto, Black in AI, Project Include, and the Algorithmic Justice League. Buolamwini, Gebru, and Raji are bringing light to the profound impact of face recognition technologies on communities of color, personal privacy and free expression, and the fundamental freedom to go about our lives without having our movements and associations covertly monitored and analyzed.

Danielle Blunt is one of the co-founders of Hacking//Hustling, a collective of sex workers and accomplices working at the intersection of tech and social justice to interrupt state surveillance and violence facilitated by technology. A professional NYC-based Femdom and Dominatrix, Blunt researches sex work and equitable access to technology from a public health perspective. She is one of the lead researchers of Hacking//Hustling's “Erased: The Impact of FOSTA-SESTA and the Removal of Backpage” and “Posting to the Void: CDA 230, Censorship, and Content Moderation,” studying the impact of content moderation on the movement work of sex workers and activists. She is also leading organizing efforts around sex worker opposition to the EARN IT Act, which threatens access to encrypted communications, a tool that many in the sex industry rely on for harm reduction, and would also increase platform policing of sex workers and queer and trans youth. Blunt is on the advisory board of Berkman Klein's Initiative for a Representative First Amendment (IfRFA) and the Surveillance Technology Oversight Project in NYC. She enjoys redistributing money from institutions, watching her community thrive, and “making men cry.”

The Open Technology Fund (OTF) has fostered a global community and provided support—both monetary and in-kind—to more than 400 projects that seek to combat censorship and repressive surveillance. The OTF community has helped more than two billion people in over 60 countries access the open Internet more safely and advocate for democracy. OTF earned trust and built community through its open source ethos, transparency, and a commitment to independence from its funder, the U.S. Agency for Global Media (USAGM), and helped fund several technical projects at EFF. However, President Trump recently installed a new CEO for USAGM, who immediately sought to replace OTF's leadership and board and to freeze the organization's funds—threatening to leave many well-established global freedom tools, their users, and their developers in the lurch. Since then, OTF has made some progress in regaining control, but it remains at risk and, as of this writing, USAGM is still withholding critical funding. With this award, EFF is honoring the entire OTF community for their hard work and dedication to global Internet freedom and recognizing the need to protect this community and ensure its survival despite the current political attacks.

“One of EFF’s guiding principles is that technology should enhance our rights and freedoms instead of undermining them,” said EFF Executive Director Cindy Cohn. “All our honorees this year are on the front lines of this important work—striving to ensure that no matter where you are from, what you look like, or what you do for a living, the technology you rely on makes your life better and not worse. While most technology is here to stay, a technological dystopia is not inevitable. Used thoughtfully, and supported by the right laws and policies, technology can and will make the world better. We are so proud that all of our honorees are joining us to fight for this together.”

Awarded every year since 1992, EFF’s Pioneer Award Ceremony recognizes the leaders who are extending freedom and innovation on the electronic frontier. Previous honorees have included Malkia Cyril, William Gibson, danah boyd, Aaron Swartz, and Chelsea Manning. Sponsors of the 2020 Pioneer Award ceremony include Dropbox; No Starch Press; Ridder, Costa, and Johnstone LLP; and Ron Reed.

To attend the virtual Pioneer Awards ceremony:

For more on the Pioneer Award ceremony:

Contact: Rebecca Jeschke, Media Relations Director and Digital Rights Analyst, press@eff.org
Rebecca Jeschke

EFF to EU Commission on Article 17: Prioritize Users’ Rights, Let Go of Filters

2 months 2 weeks ago

During the Article 17 (formerly #Article13) discussions about the availability of copyright-protected works online, we fought hand-in-hand with European civil society to avoid all communications being subjected to interception and arbitrary censorship by automated upload filters. However, by turning tech companies and online services operators into copyright police, the final version of the EU Copyright Directive failed to live up to the expectations of millions of affected users who fought for an Internet in which their speech is not automatically scanned, filtered, weighed, and measured.

Our Watch Has Not Ended

EU "Directives" are not automatically applicable. EU member states must “transpose” the directives into national law. The Copyright Directive includes some safeguards to prevent the restriction of fundamental free expression rights, ultimately requiring national governments to balance the rights of users and copyright holders alike. At the EU level, the Commission has launched a Stakeholder Dialogue to support the drafting of guidelines for the application of Article 17, which must be implemented in national laws by June 7, 2021. EFF and other digital rights organizations have a seat at the table, alongside rightsholders from the music and film industries and representatives of big tech companies like Google and Facebook.

During the stakeholder meetings, we made a strong case for preserving users’ rights to free speech, making suggestions for averting a race among service providers to over-block user content. We also asked the EU Commission to share the draft guidelines with rights organizations and the public, and allow both to comment on and suggest improvements to ensure that they comply with European Union civil and human rights requirements.

The Targeted Consultation: Don’t Experiment With User Rights

The Commission has partly complied with EFF and its partners’ request for transparency and participation. The Commission launched a targeted consultation addressed to members of the EU Stakeholder Group on Article 17. Our response focuses on mitigating the dangerous consequences of the Article 17 experiment by focusing on user rights, specifically free speech, and by limiting the use of automated filtering, which is notoriously inaccurate.

Our main recommendations are:

  • Produce a non-exhaustive list of service providers that are excluded from the obligations under the Directive. Service providers not listed might not fall under the Directive’s rules, and would have to be evaluated on a case-by-case basis;
  • Ensure that the platforms’ obligation to show best efforts to obtain rightsholders’ authorization and ensure infringing content is not available is a mere due diligence duty and must be interpreted in light of the principles of proportionality and user rights exceptions;
  • Recommend that Member States not mandate the use of technology or impose any specific technological solutions on service providers in order to demonstrate “best efforts”;
  • Establish a requirement to avoid general user (content) monitoring. Spell out that the implementation of Art 17 should never lead to the adoption of upload filters and hence general monitoring of user content;
  • State that the mere fact that content recognition technology is used by some companies does not mean that it must be used to comply with Art 17. Quite the opposite is true: automated technologies to detect and remove content based on rightsholders’ information may not be in line with the balance sought by Article 17;
  • Safeguard the diversity of platforms and not put disproportionate burden on smaller companies, which play an important role in the EU tech ecosystem;
  • Establish that content recognition technology cannot assess whether uploaded content is infringing or covered by a legitimate use. Filter technologies may serve as assistants, but can never replace a (legal) review by a qualified human;
  • Filter technology likewise cannot assess whether user content is likely to infringe copyright;
  • If you believe that filters work, prove it. The Guidance should contain a recommendation to create and maintain test suites if member states decide to establish copyright filters. These suites should evaluate the filters' ability to correctly identify both infringing materials and non-infringing uses. Filters should not be approved for use unless they can meet this challenge;
  • Complaint and redress procedures are not enough. Fundamental rights must be protected from the start and not only after content has been taken down;
  • The Guidance should address the very problematic relationship between the use of automated filter technologies and privacy rights, in particular the right not to be subject to a decision based solely on automated processing under the GDPR.
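For illustration, the kind of test suite recommended above could be sketched as follows: a filter evaluated against labeled uploads, reporting how often it blocks lawful material (false positives) and misses infringing material (false negatives). The filter, the samples, and the string-matching heuristic here are invented stand-ins, not any real system:

```python
# Hypothetical evaluation harness for a copyright filter. The naive
# filter below is a deliberately crude stand-in so the harness has
# something to measure; all names and samples are illustrative.

def naive_filter(upload: str) -> bool:
    """Stand-in filter: blocks anything containing a known fingerprint string."""
    return "FINGERPRINT" in upload

def evaluate(filter_fn, labeled_samples):
    """Return (false_positive_rate, false_negative_rate) on labeled data.

    labeled_samples: list of (upload, is_infringing) pairs, where
    is_infringing is the ground-truth label.
    """
    fp = fn = positives = negatives = 0
    for upload, is_infringing in labeled_samples:
        blocked = filter_fn(upload)
        if is_infringing:
            positives += 1
            if not blocked:
                fn += 1  # infringing content that slipped through
        else:
            negatives += 1
            if blocked:
                fp += 1  # lawful content wrongly blocked (quotation, parody, etc.)
    return fp / negatives, fn / positives

samples = [
    ("FINGERPRINT full movie rip", True),            # infringing
    ("review quoting FINGERPRINT briefly", False),   # lawful quotation
    ("home video, no match", False),                 # clearly lawful
    ("re-encoded full album", True),                 # infringing, evades naive match
]
fpr, fnr = evaluate(naive_filter, samples)
print(fpr, fnr)  # the naive filter both over-blocks and under-blocks: 0.5 0.5
```

A filter that cannot keep both rates low on such a suite is exactly the kind of technology the guidance should not treat as adequate for Article 17 compliance.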
Christoph Schmon

Spain’s New Who Defends Your Data Report Shows Robust Privacy Policies But Crucial Gaps to Fill

2 months 2 weeks ago

ETICAS Foundation’s second ¿Quien Defiende Tus Datos? (Who Defends Your Data?) report on data privacy practices in Spain shows how Spain’s leading Internet and mobile app providers are making progress in being clear about how users' personal data is being protected. Providers are disclosing what information is being collected, how long it’s being kept, and who it’s shared with. Compared to Eticas' first report on Spain in 2018, there was significant improvement in the number of companies informing users about how long they store data as well as notifying users about privacy policy changes.

The report evaluating policies at 13 Spanish Internet companies also indicates that a handful are taking seriously their obligations under the new General Data Protection Regulation (GDPR), the European Union’s data privacy law that sets tough standards for protecting customers’ private information and gives users more information about and control over their private data. The law went into effect in May 2018.

But the good news for most of the companies pretty much stops there. All but the largest Internet providers in Spain are seriously lagging when it comes to transparency around government demands for user data, according to the Eticas report released today.

While Orange commits to notify users about government requests and both Vodafone and Telefónica clearly state the need for a court order before handing users’ communications to authorities, other featured companies have much to improve. They are failing to provide information about how they handle law enforcement requests for user data, whether they require judicial authorization before giving personal information to police, or if they notify users as soon as legally possible that their data was released to law enforcement. The lack of disclosure about their practices leaves an open question about whether they have users’ backs when the government wants personal data.

The format of the Eticas report is based on EFF’s Who Has Your Back project, which was launched nine years ago to shine a light on how well U.S. companies protect user data, especially when the government wants it. Since then the project has expanded internationally, with leading digital rights groups in Europe and the Americas evaluating data privacy practices of Internet companies so that users can make informed choices about to whom they should trust their data. Eticas Foundation first evaluated Spain’s leading providers in 2018 as part of a region-wide initiative focusing on Internet privacy policies and practices in Iberoamerica. 

In today’s report, Eticas evaluated 13 companies, including six telecom providers (Orange, Ono-Vodafone, Telefónica-Movistar, MásMóvil, Euskaltel, and Somos Conexión), five home sales and rental apps (Fotocasa, Idealista, Habitaclia, Pisos.com, and YaEncontré), and two apps for selling second-hand goods (Vibbo and Wallapop). The companies were assessed against a set of criteria covering policies for data collection, handing data over to law enforcement agencies, notifying customers about government data requests, publishing transparency reports, and promoting user privacy. Companies were awarded stars based on their practices and conduct. In light of the adoption of the GDPR, this year’s report assessed companies against several new criteria, including providing information on how to contact a company data protection officer, using private data to automate decision making without human involvement and build user profiles, and practices regarding international data transfers. Eticas also looked at whether they provide guidelines, tailored to local law, for law enforcement seeking user data.

The full study is available in Spanish, and we outline the main findings below. 

An Overview of Companies' Commitments and Shortcomings

Telefonica-Movistar, Spain’s largest mobile phone company, was the most highly rated, earning stars in 10 out of 13 categories. Vodafone was a close second, with nine stars. There was a big improvement overall in companies providing information about how long they keep user data—all 13 companies reported doing so this year, compared to only three companies earning partial credit in 2018. The implementation of the GDPR has had a positive effect on privacy policies at only some companies, the report shows. While most companies are providing contact information for data protection officials, only four—Movistar, Fotocasa, Habitaclia, and Vibbo—provide information about their practices for using data-based, nonhuman decision making, and profiling, and six—Vodafone, MásMóvil, Pisos.com, Idealista, Yaencontré, and Wallapop—provide information only about profiling. 

Only Telefónica-Movistar and Vodafone disclose information to users about their policies for giving personal data to law enforcement agencies. Telefónica-Movistar is vague in its data protection policy, stating only that it will hand user data to police in accordance with the law. However, the company’s transparency report shows that it lets police intercept communications only with a court order or in emergency situations. For metadata, the information provided is generic: it mentions only the legal framework and the authorities entitled to request it (judges, prosecutors, and the police).

Vodafone’s privacy policy says data will be handed over “according to the law and according to an exhaustive assessment of all legal requirements”. While its data protection policy does not provide information in a clear way, there’s an applicable legal framework report that describes both the framework and how the company interprets it, and states that a court order is needed to provide content and metadata to law enforcement.

Orange Spain is the only company that says it’s committed to telling users when their data is released to law enforcement unless there’s a legal prohibition against it. Because the company didn’t make clear it will do so as soon as there's no legal barrier, it received partial credit. Euskaltel and Somos Conexión, smaller ISPs, have stood out in promoting user privacy through campaigns or defending users in court. On the latter front, Euskaltel challenged a judicial order demanding the company reveal IP addresses in a commercial claim. After finally handing them over once the ruling was upheld by a higher court, Euskaltel filed a complaint with the Spanish data protection authority over a possible violation of purpose-limitation safeguards, given how the claimant used the data.

The report shows that, in general, the five home apps (Fotocasa, Idealista, Habitaclia, Pisos.com, and YaEncontré) and two second-hand goods sales apps (Vibbo and Wallapop) have to step up their privacy information game considerably. They received no stars in fully nine out of the 13 categories evaluated. This should give users pause and, in turn, motivate these companies to increase transparency about their data privacy practices so that the next time they are asked if they protect customers’ personal data, they have more to show.

Through ¿Quien Defiende Tus Datos? reports, local organizations in collaboration with EFF have been comparing companies' commitments to transparency and users' privacy in different Latin American countries and Spain. Earlier this year, Fundación Karisma in Colombia, ADC in Argentina, and TEDIC in Paraguay published new reports. New editions in Panamá, Peru, and Brazil are also on their way to spot which companies stand with their users and those that fall short of doing so. 

Karen Gullo

Workplace Surveillance in Times of Corona

2 months 2 weeks ago

With numbers of COVID-19 infections soaring again in the United States and around the world, we have to learn how to manage the pandemic’s long-term ramifications for our economies. As people adjust to minimizing the risk of infection in everyday settings, one critical context is work. Even though millions have shifted to working from home over the past months, remote work is not possible in every industry. While the pandemic has had a critical disruptive effect on work and employment virtually everywhere in the world, it has not affected everyone in the same ways. The International Labor Organization notes that the current crisis significantly affects women, workers in precarious situations who lack access to health care or have limited social security benefits, and informal workers, who work jobs that are not taxed or registered by the government. In Latin America, 60% of workers are considered informal, with 58% of informal workers living in economic vulnerability on 13 U.S. dollars or less per day, or in poverty on less than 5.5 U.S. dollars per day. Many have no choice but to work outside the home. This can mean putting their health and livelihoods on the line, especially in countries with insufficient public health care or unemployment programs.

As businesses strive to re-open, and many workers depend on them doing so, many employers are looking to experimental technologies to navigate the risk of infections among their workforce. Over the past months, dozens of new apps, wearables, and other technologies have sought to help mitigate the risks of COVID at work, not counting the many pre-existing workplace technologies already in use for other purposes. Some technologies seek to trace the proximity of one person to another, to estimate whether they have been less than approximately six feet (or two meters) apart for a sufficient time. This data can be used to notify workers of potential exposures to COVID. Decentralized Bluetooth proximity is the most promising approach for technology-assisted exposure notification that minimizes privacy risks. But while some employers aim for that goal, others are using apps that track workers’ individualized phone location data with GPS. GPS data is extremely sensitive, especially when it captures worker movements outside the workplace, and it is insufficiently granular to identify when two co-workers were close enough together to transmit the virus.
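To make the distinction concrete, the proximity-based approach boils down to estimating distance from Bluetooth signal strength and checking whether close contact lasted long enough. The sketch below is a deliberately simplified illustration: the path-loss constants and the 2-meter/15-minute thresholds are assumptions for the example, and real systems such as the Apple/Google Exposure Notification framework add rotating identifiers, calibration, and attenuation buckets on top of this idea:

```python
# Simplified sketch of the distance/duration logic behind Bluetooth
# exposure estimation. The tx_power and path-loss values, and the
# 2 m / 15 min thresholds, are illustrative assumptions, not a spec.

def rssi_to_distance(rssi_dbm: float, tx_power_dbm: float = -59.0,
                     path_loss_exponent: float = 2.0) -> float:
    """Estimate distance in meters from received signal strength using a
    log-distance path-loss model (very rough in real environments)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

def exposure_minutes(readings, threshold_m: float = 2.0) -> int:
    """Count minutes in which the estimated distance was within threshold.

    readings: list of (minute, rssi_dbm) pairs, one sample per minute.
    """
    return sum(1 for _, rssi in readings
               if rssi_to_distance(rssi) <= threshold_m)

# 20 one-minute samples: a strong signal (~1 m) for the first 16 minutes,
# then a weak signal (~10 m) for the remaining 4 minutes.
readings = [(m, -59.0) for m in range(16)] + [(m, -79.0) for m in range(16, 20)]
minutes = exposure_minutes(readings)
print(minutes >= 15)  # True: 16 close-contact minutes would trigger a notification
```

Note what this logic never needs: the workers' absolute locations. That is the privacy advantage over GPS-based tracking, which records where people actually went.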

Other companies ask employees to submit daily symptom checks to their employers. Some checks may be as simple as one or two yes/no questions, while others collect more granular symptom data. The more information a company collects, the greater the risk that it can be used to detect conditions, or side effects of treatments, that have nothing to do with COVID-19. This is an issue because many companies are not subject to the privacy protections in the 1996 Federal Health Insurance Portability and Accountability Act (“HIPAA”). HIPAA has a very limited scope because protections for health information in the U.S. depend on who holds the data. In general, only data created or maintained by health plans, health care clearinghouses, health care providers that conduct certain health care transactions electronically, and their business associates have HIPAA protections. Data collected by any other entity, such as an employer, usually does not. Under the EU General Data Protection Regulation (GDPR), an employee's personal data concerning their health includes all data about their “health status (...) which reveals information relating to the past, current or future physical or mental health status”. People’s rights flow with their data, and the European Union has always treated such personal data as sensitive, with stringent limitations. In short, many of these workplace tools seriously undermine employees’ privacy and other fundamental rights, collecting information in ways that give employees little protection.

Health Surveys and Contact Tracing Apps

One common category of technology to mitigate COVID-19 at work are apps that prompt workers to report information about their health status. One is ProtectWell, developed by Microsoft in cooperation with United Health, a for-profit health care company located in Minnesota. Urging its prospective users not to keep “life on hold,” ProtectWell allows organizations to build custom health surveys. It also offers Microsoft’s healthcare bot to help triage which symptoms are most concerning. When users are considered to be at risk, employers can direct them to undergo a testing process that will report the results directly back to the employer. ProtectWell’s privacy policy clarifies, as it should, that any information disclosed to the app is not considered health information as defined in HIPAA, and hence not protected as such. The privacy policy further allows United Health to share test results and responses to symptom surveys with a user’s employer, without requiring the worker’s consent. While both Microsoft and United Health plan on deploying the app for their workforces, it is not clear whether their employees have a choice in that, or how widely other organizations have taken up the app. But ProtectWell evokes many of the privacy concerns related to workplace wellness programs. Many workplaces offer wellness programs meant to incentivize employees to participate in health screenings or fitness programs. Workers often face the difficult choice of forgoing certain benefits by not participating, or giving their employers access to potentially sensitive health data which can be abused in a myriad of ways.  

Another example is Check-in, a suite of products developed and marketed by PricewaterhouseCoopers (PwC). Noting that “83% of companies do not have processes and systems in place to track all of their workforces,” PwC offers customers an app that combines location tracking using GPS with a tool that monitors employees’ productivity. Once downloaded, the app activates WiFi and Bluetooth capabilities to keep track of which workers have been in close contact, and uses the phones’ GPS signals to determine when they are at the company’s premises. As the company itself notes, “app-based contact tracing can result in processing more data than is needed for the intended purpose of notifying affected individuals.” GPS data, in particular, can expose where a worker has been and what they have been doing, both inside and outside the office.

PwC does not provide detailed information about the location tracking capabilities of the app. A spokesman explains that the data collected is made available to managers to help trace workers who might have been in proximity to a COVID patient. PwC has not shown that employees consent to such use before the app shares their health data with their employers. Even if the policy requires this, such consent can be questionable, since workers—with their livelihoods at stake—may not exercise real choice when their employer tells them to install the app or release personal data. In the European Union, under the GDPR, consent can't be a valid legal ground for processing the data when the employee feels compelled to consent or would endure negative consequences for refusing. Employers may find another legal basis to process employees’ health data, such as legitimate interest. However, legitimate interest may not be available if the employer’s interests “are overridden by the interests or fundamental rights and freedoms of the data subject which require protection of personal data.” Such an assessment might need to be done case by case.

Beyond its location and proximity tracking features, the app also includes a feature named “status connect” which allows employers to check in with their workers to understand factors that may inhibit their productivity on a given day. Designed to “spot productivity blockers for remote workers,” Status Connect offers employers access to a trove of sensitive information regarding their employees’ health, location (remote or on-site), and productivity. According to reports, PwC is currently testing the suite. 

Blackline Safety, a Canadian company, has found another approach to location tracking by combining its “intrinsically safe” G7C wearable device (designed to detect gas leakages) and a smartphone app to supervise ‘lone’ workers. Blackline is thus an example of companies repurposing existing technology for COVID purposes, with questionable results. Blackline’s G7C wearable uses GPS tracking to locate its wearer. When using the product, “employee location data streams to the Blackline Safety Cloud,” allowing companies to “immediately retrace [an] individual's steps” in order to see whom they may have been in contact with. This tool may be appropriate for some non-COVID purposes, such as promoting safety for employees who work in remote or hazardous environments. Still, GPS data is too sensitive, too prone to abuse, and not effective enough to serve as the basis for COVID exposure notification.

Enforcing Social Distancing 

Besides mobile apps, employers can also deploy hardware in their quest to control COVID infections in their organization. Several companies have developed machine vision software designed to augment existing camera systems to monitor people's compliance with social distancing rules. Smartvid.io, a company prominent in the construction industry, claims that its technology can help organizations identify and log the numbers of people not adhering to social distancing or not wearing protective masks. The software automatically generates reports to help managers “reward COVID-19 safety practices.” It is unclear whether and how Smartvid.io has access to the data generated by cameras equipped with its software, which could be used to collect detailed logs of workers’ locations, productivity levels, and even with whom they socialize at work. 

Based in Pune, India, Glimpse Analytics is another company that claims to help employers implement health guidelines. Like Smartvid, its ‘One Glimpse Edge’ device connects with pre-existing CCTV cameras and triggers alerts when rooms reach maximum occupancy, or when individuals appear to be too close or fail to wear masks. The software also ‘tracks’ housekeeping staff tasked with cleaning workspaces. While Glimpse Analytics maintains that its software ensures people’s privacy, as it does not recognize faces, and that all data is encrypted and processed locally, it enables sweeping workplace surveillance, amassing large volumes of sensitive data without requiring workers’ consent.

Likewise, Amazon recently introduced its ‘Distance Assistant.’ The software, which was made open source, aims to monitor workers’ distance to implement social distancing guidelines. Hooked up to cameras, sensors, and a TV screen, the assistant is meant to give instant visual feedback when workers are too close to each other. Amazon has deployed the software, which businesses and individuals can access free of charge, across several of its buildings. Besides the question of just how useful this piece of technology can be in keeping workers safe, it is unclear how the captured data is stored, used, and shared, and what steps Amazon is taking to maintain workers’ privacy. Data about workers’ movement patterns could likely be abused to provide managers with information about which employees associate with each other. 
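Stripped of the computer-vision layer, distance monitors like those described above reduce to a pairwise distance check over estimated positions of detected people. That also shows why the output is sensitive: the same check yields a who-was-near-whom graph. The coordinates and the 2-meter threshold in this sketch are hypothetical:

```python
# Minimal sketch of the distance-checking core of a camera-based social
# distancing monitor. The hard part in real products (estimating floor
# positions from video) is skipped; worker IDs and coordinates here are
# hypothetical, and 2 m is the usual distancing guideline.
from itertools import combinations
import math

def too_close(positions, threshold_m: float = 2.0):
    """Return pairs of worker IDs whose estimated positions are within
    threshold_m of each other. positions: {worker_id: (x_m, y_m)}."""
    return [(a, b) for a, b in combinations(sorted(positions), 2)
            if math.dist(positions[a], positions[b]) < threshold_m]

# Two workers 1.5 m apart, one far away across the floor.
positions = {"w1": (0.0, 0.0), "w2": (1.5, 0.0), "w3": (10.0, 10.0)}
print(too_close(positions))  # [('w1', 'w2')]
```

Logged over weeks, these flagged pairs amount to an association graph of the workforce, which is precisely the union-organizing and privacy risk discussed below.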


Purveyors of a variety of new and repurposed surveillance technologies seek to help employers mitigate the risks of workplace COVID infections. But many of these technologies pose severe threats to workers’ privacy and other fundamental rights. In particular, a technology that creates graphs of interactions between co-workers could stifle workers’ freedom to associate, even safely, and enable turnkey union-busting. Furthermore, many of these tools are untested and unproven, and may not be as effective as employers hope. While employers must do what they can to keep their workers safe, such efforts should not come at the price of undermining workers’ privacy. 

Katitza Rodriguez

EFF Tells California Supreme Court Not to Require ExamSoft for Bar Exam

2 months 2 weeks ago

This week, EFF sent a letter (pdf link) to the Supreme Court of California objecting to the required use of the proctoring tool ExamSoft for the October 2020 California Bar Exam. The letter says test takers should not be forced to give their biometric data to ExamSoft, which can use it for marketing purposes, share it with third parties, or hand it over to law enforcement, with no ability for test takers to opt out or delete the information. This remote proctoring arrangement forces Bar applicants to surrender the privacy and security of their personal biometric information, violating the California Consumer Privacy Act. EFF asked the California Bar to devise an alternative option for the five thousand or so test takers expected next month.

ExamSoft is a popular proctoring and assessment software product that purports to allow remote testing while determining whether a student is cheating. To do so, it uses various privacy-invasive monitoring techniques, such as comparing test takers’ images using facial recognition, tracking eye movement, recording patterns of keystrokes, and recording video and audio of students’ surroundings as they take the test. The data ExamSoft collects includes “facial recognition and biometric data of each individual test taker for an extended period of time, including a retina or iris scan, fingerprint, voiceprint, or scan of hand or face geometry.” Additionally, ExamSoft has access to the device’s webcam, including audio and video, and its screen, for the duration of the exam.

ExamSoft’s collection of test takers’ biometric and other personal data implicates the California Consumer Privacy Act. At a minimum, the letter states, the State Bar of California must provide a mechanism for students to opt out of the sale of their data, and to delete it, to comply with this law: 

The California Bar should clearly inform test takers of their protections under the CCPA. Before test takers are asked to use such an invasive piece of software, the California Bar should confirm that, at an absolute minimum, it has in place a mechanism to allow test takers to access their ExamSoft data, to opt out of the “sale” of their data, and to request its deletion. Students should have all of these rights without facing threat of punishment. It is bad enough that the use of ExamSoft puts the state in the regrettable position of coercing students into compromising their privacy and security in exchange for their sole chance to take the Bar Exam. It should not compound that by denying them their rights under state privacy law.

In addition to these privacy invasions, proctoring software brings with it many other potential dangers, including threats to security: vast troves of personal data have already leaked from one proctoring company, ProctorU, affecting 440,000 users. The ACLU has also expressed concerns with the software’s use of facial recognition, which will “exacerbate racial and socioeconomic inequities in the legal profession and beyond.” And lastly, this type of software has been shown to have technical issues that could cause students to experience unexpected problems while taking the Bar Exam, and comes with requirements that could harm users who cannot meet them, such as requiring a relatively new laptop and broadband speeds that many households do not have. Other states have canceled the use of proctoring software for their bar exams due to the inability to ensure a “secure and reliable” experience. California should take this into account when considering its use of proctoring software.

The entrance fee for becoming a lawyer in California should not include compromising personal privacy and security. The Bar Exam is already a nerve-wracking, anxiety-inducing test. We ask the Supreme Court of California to take seriously the risks presented by ExamSoft and pursue alternatives that do not put exam takers in jeopardy.

Jason Kelley

California Still Needs Privacy Protections for COVID Tracking Apps

2 months 2 weeks ago

Many states have launched their own versions of exposure notification or tracking apps as a part of their response to the ongoing COVID-19 pandemic. California may be poised to join them. Yet the Golden State still has not enacted any privacy standards for state COVID tracking apps, or for contracts the state may enter to deploy such programs.

This week, Colorado announced a program using the Exposure Notifications Express system. This system, newly baked into Apple’s iOS operating system, will soon also be an option on Google’s Android operating system. It allows tech users to opt in to a public health program that alerts them if they’ve been exposed to COVID-19, without requiring them to download a separate app. It is likely to become the easiest path for most smartphone users to participate in exposure notification systems.

While California has not officially announced any such program, there are strong hints that one is in the works. On August 28, three leaders in the California legislature—Assembly Privacy and Consumer Protection Chair Ed Chau, Senate Judiciary Chair Hannah-Beth Jackson, and Assembly Speaker Anthony Rendon—wrote to Governor Gavin Newsom referencing discussions for a pilot program in California that includes a “contact-tracing application.”

Worryingly, they also articulated concerns about the lack of privacy considerations that have accompanied those plans, saying that “the Administration has not fully considered many important implications of implementing” a statewide app. “We must work together every step of the way to ensure that any action taken by the Administration to deploy a contact-tracing application provide our constituents with the data privacy and security assurances necessary to encourage widespread participation,” the letter said.

Privacy protections are necessary to public health programs, particularly when a program needs high levels of participation to be effective. People will not use applications they can’t trust. That’s why EFF and other privacy groups have called on Governor Newsom to place basic privacy guardrails on any contact-tracing program run by or with the state. These include:

  • A data minimization rule that ensures the information a public or private entity collects serves only a public health purpose.
  • A guarantee that any private entity working on a program does not use the information for any other purpose—including, but not limited to, commercial purposes.
  • A prohibition on discriminating against people based on their participation—or nonparticipation—in these programs, to protect those who cannot or do not want to participate in a data collection program, and to avoid programs with compulsory participation.
  • A strong requirement to purge data from such programs when it is no longer useful—we are asking for a 30-day retention period. We would not, however, object to a narrowly-crafted exception from this data purge rule for a limited amount of aggregated and de-identified demographic data for the sole purpose of tracking inequities in public health response to the crisis.

We supported two bills in the 2019-2020 legislative session to protect the privacy of our COVID data. AB 1782 (Chau/Wicks) would have ensured that any exposure notification program in the state included much-needed privacy protections for Californians at work and at home. AB 660 (Levine) would have provided related protections for manual contact tracing programs. Together, these two bills would have ensured COVID tracking programs in the state could not exploit data for other uses, including for marketing purposes, and guaranteed every Californian the right to sue in case of a privacy violation.

Unfortunately, both bills recently died in the California Senate Appropriations committee, chaired by Sen. Anthony Portantino. This is a disappointing failure to protect the privacy of Californians and thereby advance public health. But while the legislature stalled efforts to protect our privacy, the need for these protections is only growing.

The letter from legislators suggests that Google and Apple may be willing to create a pilot program “for the State free of charge.” As the lawmakers wrote: “We caution that while contracting these companies to create the application may not cost the state financially, the Legislators and advocates attending closely to these issues over the years have learned that no such venture is truly free. Often times, products or services offered for ‘free’ are paid for through the surrender of sensitive personal information.”

Indeed, companies and governments have proven time and again that they cannot be trusted to do the right thing even — sometimes especially — when people are at their most vulnerable. Absent state protections, data collection programs administered by local governments, or by the private sector, face few limits or guarantees that the data will only be used for its intended purposes.

In addition, employees have few protections from employers who may wish to use information collected as part of pandemic response to track who their employees are talking to or to measure their productivity. And there are no rules to protect Californians—at work or not—from being discriminated against for choosing not to participate in such programs.

Pinky promises aren’t enough. We need legally binding rules. As the state prepares to launch a program to integrate technology into its pandemic response, it is more important than ever that the California governor do the right thing.

Hayley Tsukayama

Human Rights and TPMs: Lessons from 22 Years of the U.S. DMCA

2 months 2 weeks ago

In 1998, Bill Clinton signed the Digital Millennium Copyright Act (DMCA), a sweeping overhaul of U.S. copyright law notionally designed to update the system for the digital era. Though the DMCA contains many controversial sections, one of the most pernicious and problematic elements of the law is Section 1201, the "anti-circumvention" rule which prohibits bypassing, removing, or revealing defects in "technical protection measures" (TPMs) that control not just use but also access to copyrighted works.

In drafting this provision, Congress ostensibly believed it was preserving fair use and free expression, but it failed to understand how the new law would interact with technology in the real world, and how some courts would interpret the law to drastically expand the power of copyright owners. Appellate courts disagree about the scope of the law, and the uncertainty and the threat of lawsuits have meant that rightsholders can effectively exert control over legitimate activities that have nothing to do with infringement, to the detriment of basic human rights. Manufacturers who design their products with TPMs that protect business models, rather than copyrights, can claim that using those products in ways that benefit their customers, rather than their shareholders, is illegal.

22 years later, TPMs are everywhere, sometimes called "DRM" ("digital rights management"). TPMs control who can fix cars and tractors, who can audit the security of medical implants, who can refill a printer cartridge, whether you can store a cable broadcast, and what you can do with it.

Last month, the Mexican Congress passed amendments to the Federal Copyright Law and the Federal Criminal Code, notionally to comply with the country's treaty obligations under Donald Trump's USMCA, the successor to NAFTA. This law included many provisions that interfered with human rights, so much so that the Mexican National Commission for Human Rights has filed a constitutional challenge before the Supreme Court seeking to annul these amendments.

Among the gravest defects in the new amendments to the Mexican copyright law and the Federal Criminal Code are the rules regarding TPMs, which replicate the defects in DMCA 1201. Notably, the new law does not address the flawed language of the DMCA that has allowed rightsholders to block legitimate and noninfringing uses of copyrighted works that depend on circumvention, and it creates harsh and disproportionate criminal penalties with unintended consequences for privacy and freedom of expression. These criminal provisions are so broad and vague that they can be applied to any person, even the owner of the device, and even if that person has no malicious intent to commit a wrongful act that harms another. To make things worse, the Mexican law does not provide even the inadequate protections the U.S. version offers, such as an explicit, regular regulatory proceeding that creates exemptions where the law is provably creating harms.

As with DMCA 1201, the new amendments to the Mexican copyright law contain language that superficially appears to address these concerns; but as with DMCA 1201, the Mexican law's safeguard provisions are entirely cosmetic, so burdened with narrow definitions and onerous conditions that they are unusable. That is why, in 22 years of DMCA 1201, no one has ever successfully invoked the exemptions written into the statute.

EFF has had 22 years of experience with the fallout from DMCA 1201. In this article, we offer our hard-won expertise to our colleagues in Mexican civil society, industry, and lawmaking, and to the Mexican public.

Below, we set out examples of how DMCA 1201 -- and its Mexican equivalent -- is incompatible with human rights, including free expression, self-determination, the rights of people with disabilities, cybersecurity, education, and archiving, as well as the law's consequences for Mexico's national resiliency, economic competitiveness, and food and health security.

Free Expression

Copyright and free expression are in obvious tension with one another: the former grants creators exclusive rights to reproduce and build upon expressive materials; the latter demands the least-possible restrictions on who can express themselves and how.

Balancing these two priorities is a delicate act, and while different countries manage their limitations and exceptions to copyright differently -- fair use, fair dealing, derecho de autor, and more -- these systems typically require a subjective, qualitative judgment in order to evaluate whether a use falls into one of the exempted categories: for example, the widespread exemptions for parody or commentary, or rules that give broad latitude to uses that are "transformative" or "critical." These are rules that are designed to be interpreted by humans -- ultimately by judges.

TPM rules that have no nexus with copyright infringement vaporize the vital qualitative considerations in copyright's free expression exemptions, leaving behind a quantitative residue that is easy for computers to act upon, but which does not correspond closely to the policy objectives of limitations in copyright.

For example, a computer can tell if a video includes more than 25 frames of another video, or if the other works included in its composition do not exceed 10 percent of its total running time. But the computer cannot tell if the material that has been incorporated is there for parody, or commentary, or education -- or if the video-editor absentmindedly dragged a video-clip from another project into the file before publishing it.
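To make the contrast concrete, here is a minimal sketch, in Python, of the kind of purely quantitative test a machine can run. The 25-frame and 10-percent thresholds come from the example above and are illustrative only, not a real legal standard; the function name is hypothetical.

```python
# A hypothetical mechanical infringement filter: it can only count,
# never judge intent, parody, commentary, or education.

def flags_as_infringing(matched_frames: int,
                        borrowed_runtime: float,
                        total_runtime: float) -> bool:
    """Return True if the video trips either quantitative threshold."""
    too_many_frames = matched_frames > 25          # more than 25 borrowed frames
    too_much_runtime = (borrowed_runtime / total_runtime) > 0.10  # over 10% of runtime
    return too_many_frames or too_much_runtime

# A 30-frame parody clip in a 10-minute (600-second) video is flagged,
# even though a human judge might find it an obvious lawful use:
# the test sees only the numbers.
print(flags_as_infringing(matched_frames=30,
                          borrowed_runtime=1.0,
                          total_runtime=600.0))  # True
```

The point of the sketch is precisely what it leaves out: nothing in the function can distinguish a critic from an infringer.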

And in truth, when TPMs collide with copyright exemptions, they are rarely even this nuanced.

Take the TPMs that prevent recording or duplication of videos, beginning with CSS, the system used in the first generation of DVD players, and continuing through the suite of video TPMs that includes AACS (Blu-ray) and HDCP (display devices). These systems can't tell if you are making a recording in order to produce a critical or parodic video commentary. In 2018, the US Copyright Office recognized that these TPMs interfere with the legitimate free expression rights of the public and granted an exemption to DMCA 1201 permitting the public to bypass these TPMs in order to make otherwise lawful recordings. The Mexican version of the DMCA does not include a formal procedure for granting comparable exemptions.

Other times, TPMs collide with free expression by allowing third parties to interpose themselves between rightsholders and their audiences, preventing the former from selling their expressive works to the latter.

The most prominent example of this interference is found in Apple's App Store, the official monopoly retailer for apps that run on Apple's iOS devices, such as iPhones, iPads, Apple Watches, and iPods. Apple's devices use TPMs that prevent their owners from acquiring software from rivals of the App Store. As a result, Apple's editorial choices about which apps it includes in the App Store have the force of law. For an Apple customer to acquire an app from someone other than Apple, they must bypass the TPM on their device. Though we have won the right for customers to “jailbreak” their devices, anyone who sells them a tool to do so commits a felony under DMCA 1201 and risks both a five-year prison sentence and a $500,000 fine (for a first offense).

While the recent dispute with Epic Games has highlighted the economic dimension of this system (Epic objects to paying a 30 percent commission to Apple for transactions related to its game Fortnite), there are many historic examples of purely content-based restrictions on Apple's part.

In these cases, Apple's TPM interferes with speech in ways that are far more grave than merely blocking recording to advantage rightsholders. Rather, Apple is using TPMs backed by DMCA 1201 to interfere with rightsholders as well. Thanks to DMCA 1201, the creator of an app and a person who wants to use that app on a device that they own cannot transact without Apple's approval.

If Apple withholds that approval, the owner of the device and the creator of the copyrighted work are not allowed to consummate their arrangement, unless they bypass a TPM. Recall that commercial trafficking in TPM-circumvention tools is a serious crime under DMCA 1201, carrying a five-year prison sentence and a $500,000 fine for a first criminal offense, even if those tools are used to allow rightsholders to share works with their audiences.

In the years since Apple perfected the App Store model, many manufacturers have replicated it, for categories of devices as diverse as games consoles, cars and tractors, thermostats and toys. In each of these domains -- as with Apple's App Store -- DMCA 1201 interferes with free expression in arbitrary and anticompetitive ways.

Self Determination

What is a "family?"

Human social arrangements don't map well to rigid categories. Digital systems can take account of the indeterminacy of these social connections by allowing their users to articulate the ambiguous and complex nature of their lives within a database. For example, a system could allow users to enter several names of arbitrary length to accommodate the common experience of being called different things by different people, or it could allow them to define their own familial relationships, declaring the people they live with as siblings to be their "brothers" or "sisters" -- or declaring an estranged parent to be a stranger, or a re-married parent's spouse to be a "mother."

But when TPMs enter the picture, these necessary and beneficial social complexities are collapsed down into a set of binary conditions, fenced in by the biases and experiences of their designers. These systems are suspicious of their users, designed to prevent "cheating," and they treat attempts to straddle their rigid categorical lines as evidence of dishonesty -- not as evidence that the system is too narrow to accommodate its users' lived experience.

One such example is CPCM, the "Content Protection and Copy Management" component of DVB, a standard for digital television broadcasts used all over the world.

CPCM relies on the concept of an "authorized domain" that serves as a proxy for a single family. Devices designated as belonging to an "authorized domain" can share video recordings freely with one another, but may not share videos with people from outside the domain -- that is, with people who are not part of their family.
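The rigidity described above can be sketched in a few lines. This is an illustrative model only, not code from the actual DVB-CPCM specification: the class and field names are hypothetical, but the logic shows how "family" collapses to a single binary test of domain membership.

```python
# A CPCM-style sharing rule, reduced to its essence: two devices may share
# content only if they are registered to the same "authorized domain."

from dataclasses import dataclass

@dataclass
class Device:
    owner: str
    domain_id: str  # the "authorized domain" the device is registered to

def may_share(sender: Device, receiver: Device) -> bool:
    # The only question the system can ask: same domain, yes or no.
    # A family member abroad, on a device outside the domain, simply fails.
    return sender.domain_id == receiver.domain_id

home_tv = Device(owner="parent", domain_id="domain-42")
migrant_phone = Device(owner="sibling working abroad", domain_id="domain-99")
print(may_share(home_tv, migrant_phone))  # False
```

Everything the system cannot express about real families lives outside this one equality check, which is exactly the problem the following paragraphs describe.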

The committee that designed the authorized domain was composed almost exclusively of European and US technology, broadcast, and media executives, and they took pains to design a system that was flexible enough to accommodate their lived experience.

If you have a private boat, or a luxury car with its own internal entertainment system, or a summer house in another country, the Authorized Domain is smart enough to understand that all these are part of a single family and will permit content to move seamlessly between them.

But the Authorized Domain is far less forgiving of families with members who live abroad as migrant workers, or who are part of the informal economy in another state or country, or who travel through the year following the harvest. These "families" are not recognized as such by DVB-CPCM, even though there are far more families in their situation than there are families with summer homes on the Riviera.

All of this would add up to little more than a bad technology design, except for DMCA 1201 and other anti-circumvention laws.

Because of these laws -- including Mexico's new copyright law -- defeating CPCM in order to allow a family member to share content with you is itself a potential offense, and selling a tool to enable this is a potential criminal offense, carrying a five-year sentence and a $500,000 fine for a first offense.

Mexico's familial relations should be defined by Mexican lawmakers and Mexican courts and the Mexican people -- not by wealthy executives from the global north meeting in board-rooms half a world away.

The Rights of People With Disabilities

Though disabilities are lumped into broad categories -- "motor disabilities," "blindness," "deafness," and so on -- the capabilities and challenges of each person with a disability are as unique as the capabilities and challenges faced by each able-bodied person.

That is why the core of accessibility isn't one-size-fits-all "accommodations" for people with disabilities; rather, it is "universal design": the design of systems "so that they can be accessed, understood and used to the greatest extent possible by all people regardless of their age, size, ability or disability."

The more a system can be altered by its user, the more accessible it is. Designers can and should build in controls and adaptations, from closed captions to the ability to magnify text or increase its contrast, but just as important is to leave the system open-ended, so that people whose needs were not anticipated during the design phase can suit them to their needs, or recruit others to do so for them.

This is incompatible with TPMs. TPMs are designed to prevent their users from modifying them. After all, if users could modify TPMs, they could subvert their controls.

Accessibility is important for people with disabilities, but it is also a great boon to able-bodied people: first, because many of us are merely "temporarily able-bodied" and will have to contend with some disability during our lives; and second, because flexible systems can accommodate use-cases that designers have not anticipated that able-bodied people also value: from the TV set with captions turned on in a noisy bar (or for language-learners) to the screen magnifiers used by people who have mislaid their glasses.

Like able-bodied people, many people with disabilities are able to make modifications and improvements to their own tools. However, most people -- whether they are able-bodied or have disabilities -- rely on third parties to modify the systems they depend on, because they lack the skill or time to make these modifications themselves.

That is why DMCA 1201's prohibition on "trafficking in circumvention devices" is so punitive: it not only deprives programmers of the ability to improve their tools, but also deprives the rest of us of the benefit of those programmers' creations, and programmers who dare defy this stricture face lengthy prison sentences and enormous fines if they are prosecuted.

Recent examples of TPMs interfering with accessibility reveal how confining DMCA 1201 is for people with disabilities.

In 2017, the World Wide Web Consortium (W3C) approved a controversial TPM for videos on the Web called Encrypted Media Extensions (EME). EME makes some affordances for people with disabilities, but it lacks other important features. For example, people with photosensitive epilepsy cannot use automated tools to identify and skip past strobing effects in videos that could trigger dangerous seizures, while color-blind people can't alter the color-palette of the videos to correct for their deficit.

A more recent example comes from the med-tech giant Abbott Labs, which used DMCA 1201 to suppress a tool that allowed people with diabetes to link their glucose monitors to their insulin pumps, in order to automatically calculate and administer doses of insulin in an "artificial pancreas."

Note that there is no copyright infringement in any of these examples: monitoring your blood sugar, skipping past seizure-inducing video effects, or changing colors to a range you can perceive do not violate anyone's rights under US copyright law. These are merely activities that manufacturers disfavor.

Normally, a manufacturer's preferences are subsidiary to the interests of the owner of a product, but not in this case. Once a product is designed so that you must bypass a TPM to use it in ways the manufacturer doesn't like, DMCA 1201 gives the manufacturer's preferences the force of law.

Archiving

In 1991, the science fiction writer Bruce Sterling gave a keynote address to the Game Developer's Conference in which he described the assembled game creators as practitioners without a history, whose work crumbled under their feet as fast as they could create it: "Every time a [game] platform vanishes it's like a little cultural apocalypse. And I can imagine a time when all the current platforms might vanish, and then what the hell becomes of your entire mode of expression?"

Sterling contrasted the creative context of software developers with authors: authors straddle a vast midden of historical material that they -- and everyone else -- can access. But in 1991, as computers and consoles were appearing and disappearing at bewildering speed, the software author had no history to refer to: the works of their forebears were lost to the ages, no longer accessible thanks to the disappearance of the hardware needed to run them.

Today, Sterling's characterization rings hollow. Software authors, particularly games developers, have access to the entire corpus of their industry, playable on modern computers, thanks to the rise and rise of "emulators" -- programs that simulate primitive, obsolete hardware on modern equipment that is orders of magnitude more powerful.

However, preserving the history of this otherwise ephemeral medium has not been for the faint of heart. From the earliest days of commercial software, companies have deployed TPMs to prevent their customers from duplicating their products or running them without authorization. Preserving the history of software is impossible without bypassing TPMs, and bypassing TPMs is a potential felony that can send you to prison for five years and/or cost you half a million dollars if you supply a tool to do so.

That is why the US Copyright Office has repeatedly granted exemptions to DMCA 1201, permitting archivists in the United States to bypass software TPMs for preservation purposes.

Of course, it's not merely software that is routinely restricted with TPMs, frustrating the efforts of archivists: from music to movies, books to sound recordings, TPMs are routine. Needless to say, these TPMs interfere with routine, vital archiving activities just as much as they interfere with the archiving and preservation of software.

Education

Copyright systems around the world create exemptions for educational activities; U.S. copyright law specifically mentions education in the criteria for exempted use.

But educators frequently run up against the blunt, indiscriminate restrictions imposed by TPMs, whose code cannot distinguish between someone engaged in educational activities and someone engaged in noneducational activities.

Educators' conflicts with TPMs are many and varied: a teacher may build a lesson plan around an online video but be unable to act on it if the video is removed; in the absence of a TPM, the teacher could make a local copy of the video as a fallback.

For a decade, the U.S. Copyright Office has affirmed the need for educators to bypass TPMs in order to engage in normal pedagogical activities, most notably the need for film professors to bypass TPMs in order to teach their students and so that their students can analyze and edit commercial films as part of their studies.

National Resiliency

Thus far, this article has focused on the TPMs' impact on individual human rights, but human rights are dependent on the health and resiliency of the national territory in which they are exercised. Nutrition, health, and security are human rights just as surely as free speech, privacy and accessibility.

The pandemic has revealed the brittleness and transience of seemingly robust supply chains and firms. Access to replacement parts and skilled technicians has been disrupted and firms have failed, taking down their servers and leaving digital tools in unusable or partially unusable states.

But TPMs don't understand pandemics or other emergencies: they enforce restrictions irrespective of the circumstances on the ground. And where laws like DMCA 1201 prevent the development of tools and knowledge for bypassing TPMs, these indiscriminate restrictions take on the force of law and acquire a terrible durability, as few firms or even individuals are willing to risk prison and fines to supply the tools to make repairs to devices that are locked with TPMs.

Nowhere is this more visible than in agriculture, where the markets for key inputs like heavy machinery, seeds and fertilizer have grown dangerously concentrated, depriving farmers of meaningful choice from competitors with distinctive offers.

Farmers work under severe constraints: they work in rural, inaccessible territories, far from authorized service depots, and the imperatives of the living organisms they cultivate cannot be argued with. When your crop is ripe, it must be harvested -- and that goes double if there's a storm on the horizon.

That's why TPMs in tractors constitute a severe threat to national resiliency, threatening the food supply itself. Ag-tech giant John Deere has repeatedly asserted that farmers may not effect their own tractor repairs, insisting that these repairs are illegal unless they are finalized by an authorized technician who can take days to arrive (even when there isn't a pandemic), and who charges hundreds of dollars to inspect the farmer's own repairs and type an unlock code into the tractor's keyboard.

John Deere's position is that farmers are not qualified and should not be permitted to repair their own property. However, farmers have been fixing their own equipment for as long as agriculture has existed -- every farm has a workshop and sometimes even a forge. Indeed, John Deere's current designs are descended from modifications that farmers themselves made to earlier models: Deere used to dispatch field engineers to visit farms and copy farmers' innovations for future models.

This points to another key feature for national resiliency: adaptation. Just as every person has unique needs that cannot be fully predicted and accounted for by product designers, so too does every agricultural context. Every plot of land has its own biodynamics, from soil composition to climate to labor conditions, and farmers have always adapted their tools to suit their needs. Multinational ag-tech companies can profitably target the conditions of the wealthiest farmers, but if you fall too far outside the median use-case, the parameters of your tractor are unlikely to fully suit your needs. That is why farmers are so accustomed to adapting their equipment.

To be clear, John Deere's restrictions do not prevent farmers from modifying their tractors -- they merely put those farmers in legal peril. As a result, farmers have turned to black-market Ukrainian replacement software for their tractors; no one knows who made this software, it comes with no guarantees, and if it contained malicious or defective code, there would be no one to sue.

And John Deere's abuse of TPMs doesn't stop at repairs. Tractors contain sophisticated sensors that can map out soil conditions to a high degree of accuracy, measuring humidity, density and other factors and plotting them on a centimeter-accurate grid. This data is automatically generated by farmers driving tractors around their own fields, but the data does not go to the farmer. Rather, John Deere harvests the data that farmers generate while harvesting their crops and builds up detailed pictures of regional soil conditions that the company sells as market intelligence to the financial markets for bets in crop futures.

That data is useful to the farmers who generated it: accurate soil data is needed for "precision agriculture," which improves crop yields by matching planting, fertilizing and watering to soil conditions. Farmers can access a small slice of that data, but only through an app that comes bundled with seed from Bayer-Monsanto. Competing seed companies, including domestic seed providers, cannot make comparable offers.

Again, this is bad enough under normal conditions, but when supply chains fail, the TPMs that enforce these restrictions prevent local suppliers from filling in the gaps.

Right to Repair

TPMs don't just interfere with ag-tech repairs: dominant firms in every sector have come to realize that repairs are a doubly lucrative nexus of control. First, companies that control repairs can extract money from their customers by charging high prices to fix their property and by forcing customers to use high-priced manufacturer-approved replacement parts in those repairs; and second, companies can unilaterally declare some consumer equipment to be beyond repair and demand that customers pay to replace it.

Apple spent lavishly in 2018 on a campaign that stalled 20 state-level Right to Repair bills in the U.S.A., and, in his first shareholder address of 2019, Apple CEO Tim Cook warned that a major risk to Apple's profitability came from consumers who chose to repair, rather than replace, their old phones, tablets and laptops.

The Right to Repair is key to economic self-determination at any time, but in times of global or local crisis, when supply chains shatter, repair becomes a necessity. Alas, the sectors most committed to thwarting independent repair are also sectors whose products are most critical to weathering crises.

Take the automotive sector: manufacturers in this increasingly concentrated sector have used TPMs to prevent independent repair, from scrambling the diagnostic codes used on cars' internal communications networks to adding "security chips" to engine parts that prevent technicians from using functionally equivalent replacement parts from competing manufacturers.

The issue has simmered for a long time: in 2012, voters in the Commonwealth of Massachusetts overwhelmingly backed a ballot initiative that safeguarded the rights of drivers to choose their own mechanics, prompting the legislature to enact a right-to-repair law. However, manufacturers responded to this legal constraint by deploying TPMs that allow them to comply with the letter of the 2012 law while still preventing independent repair. The situation is so dire that Massachusetts voters have put another ballot initiative on this year's ballot, which would force automotive companies to disable TPMs in order to enable independent repair.

It's bad enough to lose your car while a pandemic has shut down public transit, but it's not just drivers who need the Right to Repair: it's also hospitals.

Medtronic is the world's largest manufacturer of ventilators. For 20 years, it has manufactured the workhorse Puritan Bennett 840 ventilator, but recently the company added a TPM to its ventilator design. The TPM prevents technicians from repairing a ventilator with a broken screen by swapping in a screen from another broken ventilator; this kind of parts-reuse is common, and authorized Medtronic technicians can refurbish a broken ventilator this way because they have the code to unlock the ventilator.

There is a thriving secondary market for broken ventilators, but refurbishers who need to transplant a monitor from one ventilator to another must bypass Medtronic's TPM. To do this, they rely on a single Polish technician who manufactures a circumvention device and ships it to medical technicians around the world to help them with their repairs.

Medtronic strenuously objects to this practice and warns technicians that unauthorized repairs could expose patients to risk -- we assume that the patients whose lives were saved by refurbished ventilators are unimpressed by this argument. In a cruel twist of irony, the anti-repair Medtronic was founded in 1949 as a medical equipment repair business that effected unauthorized repairs.


In the security field, it's a truism that "there is no security in obscurity" -- or, as cryptographer Bruce Schneier puts it, "anyone can design a system that they can't think of a way around. That doesn't mean it's secure, it just means it's secure against people stupider than you."

Another truism in security is that "security is a process, not a product." You can never know if a system is secure -- all you can know is whether any defects have been discovered in it. Grave defects have been discovered even in very mature, widely used systems that have been in use for decades.

The corollary of these two rules is that security requires that systems be open to auditing by as many third parties as possible, because the people who designed those systems are blind to their own mistakes, and because each auditor brings their own blind spots to the exercise.

But when a system has TPMs, they often interfere with security auditing, and, more importantly, security disclosures. TPMs are widely used in embedded systems to prevent competitors from creating interoperable products -- think of inkjet printers using TPMs to detect and reject third-party ink cartridges -- and when security researchers bypass these to investigate products, their reports can run afoul of DMCA 1201. Revealing a defect in a TPM, after all, can help attackers disable that TPM, and thus constitutes "circumvention" information. Recall that supplying “circumvention devices” to the public is a criminal offense under DMCA 1201.

This problem is so pronounced that in 2018, the US Copyright Office granted an exemption to DMCA 1201 for security researchers.

However, that exemption is not broad enough to encompass all security research. A coalition of security researchers is returning to the Copyright Office in this rulemaking to explain again why regulators have been wrong to impose restrictions on legitimate research.


Firms use TPMs in three socially harmful ways:

  1. Controlling customers: From limiting repairs to forcing the purchase of expensive spares and consumables to arbitrarily blocking apps, firms can use TPMs to compel their customers to behave in ways that put corporate interests above the customers' own;
  2. Controlling critics: DMCA 1201 means that when a security researcher discovers a defect in a product, the manufacturer can exercise a veto over the disclosure of the defect by threatening legal action;
  3. Controlling competitors: DMCA 1201 allows firms to unilaterally decide whether a competitor's parts, apps, features and services are available to its customers.

This concluding section delves into three key examples of TPMs' interference with competitive markets.

App Stores

In principle, there is nothing wrong with a manufacturer "curating" a collection of software for its products that are tested and certified to be of high quality. However, when devices are designed so that using a rival's app store requires bypassing a TPM, manufacturers can exercise a curator's veto, blocking rival apps on the basis that they compete with the manufacturer's own services.

The most familiar example of this is Apple's repeated decision to block rivals on the grounds that they offer alternative payment mechanisms that bypass Apple's own payment system and thus evade paying a commission to Apple. Recent high-profile examples include the HEY! email app, and the bestselling Fortnite app.

Streaming media

This plays out in other device categories as well, notably streaming video: AT&T's HBO Max is deliberately incompatible with leading video-to-TV bridges such as Amazon Fire and Roku TV, which together command 70% of the market. The Fire and Roku are often integrated directly into televisions, meaning that HBO Max customers must purchase additional hardware to watch the TV they're already paying for on their own television sets. To make matters worse, HBO has cancelled its HBO Go service, which enabled people who paid for HBO over satellite and cable to watch programming on Roku and Amazon devices.


TPMs also allow for the formation of cartels that can collude to exclude entire development methodologies from a market and to deliver control over the market to a single company. For example, the W3C's Encrypted Media Extensions (see "The Rights of People With Disabilities," above) is a standard for streaming video to web browsers.

However, EME is designed so that it does not constitute a complete technical solution: every browser vendor that implements EME must also separately license a proprietary descrambling component called a "content decryption module" (CDM).

In practice, only one company makes a licensable CDM: Google, whose "Widevine" technology must be licensed in order to display commercial videos from companies like Netflix, Amazon Prime and other market leaders in a browser.

However, Google will not license this technology to free/open source browsers except for those based on its own Chrome/Chromium browser. In standardizing a TPM for browsers, the W3C -- and Section 1201 of the DMCA -- has delivered gatekeeper status to Google, which now gets to decide who may enter the browser market it dominates; rivals that attempt to implement a CDM without Google’s permission risk prison sentences and large fines.


The U.S.A. has had 22 years of experience with legal protections for TPMs under Section 1201 in the DMCA. In that time, the U.S. government has repeatedly documented multiple ways in which TPMs interfere with basic human rights and the systems that permit their exercise. The Mexican Supreme Court has now taken up the question of whether Mexico can follow the U.S.'s example and establish a comparable regime in accordance with the rights recognized by the Mexican Constitution and international human rights law. In this document, we provide evidence that TPM regimes are incompatible with this goal.

The Mexican Congress -- and the U.S. Congress -- could do much to improve this situation by tying offenses under TPM law to actual acts of copyright violation. As the above has demonstrated, the most grave abuses of TPMs stem from their use to interfere with activities that do not infringe copyright.

However, rightsholders already have a remedy for copyright infringements: copyright law. A separate liability regime for TPM circumvention serves no legitimate purpose. Rather, its burden falls squarely on people who want to stay on the right side of the law and find that their important, legitimate activities and expression are put in legal peril.

Related Cases: Green v. U.S. Department of Justice
Cory Doctorow

Portland’s Fight Against Face Surveillance

2 months 2 weeks ago

This Wednesday, the Portland City Council will hear from residents, businesses, and civil society as they consider banning government use of face recognition technology within the city.

Over 150 Portland-area business owners, technologists, workers, and residents have signed our About Face petition calling for an end to government use of face surveillance. This week, a coalition of local and national civil society organizations led by Electronic Frontier Alliance (EFA) members PDX Privacy and Portland's Techno-Activism Third Mondays delivered that petition to the council, noting that "even if the technology someday functions flawlessly, automated surveillance and collection of biometric data will still violate our personal privacy and conflict with the City's own privacy principles." 


End Face Surveillance in your community

The proposed ban on government use of face surveillance makes critical steps forward in protecting Portland residents. As a result of federal grants and gifting through the Department of Defense's 1033 program, budgeting systems that once provided some measure of transparency—and an opportunity for accountability—have been circumvented by police departments across the country as they build arsenals of powerful technology. Meanwhile, lawmakers and the public are kept in the dark. Once passed, Portland's ordinance will prohibit city bureaus from purchasing, leasing, or accepting face recognition technology as a donation or gift. It will also prohibit city bureaus from directing non-city entities to acquire or use the technology on the city's behalf. 

The ordinance also provides a path toward protections against government use of other kinds of privacy-invasive surveillance technology, beyond face surveillance. Specifically, the ordinance tasks Portland's Bureau of Planning and Sustainability with proposing a framework for the establishment of citywide privacy policies and procedures. These would include a public engagement process focusing on underserved communities, and the development of decision-making structures for managing city data and information acquired through the use of surveillance technology. 

The Portland City Council will also consider a second ordinance on face surveillance, which addresses private sector use of the technology. Specifically, it would ban use of face surveillance by private parties in places of public accommodation. The better approach to private sector use is through the requirement of opt-in consent, as is required by Illinois' Biometric Information Privacy Act and Senator Jeff Merkley's (D-OR) proposed National Biometric Information Privacy Act. We made the same point in our June letter to the Boston City Council about its face recognition ordinance. 

As for the ban on government use of face surveillance, there is still significant room for improvement in enforcement. The ordinance requires that a violation harm a person before they can initiate the enforcement process. But to protect the public before they have been injured by a technology that threatens their privacy, safety, and fundamental freedoms, a person must be able to initiate an enforcement action before they experience harm. In the words of Brian Hofer, Chair of the Privacy Advisory Commission in Oakland, California—which are included in the package being considered by Portland Commissioners this week—the "current enforcement mechanism will likely not provide much protection because A) we typically only learn of harm from surveillance long after the fact, and B) this technology works at a distance, in secret, and thus an injured party will almost never discover that they were subject to its use." Hofer suggests that the language be amended to more closely mirror that of Oakland's own ban, which does not require an individual to prove that they have been personally harmed. Oakland's ordinance also provides for damages to be awarded to those who are harmed, another provision that would improve Portland's bill.


End Face Surveillance in your community

Moreover, the enforcement provisions of many of the existing local bans on face surveillance include an essential provision notably missing from Portland's ban: the city must pay the attorney fees of a prevailing plaintiff. Commonly referred to as 'fee-shifting,' this rule helps level the playing field between a resourced government and an individual looking to hold it accountable by eliminating the financial barrier to finding legal assistance. 

The work of Portland lawmakers and the city's Bureau of Planning and Sustainability to protect city residents from the harms of face surveillance is commendable. Commissioners must also work to ensure that the city's ban provides appropriate protections and accessible enforcement mechanisms. From San Francisco to Boston, and from Portland to Durham, communities are coming together to protect themselves from the harms of the ever-expanding panopticon of unwarranted government surveillance. Through efforts like our About Face campaign, and alongside our EFA allies, we will continue the push for stronger, enforceable protections. If your community-based group or hackerspace would like to join us in ending government use of face surveillance in your community, please add your name to the About Face petition and encourage your group to consider joining the Alliance.

Nathan Sheard

Exposure Notification Technology is Ready for Its Closeup

2 months 2 weeks ago

Since this COVID-19 crisis began, people have looked to technology to assist in contact tracing and notification. Technology will never be a silver bullet for a deeply human crisis, even if it can assist. No app will work absent widespread testing with human follow-up. Smartphones are not in everyone's hands, so app-based COVID-19 assistance can reinforce or exacerbate existing social inequalities.

Decentralized Bluetooth proximity tracking is the most promising approach so far to automated COVID-19 exposure notification. Most prominently, back in April, Apple and Google unveiled a Bluetooth exposure notification API for detecting whether you were in proximity to someone with COVID-19, and sending you a notice.
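The decentralized design can be illustrated with a toy model. This is a deliberately simplified sketch of the idea, not the actual protocol: the names `daily_key` and `rolling_ids` are ours, and the real GAEN system derives identifiers with HKDF and AES rather than SHA-256. The key point it shows is that matching happens entirely on the user's own phone.

```python
import hashlib
import os

def daily_key() -> bytes:
    # Each phone generates a random daily key that never leaves the device
    # unless its owner tests positive and chooses to share it.
    return os.urandom(16)

def rolling_ids(key: bytes, intervals: int = 144) -> list:
    # Derive short-lived broadcast identifiers from the daily key,
    # one per ~10-minute interval of the day.
    return [hashlib.sha256(key + i.to_bytes(2, "big")).digest()[:16]
            for i in range(intervals)]

# Alice and Bob generate keys; Bob's phone records the IDs it hears nearby.
alice_key, bob_key = daily_key(), daily_key()
heard_by_bob = set(rolling_ids(alice_key)[10:20])  # ten intervals of contact

# Alice tests positive and publishes her daily key. Bob's phone re-derives
# her identifiers locally and checks for overlap -- no location data or
# identity is ever uploaded.
exposed = any(rid in heard_by_bob for rid in rolling_ids(alice_key))
print(exposed)  # True: Bob is notified of a possible exposure
```

Because only random-looking identifiers are broadcast and only keys of consenting, diagnosed users are published, neither the server nor eavesdroppers learn who met whom.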

Over the last month, we have seen a number of contact tracing and exposure notification apps released, including several from public health authorities using the Google-Apple Exposure Notification (GAEN) Bluetooth proximity technology. These include North Dakota's Care19, Wyoming's Care19 Alert, Alabama's GuideSafe, and Nevada's COVID Trace. Some, like Canada's COVID Alert and Virginia's COVIDWISE, have gotten good reviews for privacy and security.

Other new apps are more concerning. Albion College required students to download and install a private-party tracking app called Aura, which uses GPS location data and has had security flaws. Citizen, a very popular safety-alert app, has added a Bluetooth-based SafePath technology. Since Citizen itself uses GPS, this raises the risk of connecting the location data to the COVID-19 data. To mitigate this concern on iOS, one has to use an add-on app, SafeTrace, which separates the GPS data used by Citizen from the Bluetooth data used by SafeTrace; on Android, the technology is integrated directly into Citizen.

Ultimately, many people may end up participating without choosing an app. Last week, Apple rolled out iOS 13.7, which allows users to choose to participate in the Apple-Google Bluetooth exposure notification system without an app, via Exposure Notifications Express (ESE). Google will be implementing a similar technology in Android 6.0 later this month, creating an auto-generated app for the local public health authority. Independent apps will still be allowed to use the GAEN system, but the easy path for most smartphone users will be the Apple-Google ESE system.

Whether considering a new app or the app-less system, we must not lose sight of the challenges of proximity apps, and must be sure they are safe and secure and respect fundamental human rights. In summary: consent is critical, no one should be forced to use an app, and users should be able to opt in and opt out as needed. Strong privacy and security safeguards are also necessary. Fear of disclosure of your proximity data or, worse, your location data could harm effectiveness (through insufficient adoption) and chill expressive activity. All exposure notification technologies need rigorous security testing and data minimization.

Kurt Opsahl

EFF Responds to EU Commission on the Digital Services Act: Put Users Back in Control

2 months 3 weeks ago

The European Union is currently preparing for a significant overhaul of its core platform regulation, the e-Commerce Directive. Earlier this year the European Commission, the EU’s executive, pledged to reshape Europe’s digital future and to propose an entire package of new rules, the Digital Services Act (DSA). The package is supposed to address the legal responsibilities of platforms regarding user content and include measures to keep users safe online. The Commission also announced a new standard for large platforms that act as gatekeepers in an attempt to create a fairer, and more competitive, market for online platforms in the EU.

Preserve What Works

While the European Commission has not yet published its proposal for the DSA, the current preparatory phase is an important opportunity to expose the Commission to diverse insights on the complex issues the DSA will cover. Alongside our European partners, we have therefore contributed to the Commission’s consultation that will feed into the assessment of the different regulatory options available. In our response, we remind the Commission of some of the aspects of the e-Commerce Directive that have been crucial for the growth of the online economy, and the protection of fundamental rights in the EU: it is essential to retain the Directive’s approach of limiting platforms’ liability over user content and banning Member States from imposing obligations to track and monitor users’ content.

Fix What Is Broken

But the DSA should not only preserve what was good about the old Directive. It is also a chance to boldly imagine a version of the Internet where users have a right to remain anonymous, enjoy substantial procedural rights in the context of content moderation, and can have more control over how they interact with content. That should include measures to make the use of algorithms more transparent, but must also allow people to choose for themselves whether they want algorithms to curate their feeds at all. Beyond giving users the rights and options they deserve, it is time to re-think the Internet more fundamentally. That’s why we propose interoperability obligations for large platforms. Flanked by strong privacy and security safeguards, a European commitment to interoperability could empower users to shape their online environments according to their needs and preferences, will allow people to connect with each other beyond the walled gardens of the largest platforms, and will reinvigorate the digital economy.

Have Your Say Too

There is still time to respond to the consultation until 8 September, and we invite you to join us in our call for an open and safe Internet that empowers users. You can submit your comments to the European Commission’s public consultation here.

Our main demands regarding interoperability are:

  1. Platforms with significant market power must offer non-discriminatory possibilities for competing, non-incumbent platforms to interoperate with their key features;
  2. Platforms with significant market power should make it possible for competing third parties to act on users’ behalf. If users want to, they should be able to delegate elements of their online experience to different competent actors;
  3. Interoperability measures must respect key privacy principles such as data minimization, privacy by design, and privacy by default;
  4. If intermediaries do have to suspend interoperability to fix security issues, they should not exploit such situations to break interoperability but rather communicate transparently, resolve the problem, and reinstate interoperability interfaces within a reasonable and clearly defined timeframe.

Our main demands regarding platform liability are:

  1. Online intermediaries should not be held liable for user content and should continue to benefit from the comprehensive liability exemptions contained in the e-Commerce Directive;
  2. It should be clarified that actual knowledge of illegality is only obtained by intermediaries if they are presented with a court order;
  3. The Member States of the EU should not be permitted to impose obligations on digital service providers to affirmatively monitor their platforms or networks for illegal content that users post, transmit, or store. The ban on general monitoring obligations should include a ban on mandated automated filter systems.
  4. The Internet is global and takedown orders of global reach are immensely unjust and impair users’ freedom. New rules should make sure that court orders—and particularly injunctions—should not be used to superimpose the laws of one country on every other state in the world.

Our main demands regarding user controls are:

  1. Users of social media platforms with significant market power should be empowered to choose content they want to interact with in a simple and user-friendly manner, and should have the option to decide against algorithmically-curated recommendations altogether;
  2. Online platforms should provide meaningful information about the algorithmic tools they use in content moderation and content curation. Users need easily accessible explanations to understand when, for which tasks, and to which extent algorithmic tools are used. Online platforms should also allow independent researchers and relevant regulators to audit their algorithmic tools to make sure they are used as intended;
  3. Users should be notified whenever the rules that govern them change, must be asked for their consent and should be informed of the consequences of their choice. They should also be provided with a meaningful explanation of any substantial changes in a language they understand;
  4. The Digital Services Act should affirm users’ informational self-determination and introduce the European right to anonymity online.

Our main demands for procedural justice are:

  1. The EU should adopt harmonized rules on reporting mechanisms that ensure that reporting potentially illegal content is easy, and any follow-up actions by the platform is transparent for its users;
  2. Platforms should provide users with a notice when content has been removed that identifies the content removed, the specific rule that it was found to violate, and how the content was detected. It should also offer an easily accessible explanation of the process through which the user can appeal the decision;
  3. If platforms use automated decision making to restrict content, they should flag at which step of the process algorithmic tools were used, explain the logic behind the automated decisions taken, and also explain how users can contest the decision;
  4. The Digital Services Act should promote quick and easy reinstatement of wrongfully removed content or wrongly disabled accounts.
Christoph Schmon

Technology Can’t Predict Crime, It Can Only Weaponize Proximity to Policing

2 months 3 weeks ago

Special thanks to Yael Grauer for additional writing and research.

In June 2020, Santa Cruz, California became the first city in the United States to ban municipal use of predictive policing, a method of deploying law enforcement resources according to data-driven analytics that supposedly are able to predict perpetrators, victims, or locations of future crimes. Especially interesting is that Santa Cruz was one of the first cities in the country to experiment with the technology when it piloted, and then adopted, a predictive policing program in 2011. That program used historic and current crime data to break down some areas of the city into 500 foot by 500 foot blocks in order to pinpoint locations that were likely to be the scene of future crimes. However, after nine years, the city council voted unanimously to ban it over fears of how it perpetuated racial inequality. 

Predictive policing is a self-fulfilling prophecy. If police focus their efforts in one neighborhood and arrest dozens of people there during the span of a week, the data will reflect that area as a hotbed of criminal activity. The system also considers only reported crime, which means that neighborhoods and communities where the police are called more often might see a higher likelihood of having predictive policing technology concentrate resources there. This system is tailor-made to further victimize communities that are already overpoliced—namely, communities of color, unhoused individuals, and immigrants—by using the cloak of scientific legitimacy and the supposed unbiased nature of data. 

Santa Cruz’s experiment, and eventual banning of the technology is a lesson to the rest of the country: technology is not a substitute for community engagement and holistic crime reduction measures. The more police departments rely on technology to dictate where to focus efforts and who to be suspicious of, the more harm those departments will cause to vulnerable communities. That’s why police departments should be banned from using supposedly data-informed algorithms to inform which communities, and even which people, should receive the lion’s share of policing and criminalization. 

What Is Predictive Policing?

The Santa Cruz ordinance banning predictive policing defines the technology as “software that is used to predict information or trends about crime or criminality in the past or future, including but not limited to the characteristics or profile of any person(s) likely to commit a crime, the identity of any person(s) likely to commit crime, the locations or frequency of crime, or the person(s) impacted by predicted crime.”

Predictive policing analyzes a massive amount of information from historical crimes including the time of day, season of the year, weather patterns, types of victims, and types of location in order to infer when and in which locations crime is likely to occur. For instance, if a number of crimes have been committed in alleyways on Thursdays, the algorithm might tell a department they should dispatch officers to alleyways every Thursday. Of course, then this means that police are predisposed to be suspicious of everyone who happens to be in that area at that time. 

The technology functions similarly in the less prevalent “person-based” predictive policing. This takes the form of opaque rating systems that assign people a risk value based on a number of data streams, including age, suspected gang affiliation, and the number of times a person has been a victim as well as an alleged perpetrator of a crime. The accumulated total of this data could result in someone being placed on a “hot list,” as happened to over 1,000 people in Chicago who were placed on one such “Strategic Subject List.” As when specific locations are targeted, this technology cannot actually predict crime—and in an attempt to do so, it may expose people to targeted police harassment or surveillance without any actual proof that a crime will be committed.

There is a reason why the use of predictive policing continues to expand despite its dubious foundations: it makes money. Many companies have developed tools for data-driven policing; some of the biggest are PredPol, HunchLab, CivicScape, and Palantir. Academic institutions have also developed predictive policing technologies, such as Rutgers University’s RTM Diagnostics or Carnegie Mellon University’s CrimeScan, which is used in Pittsburgh. Some departments have built such tools with private companies and academic institutions. For example, in 2010, the Memphis Police Department built its own tool, in partnership with the University of Memphis Department of Criminology and Criminal Justice, using IBM SPSS predictive analytics. 

As of summer 2020, the technology is used in dozens of cities across the United States. 

What Problems Does it Pose?

One of the biggest flaws of predictive policing is the faulty data fed into the system. These algorithms depend on data informing them of where criminal activity has happened to predict where future criminal activity will take place. However, not all crime is recorded—some communities are more likely to report crime than others, some crimes are less likely to be reported than other crimes, and officers have discretion in deciding whether or not to make an arrest. Predictive policing only accounts for crimes that are reported, and concentrates policing resources in those communities, which then makes it more likely that police may uncover other crimes. This all creates a feedback loop that makes predictive policing a self-fulfilling prophecy. As Professor Suresh Venkatasubramanian put it:

If you build predictive policing, you are essentially sending police to certain neighborhoods based on what they told you—but that also means you’re not sending police to other neighborhoods because the system didn’t tell you to go there. If you assume that the data collection for your system is generated by police whom you sent to certain neighborhoods, then essentially your model is controlling the next round of data you get.

This feedback loop falls hardest on vulnerable communities, including communities of color, unhoused communities, and immigrants.

Police are already policing minority neighborhoods and arresting people for things that may have gone unnoticed or unreported in less heavily patrolled neighborhoods. When this already skewed data is entered into a predictive algorithm, it will deploy more officers to the communities that are already overpoliced. 
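The feedback loop can be made concrete with a minimal simulation. The numbers below are invented, and this is not any vendor's actual algorithm; it only shows the dynamic Venkatasubramanian describes: two neighborhoods with identical true crime rates, an initially skewed patrol allocation, and a "predictive" step that sends next week's patrols wherever crime was recorded last week.

```python
# Two neighborhoods with IDENTICAL underlying crime rates, but a
# historically skewed patrol allocation (all numbers hypothetical).
true_rate = {"A": 0.10, "B": 0.10}
patrols = {"A": 8.0, "B": 2.0}   # 10 patrol units total
recorded = {"A": 0.0, "B": 0.0}

for week in range(20):
    # Crime only enters the data where officers are present to record it.
    for hood in recorded:
        recorded[hood] += patrols[hood] * true_rate[hood]
    # "Predictive" step: next week's patrols follow recorded crime counts.
    total = recorded["A"] + recorded["B"]
    patrols["A"] = 10 * recorded["A"] / total
    patrols["B"] = 10 * recorded["B"] / total

# Despite identical true rates, neighborhood A ends with 4x the recorded
# "crime" and keeps 80% of patrols: the initial skew is locked in forever.
print(recorded, patrols)
```

The model never discovers that neighborhood B has just as much crime, because it never sends enough officers there to record it; the data appears to confirm the very bias it was built on.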

A recent deep dive into the predictive program used by the Pasco County Sheriff's Office illustrates the harms that getting stuck in an algorithmic loop can inflict on people. After one 15-year-old was arrested for stealing bicycles out of a garage, the algorithm continuously dispatched police to harass him and his family. Over the span of five months, police went to his home 21 times. They showed up at his gym and his parents’ places of work. The Tampa Bay Times revealed that since 2015, the sheriff's office has made more than 12,500 similar preemptive visits to people. 

These visits often resulted in other, unrelated arrests that further victimized families and added to the likelihood that they would be visited and harassed again. In one incident, the mother of a targeted teenager was issued a $2,500 fine when police sent to check in on her child saw chickens in the backyard. In another incident, a father was arrested when police looked through the window of the house and saw a 17-year-old smoking a cigarette. These are the kinds of usually unreported offenses that occur in all neighborhoods, across all economic strata—but for which only marginalized people living under near-constant policing are penalized. 

As experts have pointed out, these algorithms often draw from flawed and non-transparent sources such as gang databases, which have been the subject of public scrutiny due to their lack of transparency and overinclusion of Black and Latinx people. In Los Angeles, for instance, police noticing a person wearing a sports jersey or having a brief conversation with someone on the street may be enough to include that person in the LAPD’s gang database. Being included in a gang database often means being exposed to more police harassment and surveillance, and can also lead to consequences once a person enters the legal system, such as harsher sentences. Inclusion in a gang database can impact whether a predictive algorithm identifies a person as a potential threat to society or artificially projects a specific crime as gang-related. In July 2020, the California Attorney General barred police in the state from accessing any of LAPD’s entries in the California gang database after LAPD officers were caught falsifying data. Unaccountable and overly broad gang databases are exactly the kind of flawed data flowing from police departments into predictive algorithms, and a clear illustration of why predictive policing cannot be trusted. 

To test racial disparities in predictive policing, Human Rights Data Analysis Group (HRDAG) looked at Oakland Police Department’s recorded drug crimes. It used a big data policing algorithm to determine where it would suggest that police look for future drug crimes. Sure enough, HRDAG found that the data-driven model would have focused almost exclusively on low-income communities of color. But public health data on drug users combined with U.S. Census data show that the distribution of drug users does not correlate with the program’s predictions, demonstrating that the algorithm’s predictions were rooted in bias rather than reality.

All of this is why a group of academic mathematicians recently declared a boycott against helping police create predictive policing tools. They argued that their credentials and expertise offer a convenient way to smuggle racist assumptions about who will commit a crime, based on where people live and whom they know, into the mainstream under the cover of scientific legitimacy. “It is simply too easy,” they write, “to create a 'scientific' veneer for racism.”

In addition, there is a disturbing lack of transparency surrounding many predictive policing tools. In many cases, it’s unclear how the algorithms are designed, what data is being used, and sometimes even what the system claims to predict. Vendors have sought non-disclosure clauses or otherwise shrouded their products in secrecy, citing trade secrets or business confidentiality. When data-driven policing tools are black boxes, it’s difficult to assess the risks of error rates, false positives, limits in programming capabilities, biased data, or even flaws in source code that affect search results. 

For local departments, the prohibitive cost of these predictive technologies is another strike against them. In Los Angeles, the LAPD paid $20 million over the course of nine years for Palantir’s predictive technology alone, and that is only one of many tools the department has used in an attempt to predict the future. 

Finally, predictive policing raises constitutional concerns. Simply living or spending time in a particular neighborhood or with certain people may draw suspicion from police or cause them to treat people as potential perpetrators. As legal scholar Andrew Guthrie Ferguson has written, there is tension between predictive policing and the legal requirement that police possess reasonable suspicion to make a stop. Moreover, predictive policing systems sometimes utilize information from social media to assess whether a person might be likely to engage in crime, which also raises free speech issues.

Technology cannot predict crime; it can only weaponize a person’s proximity to police action. An individual should not have their presumption of innocence eroded because a casual acquaintance, family member, or neighbor commits a crime. This opens members of already vulnerable populations to more police harassment, erodes trust between police and the communities they serve, and ultimately creates more danger. This has already happened in Chicago, where the police surveil and monitor the social media of crime victims—because being a victim of a crime is one of the many factors Chicago’s predictive algorithm uses to determine whether a person is at high risk of committing a crime themselves. 

What Can Be Done About It?

As the Santa Cruz ban suggests, cities are beginning to wise up to the dangers of predictive policing. As with the growing movement to ban government use of face recognition and other biometric surveillance, we should also seek bans on predictive policing. Across the country, from San Francisco to Boston, almost a dozen cities have banned police use of face recognition after recognizing its disproportionate impact on people of color, its tendency to falsely accuse people of crimes, its erosion of our presumption of innocence, and its ability to track our movements. 

Before predictive policing becomes even more widespread, cities should take advantage of the opportunity to protect the well-being of their residents by passing ordinances that ban the use of this technology or prevent departments from acquiring it in the first place. If your town has legislation like a Community Control Over Police Surveillance (CCOPS) ordinance, which requires elected officials to approve police purchase and use of surveillance equipment, the acquisition of predictive policing technology can be blocked while a full ban is pursued. 

The lessons of the short story and film Minority Report still apply, even in the age of big data: people are innocent until proven guilty. People should not be subject to harassment and surveillance because of their proximity to crime. For-profit software companies with secretive proprietary algorithms should not be creating black-box crystal balls exempt from public scrutiny and used without constraint by law enforcement. It’s not too late to put the genie of predictive policing back in the bottle, and that is exactly what we should be urging local, state, and federal leaders to do.

Matthew Guariglia