Global Coalition Calls on UK Foreign Secretary to Secure the Release of Salma al-Shehab

1 month 2 weeks ago

15 October 2022 

Dear Foreign Secretary, 

On behalf of the undersigned organisations, we would like to congratulate you on your appointment as Secretary of State for Foreign, Commonwealth and Development Affairs. At a time of significant global uncertainty and unrest, the UK can and must play a leading role in promoting human rights globally. While we appreciate the wide and diverse range of issues facing you and your department, we are contacting you today to draw your attention to the treatment of political prisoners in Saudi Arabia who have been imprisoned for expressing themselves.

The Specialized Criminal Court (SCC), established in 2008 to try those suspected of acts of terrorism, has instead handed down disproportionate sentences, including the death sentence, to people solely for expressing themselves online. Cloaked in the language of cybercrime, these prosecutions have effectively criminalised free expression and have also been brought to bear against individuals outside Saudi Arabia.

You will have heard about the shameful case of Saudi national Salma al-Shehab, who was a student at the University of Leeds at the time of her alleged ‘crimes’ – sharing content in support of prisoners of conscience and women human rights defenders, such as Loujain Alhathloul. For this, she was arrested upon her return to Saudi Arabia and held arbitrarily for nearly a year, before being sentenced to 34 years in prison with a subsequent 34-year travel ban. The fact that the sentence is four years longer than the maximum sentence suggested by the country’s anti-terror laws for activities such as supplying explosives or hijacking an aircraft demonstrates the egregious and dangerous standard established by both the SCC and the Saudi regime to restrict free expression. It also further illustrates the Saudi government’s abusive system of surveillance and infiltration of social media platforms to silence public dissent.

But the actions aimed at Salma al-Shehab did not happen in isolation. Her sentencing is the latest in a longstanding trend that has seen the Saudi judiciary and the state at large co-opted to target civil society and fundamental human rights. The same day that al-Shehab was sentenced, the SCC sentenced another woman, Nourah bint Saeed Al-Qahtani, to 45 years in prison for using social media to peacefully express her views. Ten Egyptian Nubians were sentenced to up to 18 years in prison after organising a symposium commemorating the 1973 Arab-Israeli war; following their arrest they were held incommunicado for two months, without access to their lawyers or families. Dr Lina al-Sharif was arbitrarily detained for over a year over her social media activism, after agents of the Presidency of State Security raided her family home and arrested her without a warrant. A further worrying dimension is the use of violence and torture to coerce confessions, as well as ongoing persecution and surveillance following a prisoner’s release, which further erodes the legitimacy of the SCC and its verdicts.

The UK’s close relationship with Saudi Arabia should not prevent you from upholding human rights commitments and calling out violations when they are brought to your attention – particularly when, as in the case of al-Shehab, they relate to the application of Saudi legislation to actions that took place within the territory of the United Kingdom. In fact, this relationship places you in a strong position to call for the release of all prisoners unlawfully held in Saudi Arabia without delay.

Acting decisively so early in your tenure would send a powerful signal, to allies and others alike, that the UK can be a trusted protector of human rights and the rule of law.

We await your action on this important issue and further support the calls to action outlined by over 400 academics, staff and research students from UK universities and colleges in a letter addressed to you and the Prime Minister.

If you require any further information, we would be happy to organise a briefing at a time that works best for you.

Kind regards,

Index on Censorship

ALQST For Human Rights

SANAD Organisation for Human Rights

CIVICUS 

Electronic Frontier Foundation

Gulf Centre for Human Rights (GCHR)

SMEX 

Vigilance for Democracy and the Civic State

Access Now

Human Rights Watch

PEN International

English PEN

Front Line Defenders

IFEX

Paige Collings

Stop the Persecution: Iranian Authorities Must Immediately Release Technologists and Digital Rights Defenders

1 month 3 weeks ago

Update, November 9, 2022: We are happy to announce that Aryan Eqbal has been released along with other digital rights defenders. Jadi Mirmirani remains wrongfully detained. We will continue to monitor the situation.

We, the undersigned human rights organizations, strongly condemn the Iranian authorities’ ruthless persecution, harassment, and arrest of technologists and digital rights defenders amid the deadly crackdown on nationwide protests, and demand their immediate and unconditional release. 

In an attempt to crush the popular uprising and further restrict internet activity and information flows, Iranian authorities are escalating their violent crackdown on people across Iran, and are now targeting internet experts and technologists. To date, Iranian authorities have arrested alarming numbers of tech engineers and network administrators who have been vocal about digital rights in Iran. Those detained have criticized internet restrictions, shown support for protests, or advocated for digital rights. We are concerned about the growing pressure on this community, including technology journalists and bloggers, and the suppression of their criticism of the authorities. Any attempts to investigate or bring transparency to issues of digital repression or the protests are being brutally stamped out. The world cannot allow the Islamic Republic of Iran to normalize this kind of persecution. The government must release these detainees at once.

Well-known technologists, digital rights defenders, and internet access experts have been targeted for arrest by the authorities since the beginning of the protests following the death in police custody of 22-year-old Iranian Kurdish woman Mahsa (Jhina) Amini.

On October 5, authorities arrested Amiremad (Jadi) Mirmirani—a blogger and one of Iran’s leading technologists and digital rights defenders. According to a family member on Instagram, authorities forcefully stormed into Mirmirani’s house and arbitrarily arrested him: “Today at 2 o'clock they rang the doorbell and said that we have a gas leak. When we went to the door, they attacked us. They entered with force, intimidation and threats of using tasers [stun guns] and firearms on us. They entered without a warrant and took Jadi away without any legal justification.” 

During these protests, Aryan Eqbal, another specialist in the field of technology and internet access, was detained and physically assaulted. Eqbal’s wife emphasized that her husband was not involved in any illegal activity that would warrant his arrest. She told Shargh newspaper: “Aryan's only concern has always been people's right to have free access to the Internet. And that has not been limited only to his own country, but for the whole world. He has only voiced opposition to the Protection Bill and disruptions and limits imposed on his people's access to the Internet, nothing else.”

Many of the technologists and digital rights defenders arrested have expressed opposition to the draconian User Protection Bill. Among the Bill’s most alarming elements are a policy of blocking all foreign services that refuse to cooperate with authorities, and the criminalization and disabling of circumvention technology (such as VPNs)—two policies that have defined the shape of internet restrictions during these protests. The Bill has been in the process of ratification in the Iranian parliament for over two years; however, due to widespread domestic and international criticism and opposition, its policies have instead been quietly implemented.

The policies and developments originating from this repressive Bill have facilitated disturbing new methods of internet disruption during protests. These include mobile network curfews that cut off access for the majority of internet users, who rely on mobile data. Additionally, we have seen concerted and sophisticated attacks to disable VPNs—severing the last lifeline to blocked foreign and secure internet services, including the widely used Instagram and WhatsApp (recently blocked during these protests).

Iran’s overarching digital repression during the ongoing nationwide protests is severe. The intense internet censorship, alongside partial and intermittent disruptions and shutdowns since September 16, is having an extreme impact on the free flow of information and documentation. These attacks on technologists are a frightening further escalation of the Islamic Republic of Iran’s ongoing assault on human rights, and they erode any hope for digital rights.

We are deeply alarmed by the violent and unrestrained crackdown and the unlawful use of lethal force against protesters and bystanders across Iran who do not pose an imminent threat of death or serious injury, alongside the violent arrest and arbitrary detention of digital rights and other human rights defenders and ongoing internet restrictions. Since the outbreak of nationwide protests three weeks ago, human rights groups have reported the killing of at least 201 protesters and bystanders, including at least 23 children. The death toll is believed to be higher. Iranian authorities have also arbitrarily arrested over 70 human rights defenders, in addition to at least 40 journalists and student activists, some already charged with “acting against national security.” The number of arrests is suspected to be in the thousands.

This latest targeting of technologists and digital rights defenders is a frightening sign that no voice or form of expression is being spared in this brutal crackdown.

The government of Iran must immediately release detained technologists and all those arbitrarily arrested for exercising their human rights and put an end to this violent protest repression—both online and offline. The Iranian authorities must be independently and criminally investigated for committing—with full impunity—serious crimes under international law and other grave violations of human rights. 

Signatories

Access Now

ARTICLE19

Electronic Frontier Foundation (EFF) 

Front Line Defenders



Jillian C. York

The Internet Is Not Facebook: Why Infrastructure Providers Should Stay Out of Content Policing

1 month 3 weeks ago

Cloudflare’s recent headline-making decision to refuse its services to KiwiFarms—a site notorious for allowing its users to wage harassment campaigns against trans people—is likely to lead to more calls for infrastructure companies to police online speech. Although EFF would shed no tears at the loss of KiwiFarms (which is still online as of this writing), Cloudflare’s decision re-raises fundamental, and still unanswered, questions about the role of such companies in shaping who can, and cannot, speak online.

The deplatforming followed a campaign demanding that Cloudflare boot the site from its services. At first the company refused, but then, just 48 hours later, Cloudflare removed KiwiFarms from its services and issued a statement outlining its justification for doing so.

While this recent incident serves as a particularly pointed example of the content-based interventions that infrastructure companies are increasingly making, it is hardly the first:

  • In 2017, GoDaddy, Google, and Cloudflare cut off services for the neo-Nazi site Daily Stormer after the site published a vitriolic article about Heather Heyer, the woman killed during the Charlottesville rally. Following the incident, Cloudflare CEO Matthew Prince famously stated: “Literally, I woke up in a bad mood and decided someone shouldn’t be allowed on the Internet. No one should have that power.” 
  • In 2018, Cloudflare preemptively denied services to Switter, a decentralized platform by and for sex workers to safely connect with and vet clients. Cloudflare blamed the decision on the company’s “attempts to understand FOSTA,” the anti-trafficking law that has had wide repercussions for sex workers and online sexual content more generally. 
  • In 2020, as Covid lockdowns made in-person events largely untenable, Zoom refused to support virtual events at three different universities, ostensibly because one of the speakers—Leila Khaled—participated in airplane hijackings fifty years ago and is associated with an organization the U.S. government has labeled as “terrorist”. The company had previously canceled services for activists in China and the United States organizing commemorations of the Tiananmen Square massacre, citing adherence to Chinese law. 
  • In 2022, during the early stages of Russia’s invasion of Ukraine, governments around the world pressured internet service providers to block state-sponsored content from Russian outlets, whilst Ukraine reached out to RIPE NCC, one of the five Regional Internet Registries (serving Europe, the Middle East and parts of Central Asia), asking the organization to revoke IP address delegation to Russia. 

These takedowns and demands raise thorny questions, particularly when providing services to one entity risks enabling harms to others. If it is not possible to intervene in a necessary and proportionate way, as required by international human rights standards, much less in a way that will be fully transparent to—or appealable by—users who rely on the internet to access information and organize, should providers voluntarily intervene at all? Should there be exceptions for emergency circumstances? How can we best identify and mitigate collateral damage, especially to less powerful communities? What happens when state actors demand similar interventions?

Spoiler: this post won’t answer all of those questions. But we noticed that many policymakers, at least, are trying to do so themselves without really understanding the variety of services that operate “beyond the platform.” And that, at least, is a problem we can address right now.

The Internet Is Not Facebook (or Twitter, or Discord, etc)

There are many services, mechanisms, and protocols that make up the internet as we know it. The most essential of these are what we call infrastructural, or infrastructure providers. We might think of infrastructural services as belonging to two camps: physical and logical. The physical infrastructure is the easiest to determine, such as underwater tubes, cables, servers, routers, internet exchange points (IXPs), and the like. These things make up the tangible backbone of the internet. It's easy to forget—and important to remember—that the internet is a physical thing.

The logical layer of internet infrastructure is where things get a little tricky. No one will contest that internet protocols (like HTTP/S, DNS, IP), internet service providers (ISPs), content delivery networks (CDNs), and certificate authorities (CAs) are all examples of necessary infrastructural services. ISPs provide people with access to the physical layer of the internet, internet protocols provide a coherent set of rules so that their computers can communicate effectively across the internet, and CDNs and CAs deliver the content and the certificate-backed assurance of authenticity that websites need in order to remain available to users. These are essential for platforms to exist and for people to interact with them online. This is why we advocate for content-neutrality from these services: they are essential to freedom of expression online and should not be empowered with editorial capability to decide what can and cannot exist online, above what the law already dictates.
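
To make concrete how much a single page load leans on these logical-layer services, here is a minimal Python sketch (standard library only, with example.com used purely as a placeholder host) that performs the DNS lookup a resolver would normally handle and then completes a TLS handshake whose trust depends entirely on a certificate authority:

```python
import socket
import ssl

HOST = "example.com"  # placeholder host; any HTTPS site works

# DNS: translate the human-readable name into routable IP addresses.
# In practice this step relies on resolvers run by your ISP or a third party.
addresses = {info[4][0] for info in socket.getaddrinfo(HOST, 443, proto=socket.IPPROTO_TCP)}
print("Resolved addresses:", addresses)

# TLS: connect and validate the site's certificate against the CA roots
# shipped with the operating system. If no trusted CA vouches for the site,
# the handshake fails and browsers will refuse to load the page.
context = ssl.create_default_context()
with socket.create_connection((HOST, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        cert = tls.getpeercert()
        print("TLS version:", tls.version())
        print("Certificate issued by:", dict(pair[0] for pair in cert["issuer"]))
```

If the resolver declines to answer or no certificate authority will vouch for the site, the page effectively disappears for ordinary users, which is why we treat these services as infrastructural rather than editorial.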

There are plenty of other services that work behind the scenes to make the internet work as expected. These services, like payment processors, analytics plugins, behavioral tracking mechanisms, and some cybersecurity tools, provide platforms financial viability and reveal a sort of gradient gray area between what we determine as essentially infrastructural versus not. Denying their services may have varying degrees of impact on a platform. Payment processors are essential for almost any website to collect money for their business or organization to stay online. On the other hand, one could argue that behavioral tracking mechanisms and advertising trackers also provide companies financial viability in competitive markets. We won’t argue that tracking tools are infrastructural. 

But when it comes to cybersecurity tools like DDoS protection through reverse proxy servers (what Cloudflare provided to KiwiFarms), it’s not so easy. A DDoS protection mechanism doesn’t make or break a site’s ability to appear online—it shields the site from potential attacks that could knock it offline. Also, unlike ISPs, CAs, or protocols, this type of cybersecurity tool isn’t a service closely guarded and defined by authoritative entities. It is something that anyone with technical expertise (no platform is guaranteed a right to good programmers) can accomplish. In the case of KiwiFarms, the site has transitioned to using a modified fork of a free and open source load balancer to protect against DDoS and other bot-driven attacks.
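
To illustrate the point that this layer of protection is not a closely guarded capability, here is a deliberately simplified Python sketch of the idea behind a filtering reverse proxy: it sits in front of an origin server, rejects clients that exceed a per-IP request budget, and forwards everything else. The upstream address and thresholds are invented for the example, and a production service like Cloudflare's does vastly more than this.

```python
import time
import urllib.request
from collections import defaultdict
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

UPSTREAM = "http://127.0.0.1:8080"   # hypothetical origin server being protected
WINDOW_SECONDS = 10                  # length of the rate-limit window
MAX_REQUESTS = 20                    # per-client request budget within the window

requests_seen = defaultdict(list)    # client IP -> timestamps of recent requests

class FilteringProxy(BaseHTTPRequestHandler):
    # Only GET is handled, to keep the sketch short.
    def do_GET(self):
        client_ip = self.client_address[0]
        now = time.time()
        # Keep only requests inside the current window, then check the budget.
        recent = [t for t in requests_seen[client_ip] if now - t < WINDOW_SECONDS]
        recent.append(now)
        requests_seen[client_ip] = recent
        if len(recent) > MAX_REQUESTS:
            self.send_error(429, "Too Many Requests")
            return
        # Forward the request to the origin and relay the response body.
        with urllib.request.urlopen(UPSTREAM + self.path) as upstream:
            body = upstream.read()
            self.send_response(upstream.status)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

if __name__ == "__main__":
    ThreadingHTTPServer(("0.0.0.0", 8000), FilteringProxy).serve_forever()
```

The sketch omits everything that makes commercial DDoS protection valuable at scale (global points of presence, traffic fingerprinting, caching), but it shows why the capability itself is reproducible rather than a chokepoint controlled by a handful of gatekeepers.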

Interventions Beyond Platforms Have Different Consequences

It's hard for infrastructure providers to create policies that uphold the requirements for content moderation as established by international human rights standards. And it's particularly challenging to create these policies and monitoring systems when individual rights appear to conflict with one another. And the consequences of their decisions vary significantly.

For example, it’s notable that far less ink was spilled by Cloudflare and by the tech press when the company made the decision to terminate service to Switter, in just one example of SESTA/FOSTA’s harmful consequences for sex workers. Yet it's these types of sites that are impacted the most. Platforms that are based outside the global north, or that have more users from marginalized communities, seldom have the same alternatives for infrastructure services—including security tools and server space—as well-resourced sites and even less-resourced online spaces based in the U.S. and Europe. For those users, policies that support less intervention, and the ability to communicate without being vulnerable to the whims of company executives, may be a better way to help people speak truth to power.

Online actions create real world harm—and that can happen in multiple directions. But infrastructure providers are rarely well-placed to evaluate that harm. They may also face conflicting requirements and demands based on the rules and values of the countries in which they operate. Cloudflare noted that previous interventions led to an increase in government takedown demands. 

We don’t have a simple solution to these complex problems, but we do have a suggestion. Given these pressures, the thorny questions they raise, and the importance of ensuring that users have the ability to speak up and express themselves without being vulnerable to the whims of company executives, providers that can’t answer those questions consistently should do their best to stay focused instead on their core mission: providing and improving reliable services so that others can build on them to debate, advocate, and organize. And policymakers should focus on helping ensure that internet policies support privacy, expression, and human rights.

 

Corynne McSherry

First Court in California Suppresses Evidence from Overbroad Geofence Warrant

1 month 3 weeks ago

A California trial court has held that a geofence warrant issued to the San Francisco Police Department violated the Fourth Amendment and California’s landmark electronic communications privacy law, CalECPA. The court suppressed evidence stemming from the warrant, becoming the first court in California to do so. EFF filed an amicus brief early on in the case, arguing that geofence warrants are unconstitutional.

The case is People v. Dawes and involved a 2018 burglary in a residential neighborhood. Private surveillance cameras recorded the burglary, but the suspects were difficult to identify from the footage. Police didn’t have a suspect, so they turned to a surveillance tool we’ve written quite a bit about—a geofence warrant.

Unlike traditional warrants for electronic records, a geofence warrant doesn’t start with a suspect or even an account; instead police request data on every device in a given geographic area during a designated time period, regardless of whether the device owner has any link at all to the crime under investigation. Google has said that for each warrant, it must search its entire database of users’ location history information. Geofence warrants are problematic because they allow police access to individuals' sensitive location data that can reveal private information about people's lives. Police have also used geofence warrants during public protests, threatening protesters' free speech rights.

Google has created a three-step process for responding to geofence warrants. First, it provides police with a list of de-identified device IDs for all devices in the area. In the second step, police may narrow the devices in which they’re interested and expand the geographic area or time period to see where those devices came from before or went to after the time of the crime. Finally, in the third step, police further narrow the devices in which they’re interested, and Google provides police those device IDs and full user account information. In general, police only seek one warrant to cover the entire process, which allows the police significant discretion in determining which devices to target for further information from Google.
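
For a sense of what step one of that process amounts to computationally, here is a hypothetical Python sketch that reduces a store of location records to de-identified device IDs observed inside a rectangular geofence during the requested window. The record format, field names, and toy anonymization are invented for illustration; Google has not published its actual data model or pipeline at this level of detail.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class LocationRecord:
    device_id: str       # account/device identifier held by the provider
    latitude: float
    longitude: float
    timestamp: datetime

def geofence_step_one(records, lat_min, lat_max, lon_min, lon_max, start, end):
    """Return (tokens_for_police, internal_mapping) for devices seen in the box/window."""
    hits = sorted({r.device_id for r in records
                   if lat_min <= r.latitude <= lat_max
                   and lon_min <= r.longitude <= lon_max
                   and start <= r.timestamp <= end})
    # Step one shares only opaque tokens; the provider keeps the mapping back
    # to real accounts until police narrow their list and reach step three.
    internal_mapping = {f"device-{i:04d}": device for i, device in enumerate(hits)}
    return list(internal_mapping), internal_mapping

if __name__ == "__main__":
    records = [
        LocationRecord("acct-123", 37.7750, -122.4195, datetime(2018, 12, 1, 14, 5)),
        LocationRecord("acct-456", 37.8000, -122.5000, datetime(2018, 12, 1, 14, 10)),
    ]
    tokens, _mapping = geofence_step_one(
        records,
        lat_min=37.7740, lat_max=37.7760,
        lon_min=-122.4200, lon_max=-122.4190,
        start=datetime(2018, 12, 1, 13, 30),
        end=datetime(2018, 12, 1, 16, 0),
    )
    print(tokens)  # ['device-0000'] -- only the first record falls inside the geofence
```

Steps two and three then widen the search for whichever tokens police select and finally unmask the chosen identifiers into full account information, which is why courts have focused on how much discretion a single warrant leaves to officers.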

The data Google provides to police in response to a geofence warrant is very precise. It allows Google to infer where a user has been, what they were doing at the time, and the path they took to get there. Google can even determine a user’s elevation and establish what floor of a building that user may have been on. As another court noted in reviewing a geofence warrant last summer, “Location History appears to be the most sweeping, granular, and comprehensive tool—to a significant degree—when it comes to collecting and storing location data.”

However, in that same case, expert witnesses testified that, despite this claimed precision, Google’s data may not be all that accurate. It may place a device inside the geofenced area when it was, in fact, hundreds of feet away, and vice versa. This creates the possibility of both false positives and false negatives—people could be implicated in the robbery when they were nowhere near the bank, or the actual perpetrator might not show up at all in the data Google provides to police.

In Dawes, the court rejected the SFPD’s geofence warrant. The court held the defendant had a reasonable expectation of privacy in his locational data under CalECPA (the California Electronic Communications Privacy Act, which governs warrant requirements for state law enforcement accessing electronic information) and that the warrant did not satisfy the probable cause and particularity requirements of the Fourth Amendment. Although the court found the time period requested by SFPD—2.5 hours—was reasonable given the evidence, it held the warrant was overbroad because the size of the designated geographic area—which covered the burgled home and the entire street traveled by the suspect—was too large. The court stated: “[t]his deficiency in the warrant is critical because the geofence intruded upon a residential neighborhood and included, within the geofence, innocent people's homes who were not suspected to have any involvement in the burglary, either as a suspect, victim or witness.”

The court also held the warrant violated the Fourth Amendment because it failed to require police to come back to the court for a new warrant at each step of the process, instead providing officers with unbridled discretion to determine who to target for further investigation. Ultimately, the court suppressed the evidence under CalECPA.

The Dawes ruling is similar to those of several other courts outside California that have issued public opinions weighing in on geofence warrants and finding most to be unconstitutional. Those courts held the warrants were overbroad because police can’t establish probable cause to believe all Google users in an area are somehow linked to the crime under investigation. These courts have also held the three-step process provides officers with too much discretion, in violation of the Fourth Amendment.

However, the Dawes court was very clear that its ruling was narrow. It did not hold that “a geofence search warrant can never pass Fourth Amendment muster, rather, this specific geofence search warrant was not sufficiently particular and was overly broad.” While this is disappointing—EFF believes all geofence warrants, by their very nature, are unconstitutional general warrants—the ruling does place important limits on future police use of these warrants. Not only will San Francisco police now be required to ensure the scope of their warrants is extremely narrow, but officers must also go back to the court for a new warrant at each step of the geofence process. This is at least a step in the right direction.

Jennifer Lynch

Digital Rights Updates with EFFector 34.5

1 month 4 weeks ago

Want the latest news on your digital rights? Well, you're in luck! Version 34, issue 5 of our EFFector newsletter is out now. Catch up on the latest EFF news by reading our newsletter or listening to the audio version below. This issue covers EFF's current work, including our investigation into Fog Data Science, our guide to better privacy practices for nonprofit organizations, and our video and essay on how an interoperable Facebook of the future could work.

Listen on YouTube: EFFector 34.5 - How this data broker is selling mass surveillance to local police

Make sure you never miss an issue by signing up by email to receive EFFector as soon as it's posted! Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero

Copyright Trolls Target Users in Brazil, Threatening Due Process and Data Protection Rights. Civil Society Groups Are There to Help

1 month 4 weeks ago

Copyright trolls typically don’t produce or distribute content, but instead make money off of copyrighted material by using the threat of litigation to shake down people who allegedly download movies and other content over the internet—a business model that invites harassment and abuse. These entities operate in many countries around the world, and have recently cropped up in Brazil, with predatory practices that threaten the due process, privacy, and data protection rights of thousands of internet users.

Fortunately, civil society is on the case, raising awareness, providing guidance, and fighting back on behalf of Brazilians who are being targeted by this problematic business practice.

Brazil’s copyright trolls are using the same playbook as trolls EFF fights against in the U.S. It goes something like this: entities claiming to represent rightsholders such as movie studios find ways to obtain IP addresses that allegedly shared movies online using BitTorrent or other file-sharing systems. The trolls, usually a law firm or “rights management” company, amass large numbers of IP addresses, then file lawsuits asking courts to require internet service providers (ISPs) to disclose the personal identifying information associated with the internet subscribers behind those addresses.

Now armed with names and addresses, copyright trolls send notifications accusing those people of copyright infringement and demanding they pay a settlement to avoid costly lawsuits. Users who aren’t aware of their rights and don’t know how to defend themselves against such threats will pay—the troll profits and moves on to the next set of victims.

There’s a lot wrong with this scheme. Copyright trolls weaponize the courts, filing lawsuits with little evidence and little regard for accuracy. Some name dozens of defendants in a single case, making individual justice all but impossible. They typically target broadband subscribers, who may not have downloaded the copyrighted material. And, as at least one egregious example in Brazil shows, they can lead to the exposure of thousands of people's personal information in violation of privacy and data protection safeguards.

Law firms in Brazil representing U.S. film studios have in recent years targeted thousands of people for alleged use of BitTorrent to download movies online. These people receive notices claiming they have infringed Brazil’s copyright law and offering them a way out: pay the law firm a few thousand reals (about $400) to avoid being sued and racking up hefty legal fees. One firm reported sending more than 60,000 notices. Even if only 15% of those notified decided to pay, the troll would still make millions.
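
A back-of-the-envelope calculation using only the figures above shows why the scheme pays off even when most recipients ignore the notices (the 15% payment rate is the hypothetical from this paragraph, not a measured figure):

```python
# Illustrative estimate based on the figures cited above.
notices_sent = 60_000     # notices reportedly sent by one firm
payment_rate = 0.15       # hypothetical share of recipients who pay
settlement_usd = 400      # approximate settlement demanded per notice

estimated_revenue = notices_sent * payment_rate * settlement_usd
print(f"Estimated revenue: ${estimated_revenue:,.0f}")  # roughly $3.6 million
```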

Like trolls elsewhere, trolls in Brazil usually get access to users’ identification, like names and addresses, by obtaining court orders requiring telecommunications companies to disclose subscriber information associated with IP addresses. Regulators and courts often consider subscriber information less sensitive than other categories of data based on the problematic assumption that it doesn’t reveal intimate details of people’s lives and habits. But we know that subscriber information is exactly the kind of information needed to identify users’ internet browsing and communications. Assessing a request for data should not be any less rigorous just because it seeks subscriber data, and should be made on the basis of whether it's necessary and proportionate.

In late 2020, telecom operator Claro, one of the major ISPs in Brazil, turned over the personal information of more than 70,000 subscribers to the law firm Kasznar Leonardo Advogados. The firm, representing a UK company that was tracking non-authorized copies of three films for American production company Millennium Media, had filed a lawsuit, and Claro disclosed the data after receiving a court order.

Shockingly, the data was available on spreadsheets hosted on a Google Drive with unrestricted access. Anyone with credentials to access the court’s electronic system could view the spreadsheets. More than 70,300 entries containing names and addresses of customers from all over Brazil, as well as details about when and how the movies were downloaded, were exposed. Civil society groups following the case, like Creative Commons Brasil, Coalizão Direitos na Rede, IDEC, and Partido Pirata, pointed out that, because the personal information was disclosed in this manner, it would be difficult to verify that any notices sent to users actually came from Kasznar Leonardo Advogados.

Using the courts to make ISPs turn over subscriber data for the purpose of sending demand letters to tens of thousands of people is highly problematic. Fortunately, some courts and ISPs are taking a stand against copyright trolls. The EU Court of Justice ruled in 2021 that courts must reject a request for information based on copyright infringements if it is unjustified or disproportionate. In 2018, two Danish ISPs won a lawsuit they brought to prevent the identities of their subscribers from being handed over to copyright trolls. Meanwhile, in Brazil, UP Tecnologia, a small ISP, has also taken a stand by notifying users about requests for their data from a company representing U.S. film studios. This gives users the opportunity to defend themselves and take steps to protect their rights.

But more often than not, ISPs turn over subscriber data without a fight.

Brazilian users enjoy certain legal protections against these abuses, and Brazilian courts and ISPs should take them into account. Copyright trolls cite Brazil’s Copyright Law and Article 184 of the country’s penal code in their shakedown letters to scare users and lure them into paying settlements. But Brazilian courts have adopted a restricted interpretation of when copyright infringement is a crime. Criminal prosecutions mainly target people seeking to profit from copyright infringement by selling protected content, not people downloading movies for personal use or sharing for free.

Brazil also has robust data protection safeguards, thanks to the country’s comprehensive data protection law. Courts should abide by its principles when assessing personal data requests, and ISPs must comply with its rules when disclosing user data, including by adopting proper security measures.

The legal framework is there to make it less comfortable for copyright trolls to abusively extract copyright settlements in Brazil. The courts and ISPs should do their part to protect users from these predatory practices. Meanwhile, a group of lawyers has formed to provide free legal assistance to individuals in Brazil who may be sued after receiving a demand letter. The group can be contacted at copyrighttrolls@partidopirata.org.

Karen Gullo

Derechos Digitales Raises the Bar for Chilean ISPs' Privacy Commitments in New Report

1 month 4 weeks ago

Chile’s internet service providers (ISPs) have over the last five years improved transparency about how they protect their users’ data, thanks in large part to Latin American digital rights group Derechos Digitales shining a light on their practices through annual ¿Quien Defiende Tus Datos? (Who Defends Your Data?) reports.

Better transparency about when and how ISPs turn data over to the government is a win for Chile’s mobile and internet users, but increased state surveillance demands an even greater commitment to privacy. In Derechos Digitales’ new 2022 ¿Quien Defiende Tus Datos? report, Chile’s six top telecom providers are assessed against new, tougher criteria that look at their practices amid increased concerns over state surveillance related to the social protests of 2019 and the COVID-19 pandemic.

There’s plenty of good news in the report. Even with stricter criteria, Claro, WOM, and VTR received higher scores compared to last year, with Claro earning full credit in all categories and WOM earning full credit in three out of five categories. Another highlight: all companies evaluated received at least partial credit in every category except user notification—an improvement over 2021 results. Nonetheless, user notification remains a challenging category. Entel, GTD Manquehue, Movistar, and VTR failed to take concrete steps to set up a notification system for their users. While many of them reserve the possibility or right to notify users in their policies, they did not take more concrete actions or make commitments in that direction. As such, Derechos Digitales didn't give them credit in that category.

Companies Must Do More With State Surveillance On the Rise

Chile’s telecom companies have met many of the challenges imposed by ¿Quien Defiende Tus Datos?  annual assessments, which started in 2017, and implemented best practices in most categories covered by the reports. Certain transparency practices that once seemed unusual in Latin America have become the default among ISPs in Chile. For example, both transparency reports and law enforcement guidelines have become an industry norm among Chile’s main ISPs.

But companies need to do more to protect user data. The new criteria raise the bar on best practices, taking into account new privacy challenges and the vastly magnified role digital technologies play in our lives compared to 1999, when Chile enacted its existing data protection law (Law No. 19,628). Transparency and data privacy protections must go beyond what was required 23 years ago.

Report Highlights:

Derechos Digitales set out to raise the standard of evaluations conducted in 2021. In this fifth edition, the report sought to answer the following questions:

  • Do ISPs’ contractual clauses and privacy policy provisions reflect a company's commitment to respecting and protecting users' rights? 

New in this category are requirements that ISPs disclose instances where third parties process user data and which protection measures they adopt in such cases. New requirements also check whether companies detail how they use and store user data, including whether it is shared or processed abroad. Finally, companies must commit to notifying users about changes in their policies and to making previous versions available to the public. Claro received full credit in this category, and the other five ISPs received 75 percent credit. 

  • Do companies have an updated transparency report that provides quality information? 

To receive a full star, ISPs’ transparency reports must include more information than previously required, breaking down requests originating from court orders, requests that refer to a particular individual, and massive requests that refer to an undetermined group of people (in general, asking for information about all cellular telephones connected to a given antenna during a given period of time), among other new criteria.

Claro and WOM received full stars for their transparency reports (available here and here, respectively). Claro’s reports stand out for providing greater detail on the reasons for rejected requests, broken down by interception and user information/metadata requests. In the first quarter of 2022, rejection in most interception cases occurred because of errors in the requests. As for other user information demands, over half of refusals happened because the police request failed to copy the prosecutor in charge of the investigation, while in 19% of cases, requests came without a judicial order.

Claro, VTR, and WOM transparency reports also included information about requests seeking data about a large number of undetermined users, such as those  whose mobile phones randomly connected to a cell tower. VTR reported receiving no requests of this nature between July 2021 and June 2022. Claro reported only one request during the first quarter of 2022. In turn, WOM reported receiving 429 cell tower data requests during 2021. Although the time periods differ, the discrepancy in numbers is striking. Additional data could help users understand the variation, considering that often law enforcement authorities don't pick just one ISP to send cell tower data requests, but reach out to all relevant telcos with towers in a given geographical area of interest.

  • Do ISPs notify their users about requests for access to their personal information by the authority or, at least, have made concrete efforts to do so? 

To earn credit this year, companies must set up a notification procedure or make concrete and verifiable efforts to put them in place.

WOM was the only ISP to earn credit (75 percent) in this category besides Claro, which received full credit. WOM disclosed a statement about its efforts in 2019 and 2020 to work with authorities to establish a user notification mechanism in criminal cases (the efforts were included in the 2020 report so they didn’t count in the new report). New in the 2021 report is WOM’s commitment to notify users, as of January, about information requests in civil, labor, and family cases. Claro was the first ISP to abide by this commitment, which EFF highlighted in Chile’s 2019 report.

Derechos Digitales' report notes Claro’s efforts in 2019, 2020, and 2022 advocating for user notification, including carrying out actions both in Congress and before the Public Prosecutor's Office demonstrating its concern for finding a way to put in place a notification procedure that adheres to the notice right enshrined in Article 224 of the Code of Criminal Procedure. A particular highlight: in May, Claro  urged the Public Prosecutor's Office to again consider notifying people subject to interception or personal data requests, emphasizing that the possibility of implementing a pilot plan has been raised to the Prosecutor's Office.

  • Do ISPs have a public guide about law enforcement requests for user data that specifies the procedure, requirements, and legal obligations that must be fulfilled?  

Companies now must make explicit the obligation to notify users affected by an intrusive investigative measure, according to Article 224 of Chile’s Criminal Procedure Code. ISPs must also state that requests for user data involving sensitive information, like location data, must refer to specific individuals and have a previous judicial order. If requests relate to the development of public policies, ISPs must commit to hand over only anonymized and aggregate data to the competent authority. Claro and WOM received full credit in this category; the other four ISPs received 75 percent credit.

  • Have ISPs actively defended privacy and protected users' data, either publicly, in judicial or administrative proceedings, or in a legislative discussion in Congress?

Examples of opportunities in which companies could have spoken out include the progress of bills promoting public surveillance and expanding the conditions under which intrusive investigative measures could be taken, and state spying cases, such as the tapping of Chilean journalist Mauricio Weibel’s  phone.

Claro stands out again in this category. The report notes that Claro reached out to Chile’s Senator Jorge Pizarro, expressing concern about a bill to modify Chile’s data protection law. Specifically, Claro expressed concern about information requests from public bodies and suggested that standards and controls for personal data protection that apply to companies should also apply to the State Administration. Further, it suggested establishing preventive controls and having compliance officers for each public service.

Claro, GTD Manquehue, and WOM scored in this category for challenging massive requests for user information from Subtel, Chile’s telecommunications regulatory agency. According to examples provided in the report, Subtel sought the information to carry out research related to the use of roaming services and to conduct a satisfaction survey with broadband users. On the latter, the companies argued that the sample of users Subtel required for the survey failed to consider the principle of proportionality.

Karen Gullo

A National Lab Is Promoting a "Digital Police Officer" Fantasy for Law Enforcement and Border Control

1 month 4 weeks ago

Researchers at a national laboratory are forecasting a future where police and border agents are assisted by artificial intelligence, not as a software tool but as an autonomous partner capable of taking the steering wheel during pursuits and scouring social media to target people for closer investigation. The "Digital Police Officer" or "D-PO" is presented as a visionary concept, but the proposal reads like a pitch for the most dystopian buddy cop movie ever.

The research team is based out of Pacific Northwest National Laboratory (PNNL), a facility managed by the corporation Battelle on behalf of the U.S. Department of Energy. They have commissioned concept art and published articles in magazines aimed at law enforcement leaders, EFF has learned through a review of materials, including records obtained through a Freedom of Information Act request.

"To leverage the full power of artificial intelligence, we need to know how people can best interact with it," they write in a slide deck that starts with a robot hand and a human hand drawing each other in the style of the famous M.C. Escher artwork. "We need to design computing systems that are not simply tools we use, but teammates that we work alongside."

For years, civil liberties groups have warned about the threats emerging from increased reliance by law enforcement on automated technologies, such as face recognition and "predictive policing" systems. In recent years, we've also called attention to the problems inherent in autonomous police robots, such as the pickle-shaped Knightscope security patrol robots and the quadrupedal "dog" robots that the U.S. Department of Homeland Security wants to deploy along the U.S.-Mexico border.

The PNNL team's vision for "human-machine teaming" goes so much further.

"AI plays an active role in the mission by learning from the human and its environment," the researchers write in a slide defining the term. "It uses this knowledge to help guide the team without requiring specific direction from the human."

The Digital Police Officer

In articles published in Police Chief, the official magazine of the International Association of Chiefs of Police, and Domestic Preparedness Journal, the researchers introduce a fictional duo named Officer Miller and her electronic sidekick, D-PO (an apparent play on C-3PO), who've been patrolling the streets together for five years.

Here's what they would look like, according to concept art commissioned by PNNL:

(Miller is technically a paramedic in this image, but this was used to illustrate the police officer narrative in both publications.)

And here's another piece of PNNL art from a presentation EFF received in response to a FOIA request:

PNNL's fictional narrative begins with D-PO keeping tabs on the various neighborhoods on their beat and feeding summaries of activity to Officer Miller, as it does every day. Then they get an alert of a robbery in progress. The PNNL researchers imagine a kitchen-sink technological response, tapping drones, face recognition, self-driving vehicle technology, and algorithmic prediction: 

While Officer Miller drives to the site of the robbery, D-PO monitors camera footage from an autonomous police drone circling the scene of the crime. Next, D-PO uses its deep learning image recognition to detect an individual matching the suspect’s description. D-PO reports to Officer Miller that it has a high-confidence match and requests to take over driving so the officer can study the video footage. The officer accepts the request, and D-PO shares the video footage of the possible suspect on the patrol car’s display. D-PO has highlighted the features on the video and explains the features that led to its high-confidence rating.

“Do you want to attempt to apprehend this person?” D-PO asks.

Obviously Officer Miller does. 

As they drive to the scene, the officer talks to D-PO the way she would with a human partner: “What are my best options for apprehending this guy?” Officer Miller asks.

D-PO processes the question along with the context of the situation. It knows that by “this guy” the officer is referring to the possible suspect. D-PO quickly tells Officer Miller about three options for apprehending the suspect including a risk assessment for each one…

D-PO’s brief auditory description is not enough for the officer to make a decision. Despite Officer Miller’s usual preference to drive, she needs her digital partner to take the wheel while she studies the various options.

“Take over,” she tells D-PO.

All this action sequence is missing is Officer Miller telling D-PO to blast Mötley Crüe's "Kickstart my Heart."

The authors leave the reader to conclude what happens next. If you buy into the fantasy, you might imagine this narrative ending in a perfect apprehension, where no one is hurt and everyone receives a medal–even the digital teammate. But for those who examine the intersection of policing and technology, there are a wide number of tragic endings, from mistaken identity that gets an innocent person pulled into the criminal justice system to a preventable police shooting–one that ends in zero accountability, because Officer Miller is able to blame an un-punishable algorithm for making a faulty recommendation.

EFF filed a Freedom of Information Act request this spring with PNNL to learn more about this program, how far along it is, and whether any local law enforcement have expressed interest in it.

The good news is that in the emails we obtained, one of the authors acknowledges that elements like a D-PO taking over driving are a "long way off" and that monitoring live drone feeds is "not a near-term capability." Only one agency wrote to the PNNL email address included at the end of the Police Chief magazine article: the Alliance Police Department in Nebraska (pop. 8,150).

"We are implementing an artificial intelligence program to include cameras around the city, ALPR's and drones," Chief Philip Lunkens wrote. "Any way we can work together or try things I am very open to [sic]. Please let me know your thoughts and how I can help. Thanks for what you are doing."

The bad news is that the FOIA documents also include a concept for how this technology could be combined with augmented reality for policing U.S. borders–and that might be a lot closer to realization. 

The Border Inspections Teammate System

The PNNL researchers' slides include a section designed specifically to entice Customs & Border Protection to integrate similar technologies into its process for screening vehicles at ports of entry.

CBP is infamous for investing in experimental technologies in the name of border security, from surveillance blimps to autonomous surveillance towers. In the PNNL scenario, the Border Inspections Teammate System (BITS) would be a self-directed artificial intelligence that communicates with checkpoint inspectors via an augmented reality (AR) headset.

(PNNL did not respond to our request for a more legible scan of the slide.)

This concept is also presented as a tech-thriller narrative. A couple of CBP officers have stopped a truck at the border. While the officers inspect the vehicle and grill the driver, BITS busily combs through an array of databases "maintained by several agencies involved in interstate commerce, homeland security, federal and state commercial truck enforcement and others." BITS also scans through video recorded at weigh stations and analyzes traffic and weather data along the truck's route. BITS concludes that the driver may be lying about his route and recommends a deeper level of scrutiny.

Of course, the border agents accept the recommendation. They break out advanced, hand-held scanners to probe the vehicle, while BITS compares the scans in real-time against data collected from thousands of other previous scans. BITS tells the officers that the driver is carrying crates that look similar to other crates containing blister packs of narcotics.

Finally, BITS scans the driver's online presence and determines, "the driver's social media activity shows a link to other suspects of similar activity."

The scenario concludes with the CBP officers detaining the driver. The researchers again leave the conclusion open-ended. Maybe they find illegal narcotics in the back of the truck–or maybe it was all computer error and the driver loses his job, because his perishable freight didn't arrive on time.

The records EFF received do not indicate any official interest from CBP or the Department of Homeland Security. However, BITS may not be as far off in the future as the D-PO. CBP has been experimenting with AR since at least 2018 by using HoloLens headsets to inspect goods for intellectual property violations.

Meanwhile, an "artificial intelligence expert" at San Diego State University is developing a technology that sounds similar to (if not more alarming than) BITS. The project contemplates "helping DHS 'see' terrorists at the border" with HoloLens headsets that "would add custom-built algorithms to place everything a border agent needs to know in a line of sight for faster, more thorough operations."

This system builds off the SDSU expert's earlier DHS-funded project: a kiosk-based system called the "Automated Virtual Agent for Truth Assessments in Real Time" (AVATAR) that was tested around 2011 at border crossings.  According to an SDSU promotional article, AVATAR was "designed initially for border and airport security" and researchers claimed it could "tell if the person being interviewed might be providing deceptive answers based on information transmitted via behavioral sensors in the kiosk," such as facial expressions and voice inflections. All of these technologies have the potential for grave error as well as racial bias.

As dazzling as this technology might be to officials working in the highly politicized realm of border security, it sets off a flashing red alarm for civil liberties advocates who have been long tracking the abuse and violations of the rights of travelers at ports of entry.

No More Tech Fantasies

One of the problems with modern policing is the adoption of unproven technologies, often based on miraculous but implausible narratives promoted by tech developers and marketers, without contemplating the damage they might cause.

The PNNL researchers present D-PO as a solution that they proudly acknowledge sounds like it was pulled from "a science fiction novel." But what they fail to remember is that science fiction is a cautionary genre, one designed to help readers–and the world–imagine the worst case scenarios. Hal murdered the crew in 2001: A Space Odyssey. The precogs of Minority Report made mistakes. The Terminator's Skynet nearly wiped out the human race.

Society would be better served if the PNNL team used their collective imagination to explore the dangers of new policing technologies so we can avoid the pitfalls, not jetpack right into them.

Dave Maass

Court’s Decision Upholding Disastrous Texas Social Media Law Puts The State, Rather Than Internet Users, in Control of Everyone’s Speech Online

2 months ago

The First Amendment and the freedom of speech and expression it provides has helped make the internet what it is today: a place for diverse communities, support networks, and forums of all stripes to share information and connect people. Individuals and groups exercise their constitutional right to host and moderate sites that offer a common place for people who share a hobby, a religious belief, a political opinion, or a love for a particular kind of music.

Online platforms, from Facebook to your blog, have the right to decide what speech they publish and how they publish it. In that way, online platforms are no different from newspapers or parade organizers.

A federal appeals court in Louisiana, ruling last month in NetChoice v. Paxton, dealt a staggering blow to this bedrock principle of free speech online. The U.S. Court of Appeals for the Fifth Circuit upheld an unconstitutional and disastrous Texas law that creates liability for social media platforms’ moderation decisions, essentially requiring that they distribute speech they do not want to host. Texas HB 20 restricts large platforms from removing or moderating content based on the viewpoint of the user. The law was created and passed to retaliate against social platforms that allegedly “silence conservative viewpoints and ideas,” despite there being no evidence that large platforms’ moderation decisions are biased against conservative viewpoints.

Tech industry groups NetChoice and the Computer and Communications Industry Association (CCIA) challenged the law in court. EFF filed amicus briefs in the district court, the court of appeals, and the Supreme Court, arguing that while internet users are sometimes justifiably frustrated by social media platforms’ content moderation decisions, they nevertheless are best served when the First Amendment protects those decisions. That First Amendment right helps the internet grow and provide diverse forums for speech.

After a district court preliminarily blocked the law, Texas appealed to the Fifth Circuit, which found that HB 20 doesn’t violate platforms’ First Amendment rights. The court ruled that services do not have a constitutional right to engage in content moderation—instead, the court called platforms’ moderation and curation of content on their sites “censorship.” Large platforms that want to moderate user speech in violation of HB 20 have “an armada of attorneys” to defend them in court, the Fifth Circuit said. The law allows individuals and the state attorney general to sue platforms over content moderation and get reimbursed for their attorney’s fees if they win.

This is an extraordinarily dangerous turn for internet freedom, and the right of people with diverse opinions—that may be unpopular or aggravating to others—to speak freely online. The Fifth Circuit’s ruling is deeply problematic on many levels, including its failure to recognize how Congress, in enacting 47 U.S.C. § 230, has already preempted state censorship laws like HB 20. This post, however, focuses on the terrible implications that the ruling has for online speech.

The logic of the Fifth Circuit’s ruling has damaging implications for every service hosting user-generated speech, not just the largest platforms like Facebook and YouTube. While HB 20 only applies to platforms with more than 50 million users, the court’s holding that the First Amendment does not protect online content moderation can easily be applied beyond them. In the Fifth Circuit, which covers Louisiana, Mississippi, and Texas, this unprecedented scaling back of free speech endangers smaller, less powerful, and less wealthy services. Many small and medium-sized online services, described in our amicus briefs against HB 20, moderate content to serve particular communities, topics, or viewpoints.

The effects cannot be overstated—HB 20 and laws like it will destroy many online communities that rely on moderation and curation and cannot afford to fight the onslaught of lawsuits that the Fifth Circuit invites. Platforms and users may not want to see certain kinds of content and speech that is legal but still offensive or irrelevant to them. Rejecting such content or even deprioritizing it in a feed would come with a ruinously high price tag.

For example, the Fifth Circuit’s holding could allow laws that require sites supporting people suffering from chronic fatigue syndrome to post comments from people who don’t believe this ailment is a real disease. Sites promoting open carry gun rights that disallow comments critical of gun rights would violate such laws. A site dedicated to remembering locals whose families were affected by the Holocaust could be forced to allow comments by Holocaust deniers. Platforms unable to withstand an attack of harassing comments from trolls could be forced offline altogether.

The Fifth Circuit’s decision allows concerns about private censorship to serve as the basis for government control of speech. Whatever your political views, we hope you recognize the danger of the Fifth Circuit’s decision, because it fundamentally alters our ability to decide for ourselves the types of speech and views we want to see and associate with, including our right to exclude others or ourselves from speech we don’t like. Community-led and diverse forums dedicated to particular topics and for particular people with specific views—which is nearly all forums—are now potentially under the thumb of the state, which could force them to serve its interests by calling the removal of opposing views “censorship.”

There’s something for everyone on the internet, and that’s how it should be. Of course, it’s true that moderation decisions by large platforms can silence legitimate speech and stifle debate online. But as EFF has repeatedly argued, the way to address the concentration of a handful of large services is by reducing their power and giving consumers more choices. This includes renewed antitrust reforms, allowing interoperability, and taking other steps to increase competition between services.

These efforts would allow people who don’t like the viewpoints expressed on one site to move to another and keep their social networks, while increasing the number of platforms that host speech that reflects their views and interests.

Unfortunately, the Fifth Circuit’s decision is likely to result in fewer sites for users to choose from and will likely do very little to alter or diminish the dominance of the platforms. This is because, as the Fifth Circuit observes, the largest services have immense legal resources to fight the lawsuits permitted by HB 20. They will survive, while other smaller sites targeted by new laws similar to HB 20 will not.

Government should not have the power to tell websites what opinions they must host, and we hope that the Supreme Court will strike down this disastrous law and reject the Fifth Circuit’s dangerous logic that undermines the First Amendment rights of online services and their users.

 

Karen Gullo

Snowflake Makes It Easy For Anyone to Fight Censorship

2 months ago

Tor, the onion router, remains one of the most effective censorship circumvention technologies. Millions of people use the Tor network every day to access the internet without fear of surveillance and censorship.

Most people get on the Tor network by downloading the Tor Browser and connecting to a relay. But some countries, such as Iran and Russia, block direct access to the Tor network. In those countries people have to use what are known as “Tor Bridges” to circumvent national firewalls. Tens of thousands of people use bridges regularly to circumvent censorship and national or regional restrictions. 

The number of bridge users in Iran grew exponentially in the last week of September 2022.

Of course, ISPs in countries where Tor is banned are constantly trying to find the IP addresses of bridges and block them to prevent people from accessing Tor. Bridge connections can also be identified (or “fingerprinted”) as connections to the Tor network by an ISP using deep packet inspection. To deal with this, Tor has a clever solution called “pluggable transports.” Pluggable transports disguise your Tor connection as ordinary traffic to a well-known web service such as Google or Skype, smuggling it inside the seemingly innocuous traffic.

In the past, setting up a pluggable transport was difficult, requiring a server and a good deal of time and technical knowledge. Now, thanks to a new pluggable transport called “Snowflake,” anyone can run a pluggable transport in their browser with just a couple of clicks and help people all over the world access the unrestricted internet.

If you are ready to get started, you can install the Snowflake browser add-on, or, if you run a server, you can run the standalone version written in Go.

The user interface for the Snowflake browser extension

Logs from a standalone snowflake instance running on a server

How Snowflake Works

Snowflake is composed of three components: volunteers running Snowflake proxies, Tor users (or clients) that want to connect to the internet, and a broker that delivers Snowflake proxies to clients. Volunteers willing to help users on censored networks can help by spinning up short-lived proxies in their regular browsers. When you enable Snowflake, your browser will contact the broker and let it know that you are ready to accept peer-to-peer connections from people seeking to access Tor. Then a client on a restricted network can contact the broker and ask for a proxy; the broker will eventually hand the client your IP address, and the client will make a direct connection to your computer using WebRTC (the same technology used by Zoom, Skype, and other peer-to-peer web applications). Your computer will then forward traffic from the client to the Tor network.
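
To make the matchmaking step concrete, here is a minimal, illustrative sketch in Python of the broker's role as described above. It is not Snowflake's actual code (the real proxies are browser extensions or a Go program, and clients connect over WebRTC); the class and the addresses below are hypothetical.

    import random

    class Broker:
        """Toy stand-in for the Snowflake broker: it only pairs clients with volunteers."""
        def __init__(self):
            self.available_proxies = []  # volunteers currently offering to help

        def register_proxy(self, proxy_address):
            # A volunteer's browser (or standalone proxy) announces it can
            # accept a peer-to-peer connection from a censored client.
            self.available_proxies.append(proxy_address)

        def request_proxy(self):
            # A censored client asks for help; the broker hands back one
            # volunteer, whom the client then contacts directly over WebRTC.
            if not self.available_proxies:
                return None
            return random.choice(self.available_proxies)

    broker = Broker()
    broker.register_proxy("203.0.113.7")    # volunteer running the browser add-on
    broker.register_proxy("198.51.100.42")  # volunteer running the standalone proxy
    print("client should connect to:", broker.request_proxy())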

A visual diagram of Snowflake

The obvious weak point here is the broker server. Why couldn’t a country just block the broker IP since it is well-known? The answer is a technique called “domain fronting.” The details of domain fronting can be found elsewhere, but in brief, domain fronting lets the client make a request that looks like an ordinary web request for google.com, while, thanks to HTTPS, the request hides its “Host” header, which actually points to an arbitrary web service hosted on Google’s cloud. In this case, that service is the Snowflake broker.
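
As a rough sketch of the idea, the request below is addressed (at the TLS layer, which is all a network observer can see) to a popular front domain, while the hidden Host header names the real backend. This only illustrates the technique, not Tor's implementation: "snowflake-broker.example" is a hypothetical hostname, and many cloud providers have since restricted domain fronting.

    import requests

    FRONT_URL = "https://www.google.com/"     # what the network observer sees (TLS SNI)
    HIDDEN_HOST = "snowflake-broker.example"  # hypothetical backend, named only inside TLS

    # The censor sees an ordinary HTTPS connection to the front domain; the
    # Host header, which identifies the real service, travels encrypted with
    # the rest of the request.
    response = requests.get(FRONT_URL, headers={"Host": HIDDEN_HOST}, timeout=10)
    print(response.status_code)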

To block Snowflake, a network or country would have to block all of Google or every IP address outside of the network, essentially a complete internet shutdown. Of course, countries have repeatedly shown their willingness to do exactly that, but it’s a much higher price to pay than simply blocking Tor. 

The security concerns for the Snowflake proxy operator are minimal. The Snowflake client will not be able to interact with your computer in any way or observe your network traffic, and you will not be able to see their traffic. From the perspective of your ISP, it will look like you are connecting to a Tor bridge, which, if you are running a Snowflake proxy, should be legal and unrestricted in your country. There is no more risk in running a Snowflake proxy than in running the Tor Browser.

Snowflake means that everyone can help people exercise their freedom of expression anywhere in the world, and it takes no technical knowledge to run, so if you are in an unrestricted country (such as in North America or most of Europe) go run one now! And if you are in a restricted network consider using Snowflake to circumvent censorship and access the internet. 

More technically inclined readers are encouraged to read the Snowflake Technical Overview and the project page for further details. For other discussions about Snowflake, please visit the Tor Forum and follow the Snowflake tag.



Cooper Quintin

New Federal and State Court Rulings Show Courts are Divided on the Scope of Cell Phone Searches Post-Riley

2 months ago

This blog post was co-written with EFF Legal Intern Allie Schiele

There is no dispute that cell phones contain a lot of personal information. The Supreme Court recognized in 2014 in Riley v. California that a cell phone is “not just another technological convenience. With all they contain and all they may reveal, they hold for many Americans ‘the privacies of life’.” For this reason, the Court held that the police generally need a warrant to search one. But what happens when police do get a warrant? Can they look at everything on your phone?

Well, it depends.

Riley didn’t articulate any standards that limit the scope of cell phone searches, and courts are taking different approaches. While some courts have constrained police searches to certain types of data on the phone or to specific time periods, or have limited how the data may be used, other courts have authorized warrants that allow the police to search the entire phone.

In August, two courts issued significant decisions that illustrate this divide—United States v. Morton from the federal Fifth Circuit Court of Appeals sitting en banc (with the full court), and Richardson v. State from the Maryland Court of Appeals (Maryland’s highest court). EFF filed an amicus brief in Morton.

Maryland Sets Limits on Cell Phone Searches

In Richardson v. State, the Maryland Court of Appeals recognized that “the privacy concerns implicated by cell phone storage capacity and the pervasiveness of cell phones in daily life do not fade away when police obtain warrants to search cell phones.”

In this case, Richardson was involved in a fight at a local high school. After a school resource officer broke up the fight, the officer grabbed Richardson’s backpack and discovered three cellphones, a handgun, and Richardson’s school ID. Police determined one of the phones belonged to Richardson and got a warrant to search it.

The warrant was extremely broad and authorized a search for “[a]ll information, text messages, emails, phone calls (incoming and outgoing), pictures, videos, cellular site locations for phone calls, data and/or applications, geo-tagging metadata, contacts emails, voicemails, oral and/or written communication and any other data stored or maintained inside of [the phone].”

The search of the cell phone revealed messages between Richardson and a friend that detailed the planning of a robbery. After being charged, Richardson moved to suppress the information obtained from the phone, arguing the warrant was a general warrant because it authorized a search for “any and all information” and “any and all data.” The trial court denied the suppression motion, and the intermediate appellate court affirmed the denial.

The Court of Appeals reversed this, finding the warrant was impermissibly broad and therefore violated the Fourth Amendment. Because cell phones can contain vast amounts of data, the court held that officers rarely, if ever, can demonstrate probable cause to search everything on a phone, as they attempted in this case.

The court recognized there is no “one size fits all” solution for cell phone warrants, but held the officers requesting the warrant and the judge issuing it “must think about how to effectively limit the discretion of the searching officers so as not to intrude on the phone owner’s privacy interests any more than reasonably necessary.” Effective tools include temporal restrictions, limitations on the apps to be searched, or specific search protocols that agents would be directed to follow. The Court of Appeals concluded that “a search warrant for a cell phone must be specific enough so that the officers will only search for the items that are related to the probable cause that justifies the search in the first place.” Ultimately, the court did not suppress the evidence against Mr. Richardson because it found the officers relied on the warrant in good faith.

The Fifth Circuit Upholds an Overbroad Cell Phone Search

In United States v. Morton, the full Fifth Circuit declined to weigh in on a similarly broad search, finding officers relied in good faith on the warrant. In doing so, the court overturned a panel opinion from the same court in the same case that had rejected the warrant, finding officers were only entitled to search specific areas on the phone. EFF filed an amicus brief in the case when it was before the en banc court.

In this case, state troopers arrested defendant Brian Morton during a traffic stop after officers discovered evidence of drug possession. When searching Morton’s car post-arrest, officers seized three cell phones and applied for a warrant to search the phones. Although the evidence found on Mr. Morton at the time of arrest only supported a charge for simple drug possession, officers alleged drug trafficking, a much more serious crime, in their warrant application and sought nearly unlimited access to the data on Morton’s phone. The judge issued the warrant. While executing this search, officers looked through photos on the phone and found child pornography. This led to a second warrant to further search the phones.

Morton challenged the initial warrant, arguing that it was not supported by probable cause because there is no reason to believe officers would need to search a cell phone to find evidence of simple drug possession. The original Fifth Circuit panel partially agreed with Mr. Morton and held that the affidavits supporting the warrant successfully established probable cause to search the phone’s contacts, call logs, and texts—but not the pictures. Police did not establish that the pictures would contain evidence relevant to the drug crime officers were investigating. Further, the Fifth Circuit panel found that the good faith exception was not applicable, as the officers should have understood that “searching the digital images on Morton’s phone—allegedly for drug-trafficking-related evidence—was unsupported by probable cause.”

The Fifth Circuit’s en banc court rejected this conclusion. Instead of evaluating whether the police had probable cause to search the entirety of the phone’s contents, the en banc court evaluated the case under the “good faith exception,” which states that “evidence should not be suppressed when law enforcement obtained it in good-faith reliance on a warrant.” Although there is an exception to this rule for affidavits that are so “bare-bones” that the statements included are conclusory and lack probable cause, the Fifth Circuit found that this exception did not apply. Because the good faith rule applied in this case, the court did not rule on whether there was enough probable cause to support the search of the entire phone.

The dissenting judges disagreed, rebuking the majority for not analyzing the cell phone warrant for probable cause. As the judges argued, “[s]earching a cellphone is much more invasive than a self-contained search of a pocket, compartment, or bag.” Because the affidavit was supported by “sweeping generalizations”—and therefore was a bare bones affidavit—the good faith exception did not apply. The dissenting judges concluded that an officer can now “take refuge in the majority’s holding that he is protected by the good faith exception. This is unjust, unfair, and unconstitutional.”

Decisions like Morton are a setback to the privacy protections for cell phones recognized in Riley. Cell phones contain deeply personal information that should be afforded strong protections by the Fourth Amendment, as recognized in Richardson. As we argued in our Morton amicus brief: “the scope of cell phone searches must closely adhere to the probable cause showing, lest authority to search a device for evidence of one crime mutate into authority to search the entirety of the device for any crime.” Courts should not allow law enforcement to have limitless authority in executing search warrants on cell phones. Instead, courts should follow the approach of the Maryland Court of Appeals—and numerous other courts—and require cell phone warrants that are narrowly tailored to the crime under investigation.

Jennifer Lynch

California Leads on Reproductive and Trans Health Data Privacy

2 months ago

In the wake of the Supreme Court’s Dobbs decision, anti-choice sheriffs and bounty hunters will try to investigate and punish abortion seekers based on their internet browsing, private messaging, and phone app location data. We can expect similar tactics from state officials who claim that parents who allow their transgender youth to receive gender-affirming health care should be investigated for child abuse.

So it is great news that California Gov. Gavin Newsom just signed three bills that will help meet these threats: A.B. 1242, authored by Asm. Rebecca Bauer-Kahan; A.B. 2091, authored by Asm. Mia Bonta; and S.B. 107, authored by Sen. Scott Wiener. EFF supported all three bills.

This post summarizes the new California data privacy safeguards and provides a breakdown of the specific places where they change California state law. For those interested, we have included the citations to these changes. These three new laws limit how California courts, government agencies, health care providers, and businesses handle this data. Some provisions create new exemptions from existing disclosure mandates; others create new limits on disclosure.

EFF encourages other states to consider passing similar bills adapted to their own state civil and criminal laws.

New Reproductive and Trans Health Data Exemptions from Old Disclosure Mandates

Law enforcement agencies and private litigants often seek evidence located in other states. In response, many states have enacted various laws that require in-state entities to share data with out-of-state entities. Now that anti-choice states are criminalizing more and more abortions, pro-choice states should create abortion exceptions from these sharing mandates. Likewise, now that anti-trans states are claiming that gender-affirming care for trans youth is child abuse, pro-trans states should create trans health care exceptions from these sharing mandates. California’s new laws do this in three ways.

First, an existing California law provides that California-based providers of electronic communication and remote computing services, upon receipt of an out-of-state warrant, must treat it like an in-state warrant. A.B. 1242 creates an abortion exemption. A provider cannot produce records if it “knows or should know” that the investigation concerns a “prohibited violation.” (See Sec. 8, at Penal Code 1524.2(c)(1)) A “prohibited violation” is an abortion that would be legal in California but is illegal elsewhere. (See Sec. 2, at Penal Code 629.51(5)) Further, warrants must attest that the investigation does not involve a prohibited violation. (See Sec. 8, at Penal Code 1524.2(c)(2))

Second, an existing California law requires state courts to assist in enforcing out-of-state judicial orders. This is California’s version of the Uniform Law Commission’s (ULC’s) Interstate Depositions and Discovery Act. It requires California court clerks to issue subpoenas on request of litigants that have a subpoena from an out-of-state judge. California lawyers may issue subpoenas in such circumstances, too.

A.B. 2091 and S.B. 107 create new abortion and transgender health exemptions to this existing law.

Third, an existing California law requires health care providers to disclose certain kinds of medical information to certain kinds of entities. A.B. 2091 and S.B. 107 create new abortion and transgender health exemptions to this existing law:

  • Providers cannot release medical information about abortion to law enforcement, or in response to a subpoena, based on either an out-of-state law that interferes with California abortion rights, or a foreign penal civil action. (See A.B. 2091, Sec. 2, at Civil Code 56.108)
  • Providers also cannot release medical information about a person allowing a child to receive gender-affirming care, in response to an out-of-state criminal or civil action against such a person. (See S.B. 107, Sec. 1, at Civil Code 56.109; Sec. 10, at Penal Code 1326(c))

All of these new exemptions from old sharing mandates are important steps forward. But that’s not all these three new California bills do.

New Limits on California Judges

 To protect the privacy of people seeking reproductive health care, these new laws limit the power of California courts to authorize or compel the disclosure of reproductive health data.

First, A.B. 1242 prohibits California judges from authorizing certain forms of digital surveillance, if conducted for purposes of investigating abortions that are legal in California. These are:

  • Interception of wire or electronic communications. (See Sec. 3, at Penal Code 629.52(e)) Interception captures communications content, such as the words of an email.
  • A pen register or trap and trace device. (See Sec. 5, at Penal Code 638.52(m)) These devices capture communications metadata, such as who called whom and when.
  • A warrant for any item. (See Sec. 7, at Penal Code 1524(h)) This would include digital devices that contain evidence of an abortion, such as a calendar entry.

Second, A.B. 1242 prohibits California judges and court clerks from issuing subpoenas connected to out-of-state proceedings about an individual performing, supporting, aiding, or obtaining a lawful abortion in California. (See Sec. 11, at Penal Code 13778.2(c)(2))

Third, A.B. 2091 bars state and local courts from compelling a person to identify, or provide information about, a person who obtained an abortion, if the inquiry is based on either an out-of-state law that interferes with abortion rights, or a foreign penal civil action. This safeguard also applies in administrative, legislative, and other government proceedings. (See Sec. 6, at Health Code 123466(b))

New Limits on California Government Agencies

Government agencies can also be the source of information regarding reproductive and transgender health care. For example, police might be able to identify who traveled to a health care facility, and government facilities can identify who received what care. So the bills create two new limits on disclosure of health care data by California government agencies.

First, A.B. 1242 and S.B. 107 bar all state and local government agencies in California, and their employees, from providing information to any individual or out-of-state agency regarding a lawful abortion performed in California (A.B. 1242) or gender-affirming health care (S.B. 107).

Second, A.B. 2091 bars prison staff from disclosing medical information about an incarcerated person’s abortion, if the request is based on either an out-of-state law that interferes with California abortion rights, or a foreign penal civil action. (See Sec. 8, at Penal Code 3408(r))

New Limit on California Communication Services

Finally, A.B. 1242 provides a new safeguard to protect people from disclosure requests made to a type of company that holds their information. These are California corporations, and corporations with principal offices in California, that provide electronic communication services. They shall not, in California, provide “records, information, facilities, or assistance” in response to out-of-state legal process (such as a warrant or other court order) related to a prohibited violation. (See Sec. 9, at Penal Code 1546.5(a)) The California Attorney General may enforce this rule. (See Sec. 9, at Penal Code 1546.5(b)) However, covered corporations are not subject to any cause of action for providing such assistance in response to such legal process, unless the corporation “knew or should have known” that the legal process related to a prohibited violation. (See Sec. 9, at Penal Code 1546.5(c))

Next Steps

These three new California laws—A.B. 1242, A.B. 2091, and S.B. 107—are strong protections of reproductive and transgender health data privacy. Other pro-choice and pro-trans states should enact similar laws.

More work remains in California. After these important new laws go into effect, we can expect anti-choice sheriffs and bounty hunters to continue seeking abortion-related data located in the Golden State. So will out-of-state officials seeking to punish parents who allow their kids to get gender-affirming health care. California policymakers must be vigilant, and enact new laws as needed. For example, an existing California law, based on another ULC model, authorizes state courts to command a resident to travel out-of-state to testify in a criminal proceeding. This law may also need an exemption for abortion-related and trans-related information. California officials should also work with companies to identify efforts by anti-choice and anti-trans states to circumvent these new protections and use every tool at their disposal to respond.

Adam Schwartz

EFF to NJ court: Give defendants information regarding police use of facial recognition technology

2 months ago

We’ve all read the news stories: study after study shows that facial recognition algorithms are not always reliable, and that error rates spike significantly for the faces of people of color, especially Black women, as well as trans and nonbinary people. Yet this technology is widely used by law enforcement for identifying suspects in criminal investigations. By refusing to disclose the specifics of that process, law enforcement have effectively prevented criminal defendants from challenging the reliability of the technology that ultimately led to their arrest.

This week, EFF, along with EPIC and NACDL, filed an amicus brief in State of New Jersey v. Francisco Arteaga, urging a New Jersey appellate court to allow robust discovery regarding law enforcement’s use of facial recognition technology. In this case, a facial recognition search conducted by the NYPD for NJ police was used to determine that Francisco Arteaga was a “match” for the perpetrator in an armed robbery. Despite the centrality of the match to the case, nothing was disclosed to the defense about the algorithm that generated it, not even the name of the software used. Mr. Arteaga asked for detailed information about the search process, with an expert testifying to the necessity of that material, but the court denied those requests.

Comprehensive discovery regarding law enforcement’s facial recognition searches is crucial because, far from being an infallible tool, the process entails numerous steps, all of which have substantial risk of error. These steps include selecting the “probe” photo of the person police are seeking, editing the probe photo, choosing photo databases to which the edited probe photo is compared, the specifics of the algorithm that performs the search, and human review of the algorithm’s results.

Police analysts often select a probe photo from a video still or a cell phone camera, which are more likely to be low quality. The characteristics of the chosen image, including its resolution, clarity, face angle, lighting, etc., all impact the accuracy of the subsequent algorithmic search. Shockingly, analysts may also significantly edit the probe photo, using tools closely resembling those in Photoshop to remove facial expressions or insert eyes, combining face photographs of two different people even though only one is of the perpetrator, using the blur effect to add pixels to a low-quality image, or using the cloning tool or 3D modeling to add parts of a subject’s face not visible in the original photo. In one outrageous instance, when the original probe photo returned no potential matches from the algorithm, the analyst from the NYPD Facial Identification Section, who thought the subject looked like actor Woody Harrelson, ran another search using the celebrity’s photo instead. Needless to say, these changes significantly elevate the risk of misidentification.

The database of photos to which the probe photo is compared, which could include mugshots, DMV photos, or other sources, can also impact the accuracy of the results, depending on the population that makes up those databases. Mugshot databases will often include more photos of people in over-policed communities, and the resulting errors in the search are more likely to impact members of those groups.

The algorithms used by law enforcement are typically developed by private companies and are “black box” technology — it is impossible to know exactly how the algorithms reach their conclusions without looking at their source code. Each algorithm is developed by different designers, and trained using different datasets. The algorithms create “templates,” also known as “facial vectors,” of the probe photograph and the photographs in the database, but different algorithms will focus on different points of a face in creating those templates. Unsurprisingly, even when comparing the same probe photo to the same databases, different algorithms will produce different results.
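
As a generic illustration of what a “template” comparison involves (and deliberately not any vendor's algorithm), the sketch below reduces each photo to a numeric vector and ranks database entries by cosine similarity to the probe. The vectors are made-up toy values; a differently trained model would produce different vectors, and therefore a different candidate list, from the same photos.

    import numpy as np

    def cosine_similarity(a, b):
        # Higher values mean the two templates are more alike.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Toy "facial vectors": real systems derive these from a trained model.
    probe = np.array([0.21, 0.80, 0.43, 0.10])
    database = {
        "candidate_A": np.array([0.20, 0.79, 0.45, 0.12]),
        "candidate_B": np.array([0.90, 0.10, 0.33, 0.55]),
    }

    # Rank database photos by similarity to the probe photo's template.
    ranked = sorted(database.items(),
                    key=lambda item: cosine_similarity(probe, item[1]),
                    reverse=True)
    for name, template in ranked:
        print(name, round(cosine_similarity(probe, template), 3))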

Although human analysts will review the probe photo and candidate list generated by the algorithm for the match to be investigated, numerous studies have shown that humans are prone to misidentifying unfamiliar faces and are subject to the same biases present in facial recognition systems. Human review is also impacted by many other factors, including the analyst’s innate ability to analyze faces, motivation to find a match, fatigue from performing a repetitive task, time limitations, and cognitive and contextual biases.

Despite the grave risk of error, law enforcement remains reticent about its facial recognition systems. In filing this brief, EFF continues to advocate for transparency regarding law enforcement technology.

Hannah Zhao

Victory! Court Unseals Records Showing Patent Troll’s Shakedown Efforts

2 months ago

EFF has prevailed in a years-long effort to make public a series of court records that show how a notorious patent troll, Uniloc, uses litigation threats to extract payments from a variety of businesses.

Uniloc earlier this month complied with a federal district court’s unsealing order by making public redacted versions of several previously sealed documents. That ended more than three years’ worth of litigation, including two appeals, in which EFF sought public access to judicial records in a case between Uniloc and Apple that was shrouded in secrecy.

The case began in 2018 as an effort to make sense of heavily redacted filings in the patent infringement case between Uniloc and Apple. It resulted in greater transparency into how Uniloc coerces businesses into licensing its weak patents, and shields its activities from public scrutiny by claiming any information about its licenses amounts to a trade secret.

The great majority of Uniloc’s previously secret court records are now public. For instance, a list of Uniloc’s trolling victims, which Uniloc sought to keep entirely under seal, is now more than 80% unredacted. The list of the amounts those companies paid is now more than 70% unredacted. Several other key documents—like the contract between Uniloc and a private equity firm that allowed for this patent trolling expedition in the first place—are entirely public.

Although the court ruling did not require Uniloc to make all of its previously secret records public, we are pleased that it rejected Uniloc’s repeated attempts to get blanket secrecy for licensing information, compelled it to disclose information that should never have been sealed in the first place, and affirmed the public’s right to access records filed in federal court.

Unsealed licensing document shows how Uniloc financed its litigation

EFF’s chief purpose in intervening in this case was to understand Apple’s reasons for arguing that Uniloc’s patent lawsuit should be dismissed. Apple supported its argument with evidence showing Uniloc did not have the legal right to assert the patents it accused Apple of infringing. Although the district court ultimately agreed with Apple and dismissed Uniloc’s suit, the key evidence it relied on remained secret. That evidence included a table showing how much money Uniloc made by extracting license payments from companies. Why did Uniloc create that table? To convince a massive private equity firm, Fortress, to give it money to demand payments from other companies—and sue those who tried to resist.

The court’s most recent order provided the public with the first meaningful look at the licensing table by unsealing the identities of more than 70 companies that paid Uniloc a license as well as the amounts they paid. Although the court agreed to redact the names and payments of a small number of companies that submitted statements to the court, it granted the public access to most of the information in the table as well as the total amount of revenue—$105 million—that Uniloc made from these payments.

None of this information should have ever been sealed. Yet as the below images show, it took years of advocacy by EFF to go from a nearly unreadable document to one that sheds light on how Uniloc obtains license payments. For example, the table shows that sometimes Uniloc licenses patents for as little as $2,500, while Activision Blizzard, Inc. paid $3.5 million for a license. But most payments are for less than $300,000. According to the FTC, when patent owners settle for that little, it’s usually a sign of patent trolling, which occurs when the threat of expensive litigation is used to extract settlements rather than to vindicate a patent infringement claim.

Here's the patent-licensing table that Uniloc filed with the court before the unsealing order:

A sealed page from the licensing table

Here's the same table after the unsealing order: 

An unsealed page of the license table

Other unsealed records highlight the absurdity of Uniloc’s trade secrets claims

Throughout the transparency fight, Uniloc and Apple argued that any details about the companies that paid Uniloc must remain completely under seal to protect those companies’ trade secrets. The court’s most recent order largely rejected those claims. And newly unsealed written testimony shows that the desire for secrecy of some companies sued by Uniloc was rooted in practical concerns about being targeted by other patent trolls.

As one representative wrote in a declaration, disclosing the entity’s name and how much it paid Uniloc would make it more likely that other patent trolls seeking quick payments would target it in the future.

“We agreed to settle this case and enter this Agreement not because of its merits but because of the high cost of defense and the risk of a trial to our small company,” the representative wrote. “Further legal attacks of this sort are an existential threat to our business and we do not wish to become the target of other Non-practicing entities.”

A representative from another entity forced to pay Uniloc echoed those concerns, writing that “other non-practicing entities would be encouraged by knowledge of [the company’s] settlement with non-practicing entity Uniloc to seek nuisance licenses from [the company] in the future.”

And another company’s representative wrote that “even being identified as a party to the Uniloc Document may result in [the company] being a target of future patent litigation.” Similarly, a different company’s representative wrote that disclosing its identity would make “it a target in future litigation campaigns by non-practicing entities.”

The documents belie the trade secrecy claims advanced by Uniloc and Apple, raising legitimate questions about whether they accurately characterized these companies’ concerns in seeking to keep these records secret. As the above quotes show, their concerns were largely centered on protecting their companies, especially small companies, from further patent trolling. Now we know why Uniloc fought so hard to keep these statements out of public sight.

Court praises EFF for its work to vindicate public access to court records

EFF has long fought to bring greater transparency to patent litigation and has supported proposals to shed light on patent trolls. This transparency effort, however, took a number of twists, including Apple joining with Uniloc in avoiding transparency and a bad decision by the U.S. Court of Appeals for the Federal Circuit that appeared to give Uniloc an opportunity to maintain excessive secrecy.

So we were quite pleased when the district court stood up for the public’s right to access court records and required Uniloc to disclose a number of documents in redacted form (you can view them all here). And we were relieved when Uniloc complied with the court order requiring disclosure instead of challenging it yet again.

But we were also humbled by the court’s recognition of EFF’s years-long advocacy on behalf of the public’s right to understand what’s happening in federal courts.

“The Electronic Frontier Foundation has been of considerable assistance to the Court,” the judge wrote. “The real parties herein have jointly aligned themselves against the public interest and EFF has been of enormous help in keeping the system honest. This order recognizes that assistance and thanks EFF.”

EFF will continue to push back on secrecy claims in patent litigation and elsewhere to ensure that the public is able to access court records and understand how patent trolls misuse our legal system to threaten innovation.

Related Cases: Uniloc v. Apple
Aaron Mackey

Google Loses Appeal Against EU's Record Antitrust Fine, But Will Big Tech Ever Change?

2 months 1 week ago

The EU continues to crack down on big tech companies with its full arsenal of antitrust rules. This month, Google lost its appeal against a record fine, now slightly trimmed to €4.13 billion, for abusing its dominant position through the tactics it used to keep traffic on Android devices flowing through to the Google search engine. The EU General Court largely upheld the EU Commission’s decision from 2018 that Google had imposed unlawful restrictions on manufacturers of Android mobile devices and mobile network operators in order to cement the dominance of its search engine.

Google's defeat comes as no surprise, as the vast majority of consumers in the EU use Google Search and have the Android operating system installed on their phones. The Court found that Google abused its dominant position by, for example, requiring mobile device manufacturers to pre-install Google Search and the Google Chrome browser in order to use Google’s app store. As a result, users got steered away from competing browsers and search engines, Google's search advertising revenue continued to flow unchallenged, and those revenues funded other anticompetitive and privacy-violating practices.

A High Price For Anti-Competitive Behavior: The EU's Digital Markets Act

The General Court ruling, which Google can still appeal to the EU Court of Justice, reiterates a message that is increasingly being voiced in political circles in Brussels: Anti-competitive behavior must come at a high price. The goal is to bring about a change in behavior among large technology companies that control key services such as search engines, social networks, operating systems, and online intermediary services. The recent adoption of the EU’s Digital Markets Act (DMA) is a prime example of this logic: it tackles anticompetitive practices of the tech sector and proposes sweeping pro-competition regulations with serious penalties for noncompliance. Under the DMA, the so-called “gatekeepers”, the largest platforms that control access to digital markets for other businesses, must comply with a list of do’s and don'ts, all designed to remove barriers companies face in competing with the tech giants. 

The DMA reflects the EU Commission’s experience with enforcing antitrust rules in the digital market. Some of the new requirements forbid app stores from conditioning access on the use of the platform’s own payment systems and ban forced single sign-ons. Other rules make it easier for users to freely choose their browser or search engine. The ruling by the General Court in the Google Android case will make it easier for the EU Commission to decide which gatekeepers and services will fall under the new rules and to hold them accountable.

Will Big Tech Change? Better Tools and Investment Needed

Whether the DMA and confident enforcement actions will actually lead to more healthy competition on the internet remains to be seen. The practices targeted in this lawsuit and in the DMA are some of the most important ways that dominant tech firms raise structural barriers to potential competitors, but other barriers exist as well, including access to capital and programming talent. The success of the EU’s efforts will depend on whether enforcers have the tools to change company practices enough, and in a visible enough way, to encourage investment in new competitors.

Christoph Schmon

Automated License Plate Readers Threaten Abortion Access. Here's How Policymakers Can Mitigate the Risk

2 months 1 week ago

Over the last decade, a vast number of law enforcement agencies around the country have adopted a mass surveillance technology that uses cameras to track the vehicles of every driver on the road, with little thought or respect given to the ways this technology might be abused. Now, in the wake of the U.S. Supreme Court's Dobbs ruling, that technology may soon be turned against people seeking abortions, the people who support them, and the workers who provide reproductive healthcare.

We're talking about automated license plate readers (ALPRs). These are camera systems that capture license plate numbers and upload the times, dates, and locations where the plates were seen to massive searchable databases. Sometimes these scans may also capture photos of the driver or passengers in a vehicle.

Sometimes these cameras are affixed to stationary locations. For example, if placed on the only roads in and out of a small town, a police department can monitor whenever someone enters or leaves the city limits. A law enforcement agency could install them at every intersection on major streets to track a person in real time whenever they pass a camera. 

Police can also attach ALPRs to their patrol cars, then capture all the cars they pass. In some cities police are taught to do "gridding," where they drive up and down every block of a neighborhood to capture data on what cars are parked where. There is also a private company called Digital Recognition Network that has its own contractors driving around, collecting plate data, and they sell that data to law enforcement.

For years, EFF and other organizations have tried to warn government officials that it was only a matter of time before this technology would be weaponized to target abortion seekers and providers. Unfortunately, few would listen, because it seemed unthinkable that Roe v. Wade could be overturned. That was clearly a mistake. Now cities and states that believe abortion access is a fundamental right must move swiftly and decisively to end or limit their ALPR programs.

How ALPR Data Might Be Used to Enforce Abortion Bans

ALPR technology has long been valued by law enforcement because of the lax restrictions on the data. 

Few states have enacted regulations and, consequently, law enforcement agencies collect as much data as possible on everyone, regardless of any connection to a crime, and store it for excessively long periods of time (a year or two is common). Law enforcement agencies typically do not require officers to get a warrant, demonstrate probable cause or reasonable suspicion, or show much proof at all of a law enforcement interest before searching ALPR data. Meanwhile, as EFF has shown through hundreds of public records requests, it is the norm that agencies will share ALPR data they collect broadly with other agencies nationwide, without requiring any justification that the other agencies need unfettered access. Police have long argued that you don't have an expectation of privacy when driving on public streets, conveniently dodging how this data could be used to reveal private information about you, such as when you visit a reproductive health clinic.

That means there's very little to stop a determined police investigator from using either their own ALPR systems to enforce abortion bans or accessing the ALPR databases of another jurisdiction to do so. If a state or city wants to protect the right to seek an abortion, they must ensure that places that have criminalized abortion cannot access their data.

Here are a few examples of how this might play out:

Location Searches: Many ALPR software products, such as Motorola Solutions' Vigilant PlateSearch, offer a "Stakeout" feature, which an investigator can use to search for vehicles seen or regularly seen around a specific location. It would be relatively easy for an investigator to query the address of an abortion clinic to reveal the vehicles of patients, doctors, and others who visit a facility. Once obtained, those license plates could be used to reveal the person's identity through a DMV database. Or the license plates could be entered back into the system to reveal the travel patterns of those vehicles, including where they park at night or whether they crossed state lines. Remember, with so many agencies sharing data across state lines, an investigator in a pro-ban jurisdiction can easily query the data from an agency in a jurisdiction that supports abortion access.

Hot Lists: Most ALPR products used by law enforcement allow officers to create a "hot list," essentially a list of license plates that are under suspicion. Whenever a hot-listed plate is spotted by an ALPR, officers are alerted in real-time of its location. These hot lists are frequently shared across jurisdictions, so that police in one jurisdiction can intercept cars that have been flagged by another jurisdiction.
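
A simplified sketch of that matching step, with made-up plates and locations, might look like the following; the point is only that every scan is checked against a shared watchlist and a hit produces an immediate alert.

    # Plates flagged by any agency that shares its hot list.
    HOT_LIST = {"ABC1234", "XYZ9876"}

    def process_scan(plate, location, timestamp):
        # Every ALPR read is compared against the hot list in real time.
        if plate in HOT_LIST:
            print(f"ALERT: {plate} seen at {location} at {timestamp}")

    process_scan("ABC1234", "Main St & 5th Ave", "2022-10-01 14:32")
    process_scan("DEF5555", "Main St & 5th Ave", "2022-10-01 14:33")  # no alert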

If a state were to create a registry of pregnant people, it could build a hot list of their license plates to track their movements. If a state has criminalized providing, assisting, or giving material support for out-of-state abortions, investigators could create a hot list of "abettor" vehicles. For example, they could scrape public medical licensing databases, retrieve information from an anti-abortion activism website that publishes dossiers on medical professionals, or infiltrate a private Facebook group to obtain the identities of members providing resources to abortion seekers. Then they could query DMV databases to obtain the license plates of those individuals. With a hot list of those plates, ban-enforcement investigators would get an alert when a target has crossed into their state and can be intercepted for arrest.

While that might seem a bit far-fetched, we would remind policymakers that overturning Roe also once seemed highly unlikely. These are threats we need to address before they become an everyday reality.

What Policy Makers Can Do About ALPR

Through EFF's Atlas of Surveillance project, we have identified nearly 1,000 law enforcement agencies using ALPRs, but we believe this to be a significant undercount. In California, which has taken a hardline stance in favor of abortion access, at least 260 agencies are using ALPRs.

Policymakers in states that support abortion access may be looking for easy solutions. The good news is there is one super easy and instant way to protect data: don't use ALPRs at all. A prosecutor bent on prosecuting abortions can't access your data if you don't collect it.

Unfortunately, few lawmakers have found the courage to take such a solid, strong stance for the privacy rights of their constituents when it comes to ALPRs. And so, we have compiled a few other mitigation methods that lawmakers and agencies can consider.

1. Forbid ALPR Data for Ban Enforcement. Government agencies should explicitly prohibit the use of their ALPR data for abortion ban enforcement, as the city of Nashville recently did. An agency that seeks to protect abortion access could even go so far as to declare using data for ban enforcement as a form of official "misuse," subject to penalties. Another approach is to limit ALPR use to only certain, very specific serious felonies. 

California state law also requires agencies to only use ALPR data in ways that are consistent with privacy and civil liberties. Since abortion access has long been a privacy right in California, agencies should already be doing this.

2. Limit Sharing with External Agencies. Governments should prohibit sharing with external agencies, especially agencies in other states, in order to protect abortion seekers crossing state lines and to protect providers in their state from being investigated by other states. EFF research has found that agencies will frequently give hundreds of other agencies across the country open access to their ALPR databases. Pro-choice municipalities in states with bans should also ensure their data is not being shared with neighboring law enforcement agencies.

An agency that wants to access ALPR data should be required to sign a binding agreement that it will not use data for abortion ban enforcement. Violations of this agreement should result in an agency's access being permanently revoked.

In California, it is illegal for agencies to share ALPR data out of state; nevertheless, many agencies are careless and do not vet the agencies they share with. EFF and the ACLU of Northern California successfully sued the Marin County Sheriff's Office on behalf of community activists over this very issue in a case settled earlier this year. 

On a similar note, law enforcement agencies should not accept hot lists from any agency that has not agreed—in writing—to prohibit the use of ALPR data for abortion ban enforcement. Otherwise, a law enforcement agency in a pro-choice jurisdiction risks alerting an anti-choice jurisdiction of the whereabouts of abortion seekers or reproductive health providers.

3. Reduce the Retention Period. Governments should reduce the retention period dramatically. Many agencies hold data for a year, two years, or even five years. There's really no reason for this. Agencies should consider taking New Hampshire's lead and reducing the retention period to three minutes, except for vehicles already connected to a non-abortion-related crime.

4. No ALPRs Near Reproductive Health Facilities. Law enforcement agencies should not install ALPRs near reproductive health facilities. Agencies should either prohibit their officers from using patrol-vehicle mounted ALPRs to canvass areas around reproductive health facilities, or require them to turn ALPRs off when approaching an area with such a facility.

5. Mitigate the Risk of Third Party Hosting. Agencies should be aware of the risks when they store ALPR data with a cloud service provider. Investigators enforcing an abortion ban may go straight to the cloud service provider with legal process to access ALPR data when they think a pro-choice agency won't voluntarily provide it. Addressing this is complicated and will depend on the resources available to the law enforcement agency. At a minimum, an agency should implement sufficient encryption practices that allow only the intended user to access ALPR data and prevent third parties, such as vendor employees and other unauthorized parties, from accessing the data. One avenue to explore is locally hosting the ALPR data on servers controlled by the agency, or by a collaborative network of like-minded local agencies. However, agencies should be careful to ensure they are capable of implementing cybersecurity best practices and standards, including encryption and employing staff who are qualified to protect against ever-evolving security threats. Another option is to seek a cloud provider that offers end-to-end encryption, so that the company's employees can't access the encrypted data (a brief sketch of agency-side encryption follows this list). This may result in a necessary tradeoff of some software features to protect targeted or vulnerable populations, such as abortion seekers.

6. Extra Scrutiny for External Requests for Assistance. Even if a law enforcement agency cuts off other agencies' direct access to ALPR data, they may still receive requests for assistance in investigations. Officials must scrutinize these requests closely, since the language used in the request may intentionally obfuscate the connection to an abortion ban. For example, what may be described as a kidnapping or attempted murder may actually be an attempt at abortion ban enforcement from a state with a fetal personhood law. Agencies can try to address this by requiring the requestor to attest that the investigation does not concern abortion.

7. Training. Agencies should ensure that reproductive rights are explicitly covered in all ALPR training (and, for that matter, all training regarding surveillance data). Agencies should not allow ALPR vendors to provide the training courses, since many of these companies sell their products (and the promise of interagency data sharing) to law enforcement agencies in abortion-ban jurisdictions.

8. Robust Audits. Agencies should already be conducting strong and thorough audits of ALPR systems, including data searches. These audits should include examining all searches for potential impacts on access to reproductive healthcare. No user should be able to access the system without documenting the reason and, when applicable, the case or incident number, for each search of an ALPR system or hot list addition.
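
On point 5 above, here is a minimal sketch, using the Python cryptography library's Fernet recipe, of what agency-side encryption before upload could look like. It assumes the agency, not the vendor, holds the key, and it illustrates the idea rather than a complete end-to-end encryption design; the record fields are hypothetical.

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()      # kept by the agency, never shared with the vendor
    f = Fernet(key)

    # A hypothetical ALPR record, serialized before upload.
    record = b'{"plate": "ABC1234", "time": "2022-10-01T14:32:00", "camera": "cam-12"}'
    ciphertext = f.encrypt(record)   # this is all the cloud provider ever stores

    # Only a holder of the agency's key can recover the record.
    print(f.decrypt(ciphertext))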

Protecting ALPR-Adjacent Data 

In order for ALPR data to be useful, law enforcement agencies often must also access vehicle registration data or criminal justice information systems. 

Pro-choice government officials, particularly at state-level law enforcement agencies and DMVs, must take a hard look at databases that contain information on drivers and vehicles and how that data is shared out of state, and prohibit other states from accessing that data for abortion ban enforcement. If law enforcement in another state refuses to agree to such a restriction, they should no longer have direct access to the system. 

California has already done this in another context. Following the passage of the California Values Act, the California Attorney General defined accessing the statewide law enforcement database for immigration enforcement as misuse. This resulted in revocation of access from a subset of U.S. Immigration and Customs Enforcement that refused to sign an agreement accepting this restriction.

The Problem of Commercial ALPRs

Even if a law enforcement agency takes all these precautions, or shuts down its ALPR program, investigators in abortion ban states still have another avenue to obtain ALPR data: private databases.

For example, Digital Recognition Network (DRN Data), a subsidiary of Motorola Solutions, contracts with private drivers (often repossession companies) to collect ALPR data en masse in major cities around the country. If an officer in an abortion ban state wants to look at ALPR data in a state that guarantees abortion access, but can't connect to the official law enforcement databases, they can go to this commercial database to obtain information going back years.

What's worse is that private actors can also access this database. DRN sells access to ALPR data to private investigators, who only need to check a box saying that they're querying the data for litigation development. With the passage of SB 8 in Texas, private actors now have the ability to sue to enforce the state's abortion ban. Unfortunately, anti-abortion activists have for years been compiling their own databases of license plates of abortion providers; now they can use those lists to query private ALPR databases to surveil abortion seekers and reproductive healthcare providers.

This is a difficult problem to solve, since private ALPR operators have often made First Amendment arguments, asserting a right to photograph license plates and sell that information to subscribers. However, many law enforcement agencies—including major federal agencies—also subscribe to this data. A government agency that purports to support abortion access should consider ending its subscription, since it amounts to subsidizing a surveillance network that will one day, if not already, be used to persecute abortion seekers.

Preventing Predictable Threats

Lawmakers who support reproductive rights need to recognize that abortion access and mass surveillance are incompatible. Years of permitting unrestrained access to privacy-invasive technologies that allow police to collect sensitive data on everyone are the proverbial chickens coming home to roost.

Lawmakers in states like California first saw this happen with surveillance technology turned on immigrant communities. To their credit, they rushed to patch the systems, but they failed to look at the horizon to see what was coming next, such as the persecution of abortion seekers or families of youth seeking gender-affirming healthcare. 

Now these leaders must start undoing the dangerous surveillance systems they've facilitated. They must reject the collect-it-all claims from the law enforcement community that project public safety miracles without surfacing the potential harms. They must start writing future-looking policies for surveillance that anticipate and address the worst case scenarios.

While our guidance above specifically addresses abortion access, we acknowledge a major weakness. The strongest reforms are not piecemeal protections for whichever vulnerable group is under attack at the moment, but a complete overhaul that protects us all. 

Dave Maass

EFF Urges FTC to Address Security and Privacy Problems in Daycare and Early Education Apps

2 months 1 week ago
An EFF study found the apps compromise young children’s data, and current laws don’t address the problem.

SAN FRANCISCO—The Federal Trade Commission must review the lack of privacy and security protections among daycare and early education apps, the Electronic Frontier Foundation (EFF) urged Wednesday in a letter to Chair Lina Khan.

Daycare and preschool applications frequently include notifications of feedings, diaper changes, pictures, activities, and which guardian picked up or dropped off the child—potentially useful features for overcoming separation anxiety of newly enrolled children and their anxious parents.

But EFF Director of Engineering Alexis Hancock’s recent investigation found early education and daycare apps have several troubling security risks. Some allow public access to children’s photos via insecure cloud storage; many have dangerously weak password policies; at least one (Tadpoles for Parents) sends “event” data, including when the app is activated and deactivated, to Facebook; and several enable cleartext traffic that can be exploited by network eavesdroppers.
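
To illustrate the cleartext-traffic point, here is a minimal Python sketch that checks a decoded app's manifest for the android:usesCleartextTraffic flag; it assumes the APK has already been unpacked with a tool such as apktool, and the path is illustrative. Note that apps targeting Android versions before 9 allow cleartext traffic by default even when the attribute is absent.

```python
import xml.etree.ElementTree as ET

ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

def allows_cleartext(manifest_path="decoded_app/AndroidManifest.xml"):
    """Check whether a decoded app's manifest explicitly opts in or out of cleartext HTTP."""
    application = ET.parse(manifest_path).getroot().find("application")
    if application is None:
        return None
    value = application.get(ANDROID_NS + "usesCleartextTraffic")
    # If the attribute is absent, the platform default applies
    # (cleartext allowed before Android 9, blocked on 9 and later).
    return None if value is None else value.lower() == "true"

if __name__ == "__main__":
    print("usesCleartextTraffic:", allows_cleartext())
```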

“Parents find themselves in a bind: either enroll children at a daycare and be forced to share sensitive information with these apps, or don’t enroll them at all,” EFF’s letter to Khan said. “Paths for parents to opt a child out of data sharing are, with rare exception, completely absent.”

“Since parents do not have the tools or proper information to currently assess the privacy and security of their children’s data in daycare and early education apps, the Federal Trade Commission should review the current gaps in the law and assess potential paths to strengthen protections for young children’s data, or investigate other means to improve protections for children’s data in this context,” the letter concludes.

Of the 42 daycare apps that privacy experts researched, 13 companies did not specify in their privacy policies what data they collect. Among the policies that do describe data collection, most admitted to sharing sensitive information (such as the average number of diaper changes per day) with third parties. Only 10 of the 42 apps stated in their privacy policies that they did not share data with third parties – but seven of those 10 actually were doing so anyway.

Current laws don’t address the problem. The Children’s Online Privacy Protection Act only applies to operators of online services “directed to” children under 13; early education and daycare apps, however, are used solely by adults like teachers. The Family Educational Rights and Privacy Act also falls short: It restricts schools from disclosing students’ “education records” to certain third parties without parental consent, but does not regulate the actions of third parties who may receive that data, such as daycare apps.

For EFF’s letter to Federal Trade Commission Chair Lina Khan: https://eff.org/document/eff-letter-ftc-daycare-apps-9-28-2022

For more on daycare apps’ privacy and security problems: https://www.eff.org/deeplinks/2022/06/daycare-apps-are-dangerously-insecure

Contact: Alexis Hancock, Director of Engineering, Certbot, alexis@eff.org; William Budington, Senior Staff Technologist, bill@eff.org
Josh Richman

Google’s Perilous Plan for a Cloud Center in Saudi Arabia is an Irresponsible Threat to Human Rights

2 months 1 week ago

On August 9, a Saudi woman was sentenced to 34 years in prison by the Kingdom of Saudi Arabia’s notorious Specialized Criminal Court in Riyadh. Her crime? Having a Twitter account and following and retweeting dissidents and activists.

That same day, a federal jury in San Francisco convicted a former Twitter employee of money laundering and other charges for spying—on behalf of the kingdom—on Twitter users critical of the Saudi government.

These are just the latest examples of Saudi Arabia’s dismal track record of digital espionage, including infiltration of social media platforms, cyber surveillance, repression of public dissent, and censorship of those criticizing the government. Yet, against this backdrop of rampant repression and abusive surveillance, Google is moving ahead with plans to set up, in partnership with the state-owned company Saudi Aramco, a massive data center in Saudi Arabia for its cloud computing platform serving business customers.

These cloud data centers, which already exist in Jakarta, Tel Aviv, Berlin, Santiago, Chile, London, Los Angeles, and dozens of other cities around the world, are utilized by companies to run all aspects of their businesses. They store data, run databases, and provide IT for corporate human resources, customer service, legal, security, and communications departments.

As such, they can house reams of personal information on employees and customers, including personnel files, emails, confidential documents, and more. The Saudi-region cloud center is being developed “with a particular focus on businesses in the Kingdom,” Google said.

With Saudi Arabia’s poor human rights record, it’s difficult to see how or even if Google can ensure the privacy and security of people whose data will reside in this cloud. Saudi Arabia has proven time and again that it exploits access to private data to target activists, dissidents, and journalists, and will go to great lengths to illegally obtain information from US technology companies to identify, locate, and punish Saudi citizens who criticize government policies and the royal family.

Saudi agents infiltrated Twitter in 2014 and used their employee credentials to access information about individuals behind certain Twitter accounts critical of the government, including the account owners’ email addresses, phone numbers, IP addresses and dates of birth, according to the U.S. Department of Justice. The information is believed to have been used to identify a Saudi aid worker who was sentenced to 20 years in prison for allegedly using a satirical Twitter account to mock the government.

Meanwhile, a Citizen Lab investigation concluded with “high confidence” that in 2018, the mobile phone of a prominent Saudi activist based in Canada was infected with spyware that allows full access to chats, emails, photos, and the device’s microphone and camera. And just last week, the wife of slain Saudi journalist Jamal Khashoggi announced that she is suing the NSO Group over alleged surveillance of her through Pegasus spyware. These are just a few examples of the Saudi government’s digital war on free expression.

Human rights and digital privacy rights advocates, including EFF, have called on Google to stop work on the data center until it has conducted a due diligence review about the human rights risks posed by the project, and outlined the type of government requests for data that are inconsistent with human rights norms and should be rejected by the company. Thirty-nine human rights and digital rights groups and individuals outlined four specific steps Google should take to work with rights groups in the region in evaluating the risks its plan imposes on potentially affected groups and develop standards for where it should host cloud services.

Google has said that an independent human rights assessment was conducted for the Saudi cloud center and steps were taken to address concerns, but it has not disclosed the assessment or any details about mitigation, such as what steps it is taking to ensure that Saudi agents can’t infiltrate the center the way they did Twitter, how personal data is being safeguarded against improper access, and whether it will stand up against government requests for user data that are legal under Saudi law but don’t comply with international human rights standards.

“The Saudi government has demonstrated time and again a flagrant disregard for human rights, both through its own direct actions against human rights defenders and its spying on corporate digital platforms to do the same,” the rights groups’ statement says. “We fear that in partnering with the Saudi government, Google will become complicit in future human rights violations affecting people in Saudi Arabia and the Middle East region.”

This isn’t the first time Google’s plans to do business with and profit from authoritarian governments have sparked outrage. In 2018, The Intercept revealed that Google was planning to release a censored version of its search engine service inside China. “Project Dragonfly” was a secretive plan to create a censored, trackable search tool for the Chinese government, raising a real risk that Google would directly assist the Chinese government in arresting or imprisoning people simply for expressing their views online.

Google eventually backed down, telling Congress that it had terminated Project Dragonfly. Unfortunately, we have seen no signs that Google is reevaluating its plans for the Saudi cloud center, despite overwhelming evidence that placing such a trove of sensitive personal data in a country that has no compunction about obtaining, by any means, the information it needs to identify and punish its critics will almost certainly endanger not only activists but everyday people who merely express their opinions.

Indeed, in June company leadership at Alphabet, Google’s parent company, urged shareholders to reject a resolution that would require the company to publish a human rights impact assessment and a mitigation plan for data centers located in areas with significant human rights risks, including Saudi Arabia. It even asked the Securities and Exchange Commission to exclude the resolution from its 2022 proxy statement on the grounds that, among other things, it had already implemented the resolution’s essential elements.

But this was hardly the case. Specifically, Google has said it is committed to standards in the United Nations Guiding Principles on Business and Human Rights (UNGP) and the Global Network Initiative (GNI) when expanding into new locations. Those standards require “formal reporting” when severe human rights impacts exist as a result of business operations or operating contexts, transparency with the public, and independent assessment and evaluation of how human rights protections are being met.

Google has done the opposite—it’s claimed to have conducted a human rights assessment for the cloud center in Saudi Arabia and addressed “matters identified” in that review, but has issued no details and no public report.

The shareholder resolution was defeated at Alphabet’s annual meeting. The good news is that a majority (57.6%) of independent shareholders voted in favor of it, aligning themselves with rights groups that want Google to do the right thing: acknowledge the risks this cloud center poses to human rights in the region and disclose exactly how it plans to protect people from a government hell-bent on punishing dissent.

If Google can’t live up to its human rights commitments and its claims to have “addressed matters” that literally endanger people’s lives and liberty—and we question whether it can—then it should back off of this perilous plan. EFF and a host of groups around the world and in the region will be watching. 

Karen Gullo

Ban Government Use of Face Recognition In the UK

2 months 1 week ago

In 2015, Leicestershire Police scanned the faces of 90,000 individuals at a music festival in the UK and checked these images against a database of people suspected of crimes across Europe. This was the first known deployment of Live Facial Recognition (LFR) at an outdoor public event in the UK. In the years since, the surveillance technology has been frequently used throughout the country with little government oversight and no electoral mandate. 

Face recognition presents an inherent threat to individual privacy, free expression, information security, and social justice. It has an egregious history of misidentifying people of color, leading, for example, to wrongful arrests, as well as failing to correctly identify trans and nonbinary people. Of course, even if overnight the technology somehow achieved 100% accuracy, it would still be an unacceptable tool of invasive surveillance capable of identifying and tracking people on a massive scale.

EFF has spent the last few years advocating for a ban on government use of face recognition in the U.S., and we’ve watched and helped as many municipalities, including in our own backyard, have enacted such bans. Now we’ve seen enough of the technology’s use in the UK as well.

That’s why we are calling for a ban on government use of face recognition in the UK. We are not alone. London-based civil liberties group Big Brother Watch has been driving the fight to end government-use of face recognition across the country. Human rights organization Liberty brought the first judicial challenge against police use of live facial recognition, on the grounds that it breached the Human Rights Act 1998. The government’s own privacy regulator raised concerns about the technical bias of LFR technology, the use of watchlist images with uncertain provenance, and ways that the deployment of LFR evades compliance with data protection principles. And the first independent report commissioned by Scotland Yard challenged police use of LFR as lacking an explicit basis and found the technology 81% inaccurate. The independent Ryder Review also recommended the suspension of LFR in public places until further regulations are introduced.

What Is the UK’s Current Policy on Face Recognition? 

Make no mistake: Police forces across the UK, like police in the US, are using live face recognition. That means full-on Minority Report-style real-time attempts to match people’s faces as they walk on the street to databases of photographs, including suspect photos. 

Five police forces have used the technology in England and Wales, but the silent rollout has been driven primarily by London’s Metropolitan Police (better known as the Met) and South Wales Police, which oversees the over-1-million-person metro area of Cardiff. The technology is often supplied by Japanese tech company NEC Corporation. It scans every face that walks past a camera and checks it against a watchlist of people suspected of crimes or who are court-involved. Successful matches have resulted in immediate arrests. Six police forces in the UK also use Retrospective Facial Recognition (RFR), which compares images obtained by a camera to a police database, but not in real time. Police Scotland has reported its intention to introduce LFR by 2026. By contrast, the Police Service of Northern Ireland apparently has not obtained or implemented face recognition to date.

Unfortunately, the expanding roll-out of this dangerous technology has evaded legislative scrutiny in Parliament. Police forces are unilaterally deciding whether to adopt LFR and, if so, what safeguards to implement. And earlier this year the UK Government rejected a House of Lords report calling for the introduction of regulations and mandatory training to counter the negative impact that the current deployment of surveillance technologies has on human rights and the rule of law. The evidence that the rules around face recognition need to change is there–many are just unwilling to see it or do anything about it.

Police use of facial recognition was subject to legal review in an August 2020 court case brought by a private citizen against South Wales Police. The Court of Appeal held that the force’s use of LFR was unlawful insofar it breached privacy rights, data protection laws, and equality legislation. In particular, the court found that the police had too much discretion in determining the location of video cameras and the composition of watchlists. 

In light of the ruling, the College of Policing published new guidance: images placed on databases should meet proportionality and necessity criteria, and police should only use LFR when other “less intrusive” methods are unsuitable. Likewise, the then-UK Information Commissioner, Elizabeth Denham, issued a formal opinion warning against law enforcement using LFR for reasons of efficiency and cost reduction alone. Guidance has also been issued on police use of surveillance cameras, most notably the December 2020 Surveillance Camera Commissioner’s guidance for LFR and the January 2022 Surveillance Camera Code of Practice for technology systems connected to surveillance cameras. But these do not provide coherent protections for the individual right to privacy.

London’s Met Police 

Across London, the Met Police uses LFR by bringing a van with mounted cameras to a public place, scanning faces of people walking past, and instantly matching those faces against the Police National Database (PND). 

Images on the PND are predominantly sourced from people who have been arrested, including many individuals who were never charged or were cleared of committing a crime. In 2019, the PND reportedly held around 20 million facial images. According to one report, 67 people requested that their images be removed from police databases; only 34 requests were accepted, 14 were declined, and the remainder were pending. Yet the High Court informed the police in 2012 that the biometric details of innocent people were unlawfully held on the database.

This means that once a person is arrested, even if they are cleared, they remain a “digital suspect” having their face searched again and again by LFR. This violation of privacy rights is exacerbated by data sharing between police forces. For example, a 2019 police report detailed how the Met and British Transport Police shared images of seven people with the King’s Cross Estate for a secret use of face recognition between 2016 and 2018.

Between 2016 and 2019, the Met deployed LFR 12 times across London. The first came at Notting Hill Carnival in 2016–the UK’s biggest African-Caribbean celebration–where the system produced one false positive. Similarly, at Notting Hill Carnival in 2017, two people were falsely matched and another individual was correctly matched but was no longer wanted. Big Brother Watch reported that at the 2017 Carnival, LFR cameras were mounted on a van behind an iron sheet, making the deployment semi-covert. Face recognition software has been proven to misidentify ethnic minorities, young people, and women at higher rates. And reports of deployments in spaces like Notting Hill Carnival–where the majority of attendees are Black–exacerbate concerns about the inherent bias of face recognition technologies and the ways that government use amplifies police powers and aggravates racial disparities.

 After suspending deployments during the COVID-19 pandemic, the force has since resumed its use of LFR across central London. On 28 January 2022–one day after the UK Government relaxed mask wearing requirements–the Met deployed LFR with a watchlist of 9,756 people. Four people were arrested, including one who was misidentified and another who was flagged on outdated information. Similarly, a 14 July 2022 deployment outside Oxford Street tube station reportedly scanned around 15,600 people’s data and resulted in four “true alerts” and three arrests. The Met has previously admitted to deploying LFR in busy areas to scan as many people as possible, despite face recognition data being prone to error. This can implicate people for crimes they haven’t committed. 

The Met also recently purchased significant amounts of face recognition technology for Retrospective Facial Recognition (RFR) to use alongside its existing LFR system. In August 2021, the Mayor of London’s office approved a proposal permitting the Met to expand its RFR technology as part of a four-year deal with NEC Corporation worth £3,084,000. And whilst LFR is not currently deployed through CCTV cameras, RFR compares images from national custody databases with already-captured images from CCTV cameras, mobile phones, and social media. The Met’s expansion into RFR will enable the force to tap into London’s extensive CCTV network to obtain facial images–with almost one million CCTV cameras in the capital. According to one 2020 report, London is the third most-surveilled city in the world, with over 620,000 cameras. Another report claims that between 2011 and 2022, the number of CCTV cameras more than doubled across the London Boroughs. 

While David Tucker, head of crime at the College of Policing, said RFR will be used “overtly,” he acknowledged that the public will not receive advance notice if an undefined “critical threat” is declared. Cameras are getting more powerful and technology is rapidly improving. And in sourcing images from more than one million cameras, face recognition data is easy for law enforcement to collect and hard for members of the public to avoid. 

South Wales Police

South Wales Police were the first force to deploy LFR in the UK. They have reportedly used the surveillance technology more frequently than the Met, with a June 2020 report revealing more than 70 deployments. Two of these led to the August 2020 court case discussed above. In response to the Court of Appeal’s ruling, South Wales Police published a briefing note claiming that it also used RFR to process 8,501 images between 2017 and 2019 and identified 1,921 individuals suspected of committing a crime in the process. 

South Wales Police have primarily deployed their two flagship facial recognition projects, LOCATE and IDENTIFY, at peaceful protests and sporting events. LOCATE was first deployed in June 2017 during UEFA Champions League Final week and led to the first arrest using LFR, alongside 2,297 false positives from 2,470 ‘potential matches’ (roughly 93 percent of all alerts). IDENTIFY, launched in August 2017, utilizes the Custody Images Database and allows officers to retrospectively search CCTV stills or other media to identify suspects.

South Wales Police also deployed LFR during peaceful protests at an arms fair in March 2018. The force compiled a watchlist of 508 individuals from its custody database who were wanted for arrest, plus a further six people who were “involved in disorder at the previous event.” No arrests were made. Similar trends are evident in the United States, where face recognition has been used to target people engaging in protected speech, such as deployments at protests surrounding the death of Freddie Gray. Free speech and the right to protest are essential civil liberties, and government use of face recognition at these events discourages free speech, harms entire communities, and violates individual freedoms.

In 2018 the UN Special Rapporteur on the right to privacy criticized the Welsh police’s use of LFR as unnecessary and disproportionate, and urged the government and police to implement privacy assessments prior to deployment to offset violations of privacy rights. The force maintains that it is “absolutely convinced that Facial Recognition is a force for good in policing in protecting the public and preventing harm.” That conviction persists despite the fact that face recognition performs worse as the number of people in a database grows: as the likelihood of similar-looking faces increases, matching accuracy decreases.

The Global Perspective 

Previous legislative initiatives in the UK have fallen off the policy agenda, and calls from inside Parliament to suspend LFR pending legislative review have been ignored. In contrast, European policymakers have advocated for an end to government use of the technology. The European Parliament recently voted overwhelmingly in favor of a non-binding resolution calling for a ban on police use of facial recognition technology in public places. In April 2021, the European Data Protection Supervisor called for a ban on the use of AI for automated recognition of human features in publicly accessible spaces as part of the European Commission’s legislative proposal for an Artificial Intelligence Act. Likewise, in January 2021 the Council of Europe called for strict regulation of the technology and noted in its new guidelines that face recognition should be banned when used solely to determine a person’s skin color, religious or other belief, sex, racial or ethnic origin, age, health, or social status. Civil liberties groups have also called on the EU to ban biometric surveillance on the grounds that it is inconsistent with EU human rights.

The United States Congress continues to debate ways of regulating government use of face surveillance. Meanwhile, U.S. states and municipalities have taken it upon themselves to restrict or outright ban police use of face recognition technology. Cities across the United States, large and small, have stood up to this invasive technology by passing local ordinances banning its use. If the UK passes strong face recognition rules, it would set an example for governments around the world, including the United States.

Next Steps

Face recognition is a dangerous technology that harms privacy, racial justice, free expression, and information security. And the UK’s silent rollout has facilitated unregulated government surveillance of this personal biometric data. Please join us in demanding a ban on government use of face recognition in the UK. Together, we can end this threat.

Paige Collings

Study of Electronic Monitoring Smartphone Apps Confirms Advocates’ Concerns of Privacy Harms

2 months 1 week ago

Researchers at the University of Washington and Harvard Law School recently published a groundbreaking study analyzing the technical capabilities of 16 electronic monitoring (EM) smartphone apps used as “alternatives” to criminal and civil detention. The study, billed as the “first systematic analysis of the electronic monitoring apps ecosystem,” confirmed many advocates’ fears that EM apps allow access to wide swaths of information, often contain third-party trackers, and are frequently unreliable. The study also raises further questions about the lack of transparency involved in the EM app ecosystem, despite local, state, and federal government agencies’ increasing reliance on these apps.

As of 2020, over 2.3 million people in the United States were incarcerated, and an additional 4.5 million were under some form of “community supervision,” including those on probation, parole, pretrial release, or in the juvenile or immigration detention systems. While EM in the form of ankle monitors has long been used by agencies as an “alternative” to detention, local, state, and federal government agencies have increasingly been turning to smartphone apps to fill this function. The way it works is simple: in lieu of incarceration/detention or an ankle monitor, a person agrees to download an EM app on their own phone that allows the agency to track the person’s location and may require the person to submit to additional conditions such as check-ins involving face or voice recognition. The low costs associated with requiring a person to use their own device for EM likely explain the explosion of EM apps in recent years. Although there is no accurate count of the total number of people who use an EM app as an alternative to detention, in the immigration context alone, today nearly 100,000 people are on EM through the BI Smartlink app, up from just over 12,000 in 2018. Such widespread use makes public understanding of these apps, and of the information they collect, retain, and share, all the more urgent.

Technical Analysis

The study’s technical analysis, the first of its kind for these types of apps, identified several categories of problems with the 16 apps surveyed. These include privacy issues related to the permissions these apps request (and often require), concerns around the types of third-party libraries and trackers they use, who they send data to and how they do it, as well as some fundamental issues around usability and app malfunctions.

Permissions

When an app wants to collect data from your phone, e.g. by taking a picture with your camera or capturing your GPS location, it must first request permission from you to interact with that part of your device. Because of this, knowing which permissions an app requests gives a good idea of what data it can collect. And while denying unnecessary permission requests is a great way to protect your personal data, people under EM orders often don’t have that luxury, and some EM apps simply won’t function until all permissions are granted.

Perhaps unsurprisingly, almost all of the apps in the study request permissions like GPS location, camera, and microphone access, which are likely used for various check-ins with the person’s EM supervisor. But some apps request more unusual permissions. Two of the studied apps request access to the phone’s contacts list, which the authors note can be combined with the “read phone state” permission to monitor who someone talks to and how often they talk. And three more request “activity recognition” permissions, which report if the user is in a vehicle, on a bicycle, running, or standing still.
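
For readers who want to check an app themselves, the following Python sketch lists which of the permissions highlighted in the study appear in an app's decoded AndroidManifest.xml. It assumes the APK has been unpacked with a tool like apktool; the path and the permission watchlist are illustrative.

```python
import xml.etree.ElementTree as ET

ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

# Permissions the study highlights: location, camera, microphone,
# contacts, phone state, and activity recognition.
WATCHLIST = {
    "android.permission.ACCESS_FINE_LOCATION",
    "android.permission.CAMERA",
    "android.permission.RECORD_AUDIO",
    "android.permission.READ_CONTACTS",
    "android.permission.READ_PHONE_STATE",
    "android.permission.ACTIVITY_RECOGNITION",
}

def declared_watchlist_permissions(manifest_path="decoded_app/AndroidManifest.xml"):
    """List which watched permissions a decoded AndroidManifest.xml declares."""
    root = ET.parse(manifest_path).getroot()
    declared = {node.get(ANDROID_NS + "name") for node in root.iter("uses-permission")}
    return sorted(declared & WATCHLIST)

if __name__ == "__main__":
    for permission in declared_watchlist_permissions():
        print("requests:", permission)
```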

Third-Party Libraries & Trackers

App developers almost never write every line of code that goes into their software, instead depending on so-called “libraries” of software written by third-party developers. That an app includes these third-party libraries is hardly a red flag by itself. However, because some libraries are written to collect and upload tracking data about a user, it’s possible to correlate their existence in an app with intent to track, and even monetize, user data.

The study found that nearly every app used a Google analytics library of some sort. As EFF has previously argued, Google Analytics might not be particularly invasive if it were used only in a single app, but when combined with its nearly ubiquitous use across the web, it provides Google with a panoptic view of individuals’ online behavior. Worse yet, the app Sprokit “appeared to contain the code necessary for Google AdMob and Facebook Ads SDK to serve ads.” If that is indeed the case, Sprokit’s developers are engaging in an appalling practice of monetizing their captive audience.
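
A crude but common way to spot these SDKs is to search an app's decompiled code for their package prefixes. The Python sketch below walks an apktool output directory looking for prefixes associated with the SDKs named above; the directory layout and prefix list are assumptions for illustration, and a match only indicates that a library is bundled, not how it is used.

```python
import os

# Package prefixes (as smali directory paths) for the SDKs the study names:
# Google/Firebase analytics, Google AdMob, and the Facebook Ads SDK.
TRACKER_PREFIXES = {
    "com/google/android/gms/analytics": "Google Analytics",
    "com/google/firebase/analytics": "Firebase Analytics",
    "com/google/android/gms/ads": "Google AdMob",
    "com/facebook/ads": "Facebook Ads SDK",
}

def find_bundled_sdks(decoded_dir="decoded_app"):
    """Report which tracker SDK package directories appear in decompiled smali code."""
    found = set()
    for dirpath, _dirs, _files in os.walk(decoded_dir):
        relative = os.path.relpath(dirpath, decoded_dir).replace(os.sep, "/")
        for prefix, name in TRACKER_PREFIXES.items():
            if prefix in relative:
                found.add(name)
    return sorted(found)

if __name__ == "__main__":
    print("Bundled SDKs found:", find_bundled_sdks() or "none")
```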

Information Flows

The study aimed to capture the kinds of network traffic these apps send during normal operation, but it was limited by the lack of active accounts for any of the apps (either because the researchers could not create their own accounts or chose not to, to avoid agreeing to terms of service). Even so, by installing software that allowed them to inspect app communications, the researchers were able to draw some worrying conclusions about a few of the studied apps.

Nearly half of the apps made requests to web domains that could be uniquely associated with the app. This is important because even though those web requests are encrypted, the domain they are addressed to is not, meaning that whoever controls the network a user is on (e.g. coffee shops, airports, schools, employers, Airbnb hosts, etc.) could theoretically know if someone is under EM. One app we’ve already mentioned, Sprokit, was particularly egregious in how often it sent data: every five minutes, it would phone home to Facebook’s ad network endpoint with numerous data points harvested from phone sensors and other sensitive data.
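
To make the network-observer risk concrete, here is a minimal sketch using the scapy library to pull DNS lookups (one unencrypted channel that, like the unencrypted domain in an HTTPS request, is visible to whoever runs the network) out of a packet capture. The capture file name and the EM-app domain are hypothetical placeholders.

```python
from scapy.all import DNSQR, rdpcap  # pip install scapy

# Hypothetical placeholder: a domain contacted only by a particular EM app.
EM_APP_DOMAINS = {"checkin.example-monitoring-vendor.com"}

def domains_queried(pcap_path):
    """Collect the hostnames looked up via DNS in a packet capture."""
    names = set()
    for packet in rdpcap(pcap_path):
        if packet.haslayer(DNSQR):
            names.add(packet[DNSQR].qname.decode(errors="replace").rstrip("."))
    return names

if __name__ == "__main__":
    seen = domains_queried("shared_network_capture.pcap")
    if seen & EM_APP_DOMAINS:
        print("A network observer could infer that someone here is under electronic monitoring.")
```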

It’s worth reiterating that, due to the limitations of the study, this is far from an exhaustive picture of each EM app’s behavior. There are still a number of important open questions about what data they send and how they send it.

App Bugs and Technical Issues

As with any software, EM apps are prone to bugs. But unlike other apps, if someone under EM has issues with their app, they’re liable to violate the terms of their court order, which could result in disciplinary action or even incarceration—issues that those who’ve been subjected to ankle monitors have similarly faced.

To study how bugs and other issues with EM apps affected the people forced to use them, the researchers performed a qualitative analysis of the apps’ Google Play store reviews. These reviews were, by a large margin, overwhelmingly negative. Many users report being unable to successfully check in with the app, sometimes due to buggy GPS or facial recognition, and other times because they never received the notification for a check-in. One user describes such an issue in their review: “I’ve been having trouble with the check-ins not alerting my phone which causes my probation officer to call and threaten to file a warrant for my arrest because I missed the check-ins, which is incredibly frustrating and distressing.”
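
As a sketch of how such complaints could be tallied at scale, the snippet below counts complaint-related keywords across a CSV of collected app reviews. The file layout and keyword list are assumptions, and this is only a crude stand-in for the study's actual qualitative coding.

```python
import csv
from collections import Counter

# Illustrative complaint keywords drawn from the issues described above.
KEYWORDS = ("check-in", "gps", "notification", "facial", "crash", "warrant")

def keyword_counts(reviews_path="em_app_reviews.csv", text_column="review_text"):
    """Count how often each complaint keyword appears across collected reviews."""
    counts = Counter()
    with open(reviews_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            text = (row.get(text_column) or "").lower()
            counts.update(keyword for keyword in KEYWORDS if keyword in text)
    return counts

if __name__ == "__main__":
    for keyword, count in keyword_counts().most_common():
        print(f"{keyword}: {count} reviews")
```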

Privacy Policies

As many people who use online services and mobile apps are aware, before you can use a service you often have to agree to a lengthy privacy policy. And whether or not you’ve actually read it, you and your data are bound by its terms if you choose to agree. People who are under EM, however, don’t get a say in the matter: the terms of their supervision are what they’ve agreed to with a prosecutor or court, and often those terms will force them to agree to an EM app’s privacy policy.

And some of those policies include some heinous terms. For example, while almost all of the apps’ privacy policies contained language about sharing data with law enforcement to comply with a warrant, they also state reasons they’d share that data without a warrant. Several apps mention that data will be used for marketing. One app, BI SmartLINK, even used to have conditions which allowed the app’s developers to share “virtually any information collected through the application, even beyond the scope of the monitoring plan.” After these conditions were called out in a publication by Just Futures Law and Mijente, the privacy policy was taken down.

Legal Issues 

The study also addressed the legal context in which issues around EM arise. Ultimately, legal challenges to EM apps are likely to be difficult because although the touchstone of the Fourth Amendment’s prohibition against unlawful search and seizure is “reasonableness,” courts have long held that probationers and parolees have diminished expectations of privacy compared to the government’s interests in preventing recidivism and reintegrating probationers and parolees into the community.

Moreover, the government likely would be able to get around Fourth Amendment challenges by claiming that the person consented to the EM app. But as we’ve argued in other contexts, so-called “consent searches” are a legal fiction. They often occur in high-coercion settings, such as traffic stops or home searches, and leave little room for the average person to feel comfortable saying no. Similarly, here, the choice to submit to an EM app is hardly a choice at all, especially when faced with incarceration as a potential alternative.

Outstanding Questions

This study is the first comprehensive analysis into the ecosystem of EM apps, and lays crucial groundwork for the public’s understanding of these apps and their harms. It also raises additional questions that EM app developers and government agencies that contract with these apps must provide answers for, including:

  • Why EM apps request dangerous permissions that seem to be unrelated to typical electronic monitoring needs, such as access to a phone’s contacts or precise phone state information
  • What developers of EM apps that lack privacy policies do with the data they collect
  • What protections people under EM have against warrantless search of their personal data by law enforcement, or from advertising data brokers buying their data
  • What additional information will be uncovered by being able to establish an active account with these EM apps
  • What information is actually provided about the technical capabilities of EM apps to both government agencies contracting with EM app vendors and people who are on EM apps 

The people who are forced to deal with EM apps deserve answers to these questions, and so does the general public as the adoption of electronic monitoring grows in our criminal and civil systems.

Saira Hussain