Inside Fog Data Science, the Secretive Company Selling Mass Surveillance to Local Police

3 months ago

This article is part of EFF’s investigation of location data brokers and Fog Data Science. Be sure to check out our issue page on Location Data Brokers.

A data broker has been selling raw location data about individual people to federal, state, and local law enforcement agencies, EFF has learned. This personal data isn’t gathered from cell phone towers or tech giants like Google — it’s obtained by the broker via thousands of different apps on Android and iOS app stores as part of the larger location data marketplace.

The company, Fog Data Science, has claimed in marketing materials that it has “billions” of data points about “over 250 million” devices and that its data can be used to learn about where its subjects work, live, and associate. Fog sells access to this data via a web application, called Fog Reveal, that lets customers point and click to access detailed histories of regular people’s lives. This panoptic surveillance apparatus is offered to state highway patrols, local police departments, and county sheriffs across the country for less than $10,000 per year.

The records received by EFF indicate that Fog has past or ongoing contractual relationships with at least 18 local, state, and federal law enforcement clients; several other agencies took advantage of free trials of Fog’s service. EFF learned about Fog after filing more than 100 public records requests over several months for documents pertaining to government relationships with location data brokers. EFF also shared these records with The Associated Press.

Troublingly, those records show that Fog and some law enforcement did not believe Fog’s surveillance implicated people’s Fourth Amendment rights or required authorities to get a warrant.

In this post, we use public records to describe how Fog’s service works, where its data comes from, who is behind the company, and why the service threatens people’s privacy and safety. In a subsequent post, we will dive deeper into how it is used by law enforcement around the country and explore the legal issues with its business model.

How does the service work?

In materials provided to law enforcement, Fog states that it has access to a “near real-time” database of billions of geolocation signals derived from smartphones. It sells subscriptions to a service, which the company usually billed as “Fog Reveal,” that lets law enforcement look up location data in its database through a website. The smartphone signals in Fog’s database include latitude, longitude, timestamp, and a device ID. The company can access historical data reaching back to at least June 2017.

Fog’s materials describe how users can run two different queries:

  1. “Area searches”: This feature allows law enforcement to draw one or more shapes on a map and specify a time range they would like to search. The service will show a list of all cell-phone location signals (including location, time, and device ID) within the specified area(s) during that time. The records EFF obtained do not say how large an area Fog’s area searches can cover in a single query.
  2. “Device searches”: Law enforcement can specify one or more devices they’ve identified and a time range, and Fog Reveal will return a list of location signals associated with each device. Fog’s materials describe this capability as providing a person’s “pattern of life,” which allows authorities to identify “bed downs,” presumably meaning where people sleep, and “other locations of interest.” In other words, Fog’s service allows police to track people’s movements over long periods of time. A conceptual sketch of both query types appears after this list.
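
Neither Fog Reveal’s query interface nor its database schema is public, but the records describe each signal as a latitude, longitude, timestamp, and device ID. The Python sketch below is only a conceptual illustration of what the two query types amount to; the record layout, function names, and the bounding-box simplification of “drawing shapes on a map” are assumptions, not Fog’s actual implementation.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List, Set

# Hypothetical record layout based on the fields Fog's materials describe:
# latitude, longitude, timestamp, and a device ID. Fog's real schema is not public.
@dataclass
class Ping:
    device_id: str
    lat: float
    lon: float
    ts: datetime

def area_search(pings: List[Ping],
                lat_min: float, lat_max: float,
                lon_min: float, lon_max: float,
                start: datetime, end: datetime) -> List[Ping]:
    """Conceptual 'area search': every signal inside a rectangle during a time window.
    Fog lets users draw arbitrary shapes; a bounding box stands in for that here."""
    return [p for p in pings
            if lat_min <= p.lat <= lat_max
            and lon_min <= p.lon <= lon_max
            and start <= p.ts <= end]

def device_search(pings: List[Ping], device_ids: Set[str],
                  start: datetime, end: datetime) -> List[Ping]:
    """Conceptual 'device search': the location history of specific devices over a time range."""
    return sorted((p for p in pings
                   if p.device_id in device_ids and start <= p.ts <= end),
                  key=lambda p: p.ts)
```

Chained together, an area search yields the device IDs seen at a place and time, and a device search turns any one of those IDs into a months-long movement history, which is the combination described later in this post.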

Fog Reveal is typically licensed for a year at a time, and records show that over time the company has charged police agencies between $6,000 and $9,000 a year. That basic service tier typically includes 100 queries per month, though Fog sells larger monthly query allocations for an additional fee. For example, in 2019, the California Highway Patrol paid $7,500 for a year of access to Reveal plus $2,400 for 500 more queries per month.

Fog states that it does not collect personally identifying information (for example, names or email addresses). But Fog allows police to track the location of a device over long stretches of time — several months with a single query — and Fog touts the use of its service for “pattern of life” analyses that reveal where the device owner sleeps, works, studies, worships, and associates. This can tie an “anonymous” device to a specific, named individual.

Together, the “area search” and the “device search” functions allow surveillance that is both broad and specific. An area search can be used to gather device IDs for everyone in an area, and device searches can be used to learn where those people live and work. As a result, using Fog Reveal, police can execute searches that are functionally equivalent to the geofence warrants that are commonly served to Google. 

This service could be used to determine who was near the scene of a violent crime around the time it was committed. It also could be used to search for visitors to a Planned Parenthood or an immigration law office on a specific day or everyone who attended a protest against police violence.

Image from Fog’s marketing brochure, sent to North Dakota and Chino, CA, which appears to show a single location signal as viewed with Fog’s service.

The basics of Fog’s services are laid out in a marketing brochure which was sent to several prospective customers. The brochure explains that Fog’s “unique, proprietary and patented data platform” processes data from “hundreds of millions of mobile devices” and can deliver “both forensic and predictive analytics and near real-time insights on the daily movements of the people identified with those mobile devices[.]” The materials state that Fog’s collection of people’s location data is “100% Opt-in. All users opt-in to location data collection,” though as we will discuss later, this claim is hard to take at face value.

At the core of Fog’s pitch is a series of claims about the breadth and depth of its location data. It claims to process over 250 million devices per month within the United States. (There are an estimated 301 million mobile devices nationally). According to Fog, these devices generate 15 billion signals per day, or over 5 trillion per year.

 

Excerpt from Fog’s marketing brochure describing the properties of its dataset

EFF could not verify Fog’s claims. But there is reason to be skeptical: Thanks to the nature of its data sources, it’s likely that Fog can only access location data from users while they have apps open, or from a subset of users who have granted background location access to certain third-party apps. Public records indicate that some devices average several hundred pings per day in the dataset, while others are seen just a few times a day. Users who do not install many third-party apps, or who have opted out of tracking via Apple’s App Tracking Transparency (ATT), may not be present in the dataset at all. 

Additionally, the records EFF reviewed show that several of the agencies that worked with Fog have since canceled their subscriptions, and at least one said it was not sure whether it had ever used Fog to successfully solve a case. These potential shortcomings are not a reason to underestimate Fog’s invasiveness or its capability for unfettered dragnet monitoring, but it is important to understand the service’s limits. Fog’s data may be patchy and incomplete, covering some people some of the time, so it may not be able to locate any given person at a specific moment. Yet if we take Fog’s claims at face value, the company collects location data about a majority of people in the United States every month, and its service may still be capable of identifying a significant portion of the hundreds of attendees at a protest or other sensitive location.

The brochure gives some insight into how Fog intends for its service to be used. It lists a series of “use cases” from the dramatic (“Human Trafficking,” “Terrorism Investigations,” “Counter-Intelligence”) to the more mundane (“Drug Investigations,” “Soft Target Protection”). It seems to be aimed at both local law enforcement and at intelligence/homeland security agencies. 

The language used in the document often invokes terms used by intelligence agencies. For example, a core advertised feature is the ability to run a “pattern of life analysis,” which is what intelligence analysts call a profile of an individual’s habits based on long-term behavioral data. Fog Reveal is also “ideal for tipping and cueing,” which means using low-resolution, dragnet surveillance to decide where to perform more targeted, high-resolution monitoring. The brochure also includes a screenshot of Fog Reveal being used to monitor “a location at the US/Mexico border,” and an alternate version of the brochure listed “Border Security/Tracking” as a possible use case. As we will discuss in our next post, records show that Fog has worked with multiple DHS-affiliated fusion centers, where local and federal law enforcement agencies share resources and data.

In other materials, Fog emphasizes the convenience of its service. An email titled “Solve crimes faster: Here’s how” reads:

Find strong leads at your desk in minutes. Just type in a location, date and time, then watch app signals disclose what mobile devices were present at the crime scene. We’d love to help your department save time and money too. Let’s schedule a 10-minute demo next week.

Fog’s Reveal customers are given direct access to raw location data, which can be exported from the web portal into portable formats for processing elsewhere. Fog emphasizes that its license permits “processing, analysis, and sub-licensing of location data,” potentially allowing law enforcement to share the data with private contractors. Fog routinely encouraged law enforcement agencies to share one license among multiple users, and some customers used Fog to run queries on behalf of other law enforcement agencies on request.

Fog claims that it only sells its Reveal service to law enforcement agencies. But Fog’s materials also advertise “out-sourced analytic services” for non-law-enforcement customers, including “private sector security clients.” An email exchange between Fog and Iowa police appears to corroborate this policy: Fog says it will not grant private companies direct access to its database, but it will perform analysis on behalf of “law firms and investigative firms.” According to a brochure, this analysis may include:

  • Verifiable presence at a location on a specific date and time
  • Likely locations for residences, places of business and frequent activities
  • Links to other individuals, places and devices
  • Patterns of activity correlating to certain events, times or alibis

In other words, Fog advertises that it can use its data to surveil the private lives of individuals on behalf of private companies. The records EFF has obtained do not provide any details about specific relationships Fog has with any private-sector clients. 

Where does the data come from?

The kind of data that Fog sells to law enforcement originates from third-party apps on smartphones. Apps that have permission to collect a user’s location can share that data with third-party advertisers or data brokers in exchange for extra ad revenue or direct payouts. Downstream, data brokers collect data from many different apps, then link the different data streams to individual devices using advertising identifiers. Data brokers often sell to other data brokers, obfuscating the sources of their data and the terms on which it was collected. Eventually, huge quantities of data can end up in the hands of actors with the power of state violence: police, intelligence agencies, and the military.
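
To make the “linking” step concrete, here is a minimal, hedged sketch of how records arriving from two unrelated apps can be stitched into per-device histories using nothing but a shared advertising identifier. The app names, field names, and sample values are hypothetical; the point is only that a common ad ID is enough to merge separate data streams into one profile.

```python
from collections import defaultdict

# Hypothetical pings as they might arrive from two unrelated apps' SDKs.
# The only field the streams share is the device's advertising identifier ("ad_id").
weather_app_pings = [
    {"ad_id": "ad-0001", "lat": 38.627, "lon": -90.199, "ts": "2021-06-01T08:02:00Z"},
    {"ad_id": "ad-0002", "lat": 40.713, "lon": -74.006, "ts": "2021-06-01T09:15:00Z"},
]
navigation_app_pings = [
    {"ad_id": "ad-0001", "lat": 38.631, "lon": -90.210, "ts": "2021-06-01T08:17:00Z"},
]

def merge_by_ad_id(*streams):
    """Group pings from any number of app-derived streams into one history per ad ID."""
    profiles = defaultdict(list)
    for stream in streams:
        for ping in stream:
            profiles[ping["ad_id"]].append(ping)
    for history in profiles.values():
        # ISO 8601 timestamps in the same timezone sort correctly as strings.
        history.sort(key=lambda p: p["ts"])
    return profiles

profiles = merge_by_ad_id(weather_app_pings, navigation_app_pings)
# profiles["ad-0001"] is now a single cross-app movement trace for one device,
# even though neither app ever shared the other's data directly.
```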

Over the past few years, journalists have uncovered several links between private brokers of app-derived location data and the US government. Babel Street, best known for its open-source intelligence (OSINT) tools for analyzing social media and the like, sells location data as part of a secret add-on service called “Locate X.” Venntel, a subsidiary of marketing data company Gravy Analytics, has sold raw location data to several different US agencies, including ICE, Customs and Border Protection (CBP), and the FBI. And broker X-Mode paid app developers around 3 cents per user per month for access to location data, then sold it directly to defense contractors.

Enter Fog Data Science. Like the other companies, Fog buys data from the private market and packages it for use by law enforcement. Unlike most others, Fog seems to target smaller agencies. Venntel has sold a year’s worth of data to the Department of Homeland Security for more than $650,000; meanwhile, Fog sold its service to the sheriff of Washington County, OH, for $9,000 a year. While Venntel, Babel Street, and Anomaly 6 have made headlines for dealings with three-letter federal agencies, public records show that Fog appears to have targeted its business at local, regional, and state law enforcement. That is, Fog sells its services to police agencies that most Americans are far more likely to interact with than federal law enforcement. The records received by EFF confirm past or ongoing contractual relationships with at least 18 state and local law enforcement clients; several other agencies took advantage of free trials of Fog’s service. Notes from one agency’s meeting with Fog state that the company works with “50-60” agencies nationwide.

So where, exactly, does Fog’s data come from? The short answer is that we don’t know for sure. Several records explain that Fog’s data is sourced from apps on smartphones and tied to mobile advertising identifiers, and one agency relayed that Fog gathers data from “over 700 apps.” Fog officials have referred to a single “data provider” in emails and messages within Fog Reveal. One such message explained that the data provider “works with multiple sources to ensure adequate worldwide coverage,” and that a “newly added source” was causing technical issues.

But when asked which apps or companies originate its data, Fog has demurred. Some answers implied that Fog itself might not know. In July 2020, Fog’s Mark Massop responded to a point-blank question from the Chino police: “Our data provider protects the sources of data that they purchase from.” Massop did say that at least two sources were not included in Fog’s dataset: Twitter and Facebook. Separately, a Santa Clara County attorney wrote that Fog gets information from “lots of smaller apps,” but not Google or Facebook.

Another document, shared in 2019 with the city of Anaheim, CA, says that Fog’s portal uses “unstructured geo-spatial data emanating from open apps (Starbucks, Waze, etc.)” It’s unclear whether this means that Fog actually receives data from the apps listed, or whether Starbucks and Waze are simply examples of “open apps” that could be sharing data. On Android, both Starbucks and Waze (which is owned by Google) have access to location permissions, and both apps use third-party advertising or analytics services. Waze was also mentioned in a presentation about Fog’s capabilities to the Greensboro, NC police, according to Davin Hall, a former data analyst for the department interviewed by EFF. Per Hall, “Waze got brought up a lot” in the context of apps that could share data with Fog. “It got mentioned because it was a common one for people to have up while they were driving around, so it would be pinging regularly so that you could see the movement of the device actively,” he said.

The document further claims that Fog’s competitors all buy their data from a single source, and that Fog has a unique and privileged relationship as an “associate” of that source. 

[The use of app-based location data] for Law Enforcement and Intelligence Analysis purposes is limited to only a few carriers. Currently, these carriers purchase their source of data from an associate company of FOG Data Science. As non-associates, they are charged a much higher premium to purchase the data, thereby forcing higher prices for their products. […]

Additionally, [FOG’s] direct access to, and association with, the database vendor allows it to offer low prices both per seat license and per additional query. 

This implies that Fog’s data provider was, to its knowledge, the sole upstream source of app-based location data for all law enforcement and intelligence clients.

Links to Venntel

Other documents suggest that the “associate company” referenced in the Anaheim document — and the source of Fog’s data — is Venntel, perhaps the largest seller of location data to the government.

The most direct link comes from an email exchange with the Iowa Department of Public Safety. In response to an Iowa intelligence analyst’s question about Fog’s data, a Fog representative said it would ask “our data partner” for assistance. Fog then forwarded the question (including a device identifier) to a representative of Venntel, who sent back a series of screenshots illustrating how the analyst should interpret the data. 

There are other links between Fog and Venntel.

The marketing materials provided by Fog to multiple law enforcement agencies are nearly identical to material that Venntel provided to DHS, according to records obtained by ACLU. The style, much of the language, and several of the graphics appear to be identical. It even appears that both companies use the same screenshot of a location in Santa Teresa, NM to illustrate their capabilities. Furthermore, both companies make identical claims about their data coverage, including that they analyze “location signals from 250 million mobile devices in the U.S.” and “15+ billion daily location signals.” These claims could be evidence that both companies have access to the same dataset.

Other records connect the two companies as well. One of the first records EFF received was a version of Fog’s Software License Agreement (SLA) from the Missouri State Highway Patrol. A piece of text in the header—edited to be hidden in the final document, but not deleted—reads “Venntel Analytics, Inc. Event Data Licensing Agreement.”

Finally, our investigation into the code hosted at fogreveal.com turned up several literal links to Venntel. Many different URLs with the word “Venntel” in their path are referenced in the code. For example, when a Reveal user performs any geofenced device query, that query is submitted by sending a request to the URL path “/Venntel/GetLocationData.”

This collection of evidence suggests that Venntel is Fog’s “associate,” that is, the source of its data. This conclusion would be consistent with Fog’s claim that its “associate” was the only source of data for other law-enforcement-facing location data brokers. Previous reporting has revealed that Venntel supplies data to other brokers, including Babel Street, which sells location data to the government through its secret “Locate X” service.

EFF has redacted this screenshot to remove potentially identifiable information.

Records released to EFF also give us new information about how Venntel works. The screenshots appear to be taken from Venntel’s own web-based portal. It has previously been reported that Venntel lets users search for devices in a specific area, then perform deep dives on specific devices. This functionality parallels Fog Reveal’s “area search” and “device search” capabilities. To our knowledge, this is the first time the public has been able to see what Venntel’s user interface looks like. The interface is similar to Fog’s, though the visual style is slightly different. Venntel’s interface also appears to display more information than Fog’s does, including an IP address associated with each signal. You can read more about how Fog Reveal likely operates in our deep dive into its code.

Consent and Identifiability

In marketing materials and emails, Fog has reassured prospective customers that its data is “100% opt-in” and that “no PII [personally-identifiable information] is ever collected.” But records obtained by EFF, along with the nature of precise, individualized location data, show that the data is incredibly personal and can easily identify people.

First, Fog’s assertion that the people in its database have “opted in” rests on a legal fiction of consent that EFF, courts, and members of Congress have repeatedly criticized because it fails to adequately protect people’s privacy. Modern smartphones require user consent before allowing certain kinds of data, including location, to be shared with apps. However, phones do very little to limit how the data is used after that permission is obtained. As a result, every permission is an all-or-nothing proposition: when you let a weather app access your location in order to see a five-day forecast, you may also give it the ability to sell, share, and use that data for whatever other purposes it chooses. In the United States, often the only legal limits on an app’s use of data are those it places on itself in a privacy policy. And these policies can be written so vaguely and permissively that there are, functionally, no limits at all.

In other words, even if a user consents to an app collecting location data, it is highly unlikely that they consent to that data winding up in Fog’s hands and being used for law enforcement surveillance.

Fog’s second claim, that its data contains no personally identifying information, is hard to square with common understandings of the identifiability of location data as well as with records showing Fog’s role in identifying individuals. 

Location data is understood to be “personally identifying” under many privacy laws. The Colorado Privacy Act specifically defines “identified individuals” as people who can be identified by reference to “specific geolocation data.” The California Privacy Rights Act considers “precise geolocation data” associated with a device to be “sensitive personal information,” which is given heightened protections over other kinds of personal information. These definitions exist because location data traces can often be tied back to individuals even in the absence of other PII. Academic researchers have shown over and over again that de-identified or “anonymized” location data still poses privacy risks.

Fog’s data can allow police to determine where a person sleeps, works, or worships; where they go to get lunch, or health care, or to unwind on a Friday night. Tying a location trace to a real identity is often more of a mild inconvenience than a serious barrier to police. Fog’s own literature clarifies this: in a PowerPoint presentation shared with Chino, CA, it explains, “While there is no PII data provided, the ability to identify a location based on a device's signal strength can provide potential identifications when combined with other data that agencies have access to.” After attending a meeting with Fog representatives, a St. Louis County officer summarized: “There is no PI linked to the [device ID]. (But, if we are good at what we do, we should be able to figure out the owner).”

Furthermore, Fog’s data is directly tied to “hashed” advertising identifiers, and multiple records show how Fog has helped its customers use “device searches” to track devices with specific ad IDs. A phone’s ad ID is available to anyone with access to the device, and ad IDs are shared widely among app developers, advertising companies, and data brokers of all stripes. Once an agency has access to a target’s ad ID, it can use Fog to search for a detailed history of that person’s movements.
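
The records do not explain how Fog’s identifiers are “hashed,” but hashing by itself does little to anonymize an ad ID: anyone who already knows a target’s ad ID can hash it the same way and look up the corresponding records. The sketch below illustrates that re-linking step, assuming an unsalted SHA-256 hash purely for illustration; Fog’s actual hashing scheme is not described in the records EFF obtained.

```python
import hashlib

def hash_ad_id(ad_id: str) -> str:
    # Assumption for illustration only: an unsalted SHA-256 of the normalized ad ID.
    # The records EFF obtained do not describe Fog's actual hashing scheme.
    return hashlib.sha256(ad_id.strip().lower().encode()).hexdigest()

# A "hashed" dataset contains no raw ad IDs...
hashed_dataset = {
    hash_ad_id("10a4bc7e-93f1-4c2b-8d55-2f60f1e2a9b3"): ["ping 1", "ping 2"],
}

# ...but anyone who already has the target's ad ID (from a seized phone, an app
# developer, or another broker) can recompute the hash and pull the history.
target_ad_id = "10a4bc7e-93f1-4c2b-8d55-2f60f1e2a9b3"
print(hashed_dataset.get(hash_ad_id(target_ad_id)))  # ['ping 1', 'ping 2']
```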

Emails between Fog and the California Highway Patrol indicate that Fog did not believe the Carpenter v. U.S. decision—which held that law enforcement needs a warrant to access cell site location information (CSLI)—applied to its service, and therefore that no warrant was required to access the app-based location data that Fog sells. But as we have discussed, Fog’s data is acquired and sold without meaningful consent and can frequently be used to track individuals just as effectively as CSLI. We discuss the legal issues with Fog and what we know about how agencies have treated the law in a subsequent post.

A perfect storm

The market for app-derived location data is massive. Dozens of companies actively buy and sell this data with assistance from thousands more. Many of them put raw data up for sale on the open market. And at least a handful of companies sell this kind of data to the federal government. Despite this, Fog Data Science is the only company EFF is aware of that sells individualized location data to state and local law enforcement in the United States.

Fog’s product represents a direct and uniquely modern threat to our privacy. Its business is only possible because of a cascade of decisions by tech platforms, app developers, lawmakers, and judges, all of whom have failed to adequately protect regular users. Apple and Google have designed their mobile operating systems to support third-party tracking, giving brokers like Fog essential tools like the ad identifier. Thousands of app developers have monetized their software by installing invasive tracking code on behalf of data brokers and ad tech. Congress has repeatedly failed to pass even basic privacy protections, allowing a multibillion dollar data broker industry to operate in the open. And courts have failed to clarify that a person’s Fourth Amendment rights aren’t diminished just because they’re carrying a smartphone that can transmit their location to apps and data brokers.

Fog Reveal can be used to harm vulnerable people and suppress civil liberties. Fog’s area searches can let police perform dragnet surveillance on attendees of peaceful protests, religious services, or political rallies. Some of Fog’s customers already have a history of doing so by other means: an investigation by the ACLU revealed how the California Highway Patrol used helicopters with high-tech surveillance cameras to capture zoomed-in video of attendees at peaceful demonstrations against police violence.

Fog’s service is especially dangerous in the wake of the Supreme Court’s Dobbs decision. Many states have criminalized abortion, giving state and local police license to unleash their surveillance powers against people seeking reproductive healthcare as well as the professionals that provide it. Fog Reveal lets an officer sitting at a desk draw geofences around abortion clinics anywhere in the world, then track all devices seen visiting them. 

Finally, Fog’s service is ripe for abuse. The records we received indicated that some agencies required warrants to use Fog in some circumstances, but did not show that law enforcement placed any limits on individual officers’ use of the technology, nor that agencies conducted routine oversight or auditing. It is possible that officers with access to Fog Reveal could misuse it for personal ends, just as some have misused other investigative tools in the past. In June, news broke that a US Marshal had been charged with allegedly using a different geolocation surveillance service, sold at the time by the prison payphone company Securus, to track “people he had personal relationships with as well as their spouses” in 2018. (The US Marshals have previously contracted with Fog as well.) It’s possible that officers could similarly misuse Fog to surveil people they know.

How to protect yourself

The good news, if any, is that it is relatively straightforward to protect yourself from Fog’s surveillance. Fog relies on data gathered by code embedded in third-party apps. That means you can cut off its supply by revoking location permissions from any apps that you do not completely trust. Furthermore, turning off location services at the operating system level should prevent Fog and other app-based data brokers from accessing your location data at all. (This does not always prevent location data from being gathered by other actors, like your cellular carrier. You can read more about avoiding a range of threats to privacy in one of EFF’s Surveillance Self-Defense guides.)

There is no evidence that Google Maps, Apple, or Facebook provide data to Fog, and emails from Fog representatives and its customers state that Fog does not gather data from Google or Facebook. While there are other reasons to restrict Google’s access to your location, it does not appear as though data shared exclusively with one of these companies will end up in Fog’s database.

Finally, evidence suggests that Fog’s service relies on using advertising identifiers to link data together, so simply disabling your ad ID may stymie Fog’s attempts to track you. One email suggests that Apple’s App Tracking Transparency initiative — which made ad ID access opt-in and resulted in a drastic decrease in the number of devices sharing that information — made services like Fog less useful to law enforcement. And former police analyst Davin Hall told EFF that the company wanted to keep its existence secret so that more people would leave their ad IDs enabled. 

You can reset or disable your ad ID by following the instructions here.

Fog and its customers have spent years trying to remain in the shadows. Its service cannot function properly otherwise. Exposed to the light of day, Fog’s product becomes clear: an all-seeing eye that invades millions of Americans’ privacy without warrant or accountability.

Read more about Fog Data Science: 

 

Bennett Cyphers

The SECURE Notarization Act Will Create a Race to the Bottom for Privacy

3 months ago

Earlier this week, EFF, the Center for Democracy and Technology, and Demand Progress sent a letter to the Senate expressing our concerns about H.R. 3962, The Securing and Enabling Commerce Using Remote and Electronic Notarization Act of 2021 or SECURE Notarization Act. This bill would require states to recognize remote online notarizations that meet a weak minimum federal standard—regardless of whether those notarizations meet potentially stronger pre-existing state standards.

This bill fails to require strong minimum privacy standards while simultaneously requiring the collection and retention of personally identifying and necessarily sensitive information that notaries normally wouldn’t collect in the first place. And, crucially, the bill does not prohibit the sale or disclosure of the data collected during an online notarization.

Although remote online notarization may be convenient for some consumers, it is critical that states be able to implement safeguards to protect their residents with respect to the authentication process and the security and retention of private, sensitive information. As currently written, this bill would effectively prevent states from enforcing their own consumer protections for remote online notarizations.

If passed as written, the SECURE Notarization Act would require states to recognize out-of-state notarizations that do not comply with potentially stronger state standards. This encourages a race to the bottom. States will have a clear incentive to establish the weakest possible standards in a bid to attract notary businesses to their state. Not only does this diminish the rights of consumers, but it would also create significant enforcement problems, as states do not have regulatory oversight of out-of-state notaries.

People need to use notaries for some of life’s most significant transactions, such as end-of-life planning and real estate purchases or sales. These transactions have long-lasting consequences for individuals and their families. As COVID-19 has forced millions of consumers and businesses to rethink digital security and identity safeguards, it is critical that Congress consider how to best maintain robust protections necessary to protect against abuse and fraud. Preempting state law is not the answer. 

India McKinney

EFF Calls for Limiting Mandatory Cooperation, Safeguarding Human Rights in International Cybercrime Investigations as Talks Resume for Proposed UN Cybercrime Treaty

3 months ago

In a new round of talks this week to formulate a UN Cybercrime Treaty, EFF is calling for strictly limiting the scope of the convention’s international cooperation provisions and safeguards to ensure that states respect human rights when responding to legal assistance requests.

EFF is among a group of digital and human rights organizations participating in the UN-convened talks, which started in February. A third round of discussions that began yesterday in New York focuses on the scope of the treaty’s cooperation and mutual assistance provisions. The goal is to work towards finding consensus on the extent to which governments should cooperate and provide legal assistance to each other in cybercrime investigations.

On the table are extremely serious and contentious issues about the contours of cooperation among international law enforcement agencies for accessing user data in territories outside their own, respecting existing mutual assistance agreements, navigating national laws governing criminal investigations, and, importantly, ensuring privacy and human rights protections are included and prioritized.

Early work on the treaty has demonstrated that finding consensus about even baseline matters, such as the definition of cybercrime and what crimes should be covered by the convention, is challenging.

The first meeting of the Ad-Hoc Committee, the UN-convened body of over 100 government officials from around the world charged with drafting the treaty (with a Secretariat provided by the UN Office on Drugs and Crime), took place four days after Russia’s invasion of Ukraine. Russia was the initial driving force behind the creation of the treaty, leaving Member States questioning whether Russia could defend claims of sovereignty in formulating cybercrime provisions while invading Ukraine and unleashing cyberattacks.

At the second meeting in June, we were alarmed to see Member States pushing to cover a broad array of crimes under the treaty, including content-related crimes. We adamantly oppose this on grounds that it will likely result in overbroad, easily abused laws sweeping up lawful speech and threatening the free expression rights of people around the world.

In our comments for this week’s third round of talks, EFF outlined a strategy for carefully limiting the scope of cooperation and safeguarding human rights.

Our recommendations comprise three inter-related categories: clear limits on cooperation and the provision of technical assistance; safeguards for human rights and guarantees of nondiscrimination when providing assistance; and the continued use of Mutual Legal Assistance Treaties (MLATs) as the primary approach to legal cooperation.

Scope of Cooperation Should be Carefully Limited

The convention has the potential to substantively reshape international criminal law and bolster cross-border police surveillance powers to access and share users’ data, implicating the human rights of billions of people worldwide. We’ve seen vaguely worded international cybercrime laws misused to violate freedom of expression, target dissenters, and put them in danger. It’s paramount that the proposed treaty does not invite abuse and overreach by countries by obligating states to cooperate in open-ended investigations of ill-defined criminal matters.

Despite the fact that this convention is for cybercrime, some states have argued it should form the basis for international cooperation in evidence gathering for any crime under investigation. The European Union, for example, has put forward compromise language, saying it remains open to the concept of cooperation applying to the collection of evidence in not just serious crimes but any crime—a provision in the Budapest Convention.

But cross-border investigations are intrusive, posing a heightened risk to human rights. We therefore believe the proposed UN treaty should not become a general-purpose investigative tool but should be limited to actual cybercrimes or, at least, to investigations of crimes that states agree are truly serious. We also support Canada’s call for a de minimis clause out of recognition that even violations of serious offences can be trivial, and states should be permitted to refuse investigative assistance in those instances.

Some countries have also argued that the convention should form the basis for assistance outside the criminal justice system. For example, Brazil and Russia see international cooperation as including mutual assistance for investigations and prosecutions in “civil and administrative” cases and other investigations of undefined “unlawful acts.” But the privacy implications of civil or administrative investigations can be substantial and the personal consequences of some civil or administrative matters can be quite severe.

Some states may also wish to use the Convention as a means for cooperation in a range of emerging offensive cybersecurity operations that require intruding onto secure networks in order to interfere with the use of computing devices or communications networks. Such disruptive activities pose a particularly insidious threat to human rights and fall outside the normal parameters of the criminal justice system. They should not be legitimized through an international instrument.

Cooperation Must Safeguard Human Rights

The treaty should also supplement the MLAT system by establishing an adequate baseline of protection to ensure that states respond to legal assistance requests in a manner that respects human rights. Oversight and monitoring mechanisms should be built into the treaty to check whether human rights safeguards are being followed and provide a way to combat and end any abuses.

Interfering with privacy rights when cooperating in international cybercrime investigations should be explicitly prohibited unless subjected to independent authorization concluding that the incursion is likely to yield evidence of a specific crime. Data processing that is not necessary, legitimate, and proportionate, as defined in international human rights law, should also be prohibited, as should any cooperation to prosecute or punish individuals on the basis of race, religion, nationality, gender identity or political opinion.

Regrettably, there is no definitive international mechanism for enforcing human rights. States should therefore be permitted, if not obligated, to carefully and continually scrutinize cross-border access by foreign governments through independent regulators, and these regulators should be empowered to correct or even suspend cooperation with any state or agency that fails to adequately safeguard human rights.

Cooperation Must Remain Focused on the MLAT Regime

The primary vehicle through which legal assistance occurs on a global scale is still the MLAT regime, where one state asks another state’s government to use its existing legal powers to help investigate people or evidence in its territory. States have failed to invest sufficiently in the MLAT regime despite growing demand for cross-border investigative assistance, and this has led to calls and attempts at replacement systems for international cooperation.

Unfortunately, most proposals to replace the MLAT system have been accompanied by significant erosions of privacy, data protection, due process, and human rights safeguards.

Rather than throwing out the MLAT system, this treaty should revitalize it by committing governments to invest more resources and training into its operation and creating international knowledge exchange points that would help law enforcement navigate the MLAT systems of other governments.

Our full submission can be found here and a compilation of comments and submissions by Member States can be found here. The Ad-Hoc committee is scheduled to meet through Sept. 9.

Karen Gullo

Over-the-Horizon Drones Line Up But Privacy Is Not In Sight

3 months ago

The Federal Aviation Administration (FAA) will soon rule on Beyond Visual Line of Sight (BVLOS) drones, which are capable of flying while their operators (pilots) are far away. While these types of drones might offer benefits to society—think of deliveries, infrastructure inspection, and precision agriculture—they also pose serious threats to our privacy. The FAA and the BVLOS industry need to meaningfully address the privacy issues that these types of operations pose to people. Do we want a future with private industry flying drones over our heads with no transparency or protections for our privacy?

What Are BVLOS Drones?

Drones are uncrewed aircraft that can either fly autonomously or be remotely operated. Sometimes they are called unmanned aircraft systems (UAS). You have probably seen people flying small drones to take photos or videos of landscapes or events. In the last few years, drones have become very popular among hobbyists for drone racing, video, and photography. In 2016, the FAA published rules known as Part 107, and later Remote ID, which cover many of these small drones. However, the operator must be within visual line of sight of the drone—that is, you need to be able to see with your own eyes where your drone is.

BVLOS is when operators are not within visual distance of the drone, allowing the drones to fly much longer distances. The pilots could be over the horizon or even on the other side of the world.

BVLOS drones bring new operational challenges. For example, since BVLOS drones fly longer distances, maintaining communication and control of the drone is even more important. Also, because the operator is not there to see the drone and the airspace around it, maintaining awareness of the surrounding airspace becomes more challenging. For these and many other reasons, BVLOS drones require their own rules, and will be flown mostly by industry.

If you want to fly a BVLOS drone, there are no rules yet, so you would have to apply to the FAA for a waiver. Because of this uncertainty, industry has been asking the FAA for BVLOS rules. And so, the FAA convened an Aviation Rulemaking Committee (ARC) on BVLOS drones to start working on these rules.

What Are the Privacy Threats?

BVLOS rules will open the doors for larger commercial operations to fill the skies with long-distance drones. They will deliver merchandise and inspect equipment. Many communities will be impacted by multiple BVLOS drones flying over their heads, with all types of sensors that collect all kinds of data.

Because drones fly over your house and other property at low altitude, traditional privacy barriers like fences do not apply. Drones can see into your backyard and get better views of your private life.

With drones potentially flying over your home multiple times per day, industry can build a better picture of the chronology of your routines. For example, if drones have cameras, they can see whether there are cars outside, how many and what type, at what times, and even their license plates. They could also conduct facial recognition on the people present, to see who is in your backyard at various times during the week. And it’s not just about cameras: drones can also carry microphones, LiDAR, Wi-Fi scanners, and any other mounted sensor.

When industry uses sensors to collect information about you, industry all too often shares it with the government. For example, Ring collects visual and audio information from residents and passersby and has given access to law enforcement without warrants or user consent. All manner of government agencies are exploiting private surveillance networks and data brokers to circumvent protections for the people.

What Is an ARC?

ARCs are advisory groups that help the FAA make recommendations on rulemaking. A group of experts sits down for a period of time to discuss all possible aspects of a particular topic, such as operation, certification, and maintenance. Those suggestions are then considered by the FAA when it writes the rules.

In June 2021, the FAA convened a new ARC on BVLOS.  EFF, the American Civil Liberties Union (ACLU), and the Electronic Privacy Information Center (EPIC) were some of the organizations participating, though this committee was largely dominated by industry.

On March 10, 2022, the FAA released the Final Report from the UAS BVLOS ARC, which contains the recommendations that most other participants could agree to—but EFF, the ACLU, and EPIC voted as non-concurrent with the final report and submitted letters of dissent.

Why Did EFF Dissent From the Final Recommendations?

While we appreciate the effort that the FAA made to include civil society organizations like ours in the BVLOS ARC, the setup was not right for a full discussion of the civil liberties threats that these drones pose. The ARC was heavily dominated by industry, the target time to come up with a proposal was only six months, and none of the working groups were designed to address privacy issues thoroughly. Indeed, when we raised the topic of privacy, some industry groups pushed back against even having a conversation about it.

We proposed some basic principles. We also suggested that another process be created with the right members (including diverse groups and communities) to thoroughly discuss and propose privacy rules.

Our main concerns with the ARC report are:

  • Voluntary privacy practices: Non-binding principles offer no protection for the public nor any real incentive for operators to comply, leaving the field wide open for abuse.
  • Lack of transparency:  These rules do not ensure the public can understand what’s flying over them, which is the first step towards holding industry accountable.
  • Lack of community engagement and control on invasive operations.
  • Lack of thorough considerations of negative uses and impacts of drones.

You can read our letter of dissent here and our white paper here.

Transparency

Transparency is essential to empower people and to maintain accountability, but the drone industry lacks that transparency. During the BVLOS ARC, we proposed a set of balanced transparency principles for industry that addressed the possible intrusions from government. One consideration is privacy (not secrecy) for drone operators as well.

For example, we suggested that when applying to operate a BVLOS Drone, the operator must report the types of sensors and their use. This should include:

  • Type and purpose of the drone operation: The operator should detail the purpose of its operation, so the public can understand its nature and also hold them accountable for mission creep.
  • Technical capabilities: This should include not only the operational capabilities of the aircraft (distance, air time, altitude, payload weight, etc.) but also the sensors on board, their capabilities, and the data collection in which they will engage. For example, if the drone carries cameras, the operator would have to disclose the power of any zoom lens, how that zoom is controlled (automated processes or remote operator), the camera’s resolution, the camera’s spectral range, and any live AI or analytics capabilities that it uses.
  • Data collected: The operator should detail all data collection that will occur during the operation. For example, if video will be collected, this would include information on when that video will be collected.
  • How that data is used: The operator should detail the collected data's intended use, for example, for navigational purposes, detection and avoidance of obstacles, infrastructure inspection, etc.
  • Data disclosure:  Who other than the operator can access the data, or with whom will it be proactively shared, and for what purpose?
  • Privacy impact assessment: The operator should provide an assessment of how the operation—with the sensors, data collection, and sharing that it involves—will affect communities over which this operation will take place, and what mitigations will address these issues.

So What Now?

Now we wait for the FAA to publish its rules on BVLOS drone operations. We hope that this time, the FAA will address our concerns and add some of our proposals.

You can read the white paper and letter of dissent we submitted to the FAA and the UAS BVLOS ARC.

Andrés Arrieta

TechCrunch Launches Lookup Tool to Help Android Users Know if Their Device Was Compromised by a Family of Stalkerware Apps

3 months ago

The scourge of stalkerware—malicious apps used by perpetrators of domestic violence to secretly spy on their victims—is not going unchallenged or unaddressed.

Antivirus makers are increasingly adding stalkerware to the list of apps their products detect on devices; victim support groups help people figure out whether their devices are infected and how to remove the apps; app stores are banning the software and pulling any advertising for it; and law enforcement is investigating and arresting stalkerware makers and their customers.

Now, in a welcome step to make it easier for people to detect a family of stalkerware apps investigated by researcher Zack Whittaker, online tech news site TechCrunch has launched a free spyware lookup tool that allows people to check if their Android device is on a leaked list of compromised devices. These apps can be covertly loaded onto devices or laptops, allowing perpetrators to monitor in real time users’ private messages, voicemails, internet browsing, passwords, and location data, all without their knowledge or consent.

Using a device other than the one that might be infected, users can enter the suspect device’s identification numbers—its IMEI or its unique advertising ID, both of which can be found on the phone—into the tool, which will compare them to a leaked list of devices compromised by this family of stalkerware apps. The list is made up of hundreds of thousands of Android devices infected by any one of a network of nine spyware apps prior to April.

The tool will tell users if their device identification numbers match, likely match, or don’t match the devices on the TechCrunch list. Users may then check the suspected phone for signs that a malicious stalkerware app is present—TechCrunch has a guide for finding evidence that your phone was compromised. The Clinic to End Tech Abuse (CETA), part of Cornell Tech, also has a guide. Once found, stalkerware apps can be removed from users’ devices.
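
TechCrunch has not published how the tool performs its comparison, so the sketch below is only a guess at the general shape of such a lookup: normalize whatever identifier the user types in and check it against the set of identifiers from the leaked list. The sample identifiers are made up, and the “likely match” category the real tool reports is omitted here because its logic is not public.

```python
# Hypothetical sketch of checking an identifier against a leaked list of compromised
# devices. The identifiers below are invented, and TechCrunch has not published how
# its tool normalizes input, so this shows only the general shape of an exact-match lookup.
COMPROMISED = {
    "356938035643809",                        # an example IMEI-style number
    "a1b2c3d4-e5f6-7890-abcd-ef1234567890",   # an example advertising ID
}

def normalize(identifier: str) -> str:
    """Lowercase the input and strip spaces and dashes from numeric IMEIs."""
    cleaned = identifier.strip().lower()
    digits_only = cleaned.replace("-", "").replace(" ", "")
    return digits_only if digits_only.isdigit() else cleaned

def check_device(identifier: str) -> str:
    return "match" if normalize(identifier) in COMPROMISED else "no match"

print(check_device("356938035643809"))   # match
print(check_device("490154203237518"))   # no match
```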

Users whose phones are found to be compromised should put together a safety plan before removing stalkerware from their phones—removing the spyware likely triggers an alert to the person who planted it, which can create an unsafe situation. The Coalition Against Stalkerware has resources for victims of stalkerware.

The tool is the result of a TechCrunch investigation earlier this year revealing that at least nine consumer-grade stalkerware apps, part of a massive, mostly hidden stalkerware operation, shared a common security flaw that is exposing the personal data of hundreds of thousands of Android device users.

The investigation found victims in virtually every country, with large groups in the U.S., Europe, Brazil, Indonesia, and India. TechCrunch contacted the company that appeared to be behind the operation to warn them about the security flaw, and received no answer. TechCrunch decided not to reveal the flaw for fear that it would be exploited, exposing even more data.

A break came in June, when a source provided TechCrunch with a cache of files dumped from the internal servers of one of the spying apps. The files included a list of every Android device that was compromised by any of the nine spyware apps. The list didn’t contain enough information for TechCrunch to identify or notify each device owner. But, after verifying the authenticity of the list, TechCrunch used it to create the tool.

The tool isn’t perfect—if a phone was infected with stalkerware after April, it won’t be on the list the tool uses. The tool will only tell users whether their phones were infected with this class of stalkerware before April. The family is made up of nine specific apps—if your device is infected with a stalkerware app other than those nine, the tool won’t find a match.

Stalkerware is always adapting and changing, so survivors of domestic abuse and others for whom stalkerware is a concern face an ever-shifting threat landscape. TechCrunch’s research and newly-launched tool may help to provide peace of mind to a significant number of Android users. We hope that researchers continue to monitor the stalkerware ecosystem and raise the cost and difficulty of spying on Android devices with impunity.

Karen Gullo

Trans Youths Need Data Sanctuary

3 months ago

A growing number of states have prohibited transgender youths from obtaining gender-affirming health care. So these youths and their families must travel out-of-state for necessary health care. The states they visit are health care sanctuaries.

These states must also be data sanctuaries for transgender youths.

Earlier this year, the governor of Texas ordered state child welfare officials to launch child abuse investigations against parents whose transgender children received gender-affirming health care. We can expect such state officials to investigate parents who travel with their children out-of-state to receive this care. Those officials will seek evidence from the places where the care occurred.

To address this problem, California State Sen. Scott Wiener authored S.B. 107. EFF is proud to support this bill. In three ways, it would protect families coming to California for gender-affirming care for transgender youths, by limiting disclosure of their personal data to out-of-state entities seeking to punish this care.

First, California’s state and local police could not provide, to any individual or out-of-state agency, information regarding provision of such care in California.

Second, California’s health care providers could not release medical information about a person allowing a child to receive such care, in response to an out-of-state civil or criminal action against allowing such care.

Third, California’s superior court clerks could not issue civil subpoenas based on out-of-state laws against a person allowing a child to receive such care.

Data sanctuary is an important way for states to welcome all people. To be the safest place, a state must protect the data privacy of residents and visitors who are unfairly targeted by out-of-state entities.

For example, state and local police should not disclose personal data about immigrants to ICE. For this reason, among others, California law bars government agencies from disclosing their ALPR location data to federal and out-of-state officials. Likewise, pro-choice states should not send personal data to anti-choice states about people who visit to receive reproductive health care. Another California bill, A.B. 2091, would provide data sanctuary to these visitors. EFF supports this bill, too.

Adam Schwartz

Victory! South Carolina Will Not Advance Bill That Banned Speaking About Abortions Online

3 months 1 week ago

Since the U.S. Supreme Court overruled a half century of precedent supporting the constitutional right to abortion access, numerous states have moved towards making abortion illegal and restricting additional reproductive health services.

In South Carolina, Republican state Senators Richard Cash, Rex F. Rice and Daniel B. Verdin III introduced Senate Bill 1373 seeking to criminalize abortions. The bill would have also made it a crime to “aid, abet, or conspire with someone to procure an abortion,” which includes “providing information to a pregnant woman…by telephone, internet, or any other mode of communication” and “hosting or maintaining an internet website, providing access to an internet website, or providing an internet service purposefully directed to a pregnant woman…that provides information on how to obtain an abortion.”   

EFF joined others in opposition to the bill, including Advocates for Youth, Center for Democracy & Technology, Chamber of Progress, EducateUS, LGBT Tech, National Women’s Law Center, PEN America, and SIECUS: Sex Ed for Social Change. Earlier this month, Governor Henry McMaster said the bill’s restrictions on speech are “not going to see the light of day.”

“Everyone has a constitutional right of the First Amendment to say things, to speak,” McMaster said, according to The State (Columbia, SC).

This is good news. But this bill is based on a model from the National Right to Life Committee, which is looking to introduce it in other states. It represents a multi-faceted attack on reproductive rights and freedom of expression that exacerbates the existing challenges pregnant people face when trying to find reliable information about safe reproductive health care. By making it a felony to discuss abortions online, the model bill’s drafters have sought to ensure that people seeking an abortion are faced with repression everywhere they look – it would become illegal to post information online about abortions, it would become illegal to exchange e-mails or online messages with a pregnant person in South Carolina seeking an abortion, and it would become illegal to undergo the termination itself. The model bill also would probably restrict content that relates to wider reproductive health care, such as miscarriages and other conditions.

Moreover, should the bill pass, whichever state enacts it would become a national censor of online speech. This would violate the First Amendment rights of individuals to communicate freely online, as well as the rights of online platforms to host that speech. Despite Section 230 protections shielding platforms from liability when their users post speech that violates criminal laws, and additional protections against the government telling platforms what they must publish, companies are likely to crack down on speech that could run afoul of the bill. This could lead to restrictions on access to telemedicine counseling services and censorship of direct messages between patients and caregivers. Companies would seek to avoid the strict criminal liability the bill would place on them for publishing content in the prohibited categories. The bill may also pressure platforms into over-censoring abortion-related content out of fear of litigation or prosecution for hosting the information on their sites.

If any state passes this bill into law, it will have a devastating impact on reproductive rights and freedom of expression across the whole country. By restricting the exchange of abortion-related information online, the bill could lead to higher rates of maternal mortality, unintended pregnancies, and sexually transmitted diseases. We applaud South Carolina for rejecting the legislation and urge other states not to take up any future bills like it.

Paige Collings

Federal Judge: Invasive Online Proctoring "Room Scans" Are Unconstitutional

3 months 1 week ago

Online proctoring companies employ a lengthy list of dangerous monitoring and tracking techniques in an attempt to determine whether students are cheating, many of which are biased and ineffective. This week, one of the more invasive techniques—the “room scan”—was correctly deemed unconstitutional by a federal judge. “Room scans” are a common requirement in proctored exams: students are forced to use their device’s camera to give a 360-degree view of everything in the area where they’re taking a test. Often, this is a personal residence, and frequently a private space, like a bedroom. (A demonstration of a room scan by Proctorio can be seen here.)

We have criticized room scans as well as many other dangerous aspects of online proctoring. In addition to room scans, remote proctoring tools often record keystrokes and use facial recognition to supposedly confirm whether the student signing up for a test is the one taking it; they frequently include gaze-monitoring or eye-tracking and face detection that claims to determine if the student is focusing on the screen; they gather personally identifiable information (PII), sometimes including scans of government-issued identity documents; and they frequently collect device logs, including IP addresses, records of URLs visited, and how long students remain on a particular site or webpage. These automated tools are hugely privacy-invasive and can easily penalize students who don’t have control over their surroundings, those with less functional hardware or low-speed internet, and students who, for any number of reasons, have difficulty maintaining “eye contact” with their device. For all of these reasons, schools should use means other than invasive remote proctoring to ensure exam integrity, and at the very least, proctoring companies should collect and retain only the minimum amount of data needed to do so. Often, the users of these tools are unable to opt out of data collection, and by collecting all of this information, proctoring tools endanger young people’s privacy.

In this case, a student enrolled in a public university (Cleveland State University) learned he would be required to undergo a room scan shortly before an exam. The court correctly decided that the room scan was an unreasonable search under the Fourth Amendment. As the court recognized, room scans provide the government with a window into our homes—a space that “lies at the core of the Fourth Amendment’s protections” and one the Supreme Court has long recognized as private. Traditionally, the Fourth Amendment requires a warrant before the government can search our homes, and that includes searches by government institutions like a state-run university. There are few exceptions to this requirement, and none of the justifications offered by the university—including its interests in deterring cheating and its assertion that the student may have been able to refuse the scan—sufficed to outweigh that requirement in this case.

Over the last few years, students have rarely had the option to opt out of using remote proctoring tools, and have been essentially coerced into allowing a third party, and their school, to collect and retain sensitive, private data about them if they want to pass a class. This opinion, though it is not binding on other courts, is an important one. Any student at a state school hoping to push back against room scans could now cite it as persuasive precedent. As of yet, however, there has been no judgment or injunction, which means exactly what Cleveland State will have to do is not yet determined.

The university failed to show room scans like this are "truly, and uniquely, effective at preserving test integrity." We hope schools will recognize, in part thanks to this decision, that this element of remote proctoring is both unnecessary and invasive, and, at least for state schools, unconstitutional. All schools should cease its use. 

Jason Kelley

How YouTube’s Partnership with London’s Police Force is Censoring UK Drill Music

3 months 1 week ago

Originating from the streets of Chicago, drill music is a creative output of inner-city Black youths. It is defined by real life experiences and perspectives, and whilst drill rappers often document gang-related conflict and anti-establishment narratives in their lyrics and music videos, the rap genre is a crucial mouthpiece of artistic and cultural expression. However, London’s police force—the Metropolitan Police, or the Met—have argued that the genre is partly responsible for the rise in knife crime across the UK’s capital, and have sought to remove drill music from online platforms based on the mistaken, and frankly racist, belief that it is not creative expression but a witness statement to criminal activity.

It is concerning, therefore, that in 2018 streaming platform YouTube started an “enhanced partnership” with the Met, which has since facilitated a pervasive system of content moderation for drill rappers in the UK. This partnership of state and corporate power has enabled the Met to advance their previous efforts to censor drill music, most notably since 2015, when the force launched Operation Domain to monitor “videos that incite violence” on YouTube. In June 2019, Operation Domain was replaced by Project Alpha, which involves police officers from gang units operating a database covering 34 different categories, including drill music videos, and monitoring sites for intelligence about criminal activity. According to Vice, 1,006 rap videos have been added to the database since 2020, and a heavily redacted Met document notes that Project Alpha aimed to carry out “systematic monitoring or profiling on a large scale,” with men aged 15 to 21 the primary focus.

YouTube’s partnership with London’s police includes giving Project Alpha officers “trusted flagger” status to “achieve a more effective and efficient process for the removal of online content”—which the Met has called “a global first for law enforcement.” When sites cooperate with government agencies in these systems of content moderation, it leaves the platform inherently biased in favor of the government’s positions and gives law enforcement outsized influence to control public dialogue, suppress dissent, and blunt social movements. It also pressures platforms to moderate speech they may not otherwise have chosen to moderate. 

Since November 2016, the Met has made 579 referrals for the removal of “potentially harmful content” from social media platforms, and 522 of these were removed, predominantly from YouTube. More specifically, in 2021 the Met referred 510 music videos to YouTube for removal and the platform removed 96.7% of them; a report from the New York Times notes that YouTube removed 319 videos in 2020 following requests from the police force. At the same time, popular YouTube channels have advised artists to censor content that could be deemed offensive to avoid potential removal once a video goes live.

The Met has rejected accusations that Project Alpha suppresses freedom of expression. But the collaboration with YouTube has facilitated a punitive system of censorship that contravenes data protection, privacy, and free expression rights. And it’s not the first of its kind. In 2012, Newham Council established Operation New Hampshire to “examine” more than 500 music videos, successfully securing the removal of 76 from YouTube because of their “explicit use of threats.” Newham Council created a special unit to monitor the videos it believed could “recruit new gang members,” and when asked what the benefits of such a monitoring system were, it replied that “the successful outcome of removal of the videos is self-evident.”

Law enforcement has a history of linking music to violence, and its “street illiteracy” reinforces the idea that drill music depicts real-life actions that artists have seen or done, rather than artistic expression communicated through culturally specific language that police are seldom equipped to decode. Similar trends are evident in countries like the United States, where New York City mayor Eric Adams recently blamed drill for violent crime in the city and called for the removal of drill videos from social media. The flags police raise to social platforms are thus entirely one-sided, with no experts speaking for the other side. Social media platforms like YouTube should resist partnerships with law enforcement like this one and ensure that all individuals can share content online without their voices being censored by government authorities.

Paige Collings

Indonesia’s New Draft Criminal Code Restrains Political Dissent

3 months 1 week ago

Even in the face of strong public protest over a set of proposed revisions to criminal laws that infringe Indonesians’ free expression rights, the Indonesian Ministry of Law and Human Rights last month sent to the Parliament a new draft of the Criminal Code (CC) that threatens to further chill political dissent and civic participation. In particular, it contains provisions that criminalize defamation and insult of public officials, including the President and members of the government.

Indonesians deserve a reformed CC that protects their fundamental right to express their opinions, including by criticizing and disagreeing with elected officials and the government. The new draft instead robs people of these rights. EFF joins its global partners in calling on the Indonesian Parliament to hold inclusive and meaningful public consultations and revise the new draft CC in line with Indonesia’s international human rights obligations.

Lack of Meaningful Public Discussions

The CC, a law with a Dutch colonial legacy, has been under reform since 1958. One of the latest drafts was introduced in 2019, when the government announced that a new code would be adopted soon—without ever making it public. That sparked protests, forcing the government to release the draft code, which in turn prompted massive demonstrations across Indonesia over the code’s infringement on free expression. The public was concerned with a number of provisions, ranging from the criminalization of adultery and blasphemy to the impact on minorities and civil society. The government did not move forward with that draft.

The Indonesian government now has a track record of failing to hold public consultations around amendments to the CC. In June, it announced a new draft CC and again didn’t release it publicly. After pressure from civil society, the government made the 632-article draft public on July 6. It has not organized any inclusive public discussions, instead claiming to have met the requirement of public participation and awareness raising through so-called socialization sessions in only 12 locations in Indonesia.

The Government has been pushing for swift adoption of the full draft of the controversial new CC, even though it was only made public at the beginning of July. As the draft is already in the Parliament, the only remaining ostensibly public forum about it is a question-and-answer session between lawmakers and the Government, in which the public is not allowed to participate. This means that Indonesians and local civil society organizations haven’t had any meaningful avenue to raise their concerns, provide input, and participate in shaping one of the most important and consequential laws in Indonesia.

In early August, Indonesian President Joko Widodo asked the government to seek public opinion about the draft CC before it is adopted, in order to raise awareness. This is an important step in the right direction.

No Criminalization of Defamation 

Among the most problematic provisions in the new draft CC are those establishing criminal punishment, including imprisonment, for defamation and insults against the President and Vice President, the government, and public authorities and State institutions. Defamation laws are usually aimed at protecting individuals from reputational harm. Civil defamation laws allow injured parties to sue and ask for an apology or seek monetary compensation. Criminal defamation laws, on the other hand, are used as a hammer to silence people and disproportionately restrict freedom of expression.

While Indonesia is trying to turn a page on its colonial past, these provisions were previously used to stop people from expressing dissent and disappointment toward the authorities. Moreover, the provision on defamation and insult against the President and Vice President was historically used to protect the dignity of the Queen (the offense known as lese majeste). The Indonesian Constitutional Court has declared this article unconstitutional, calling it a “colonial legacy” that violated freedom of expression, access to information, and the principle of legal certainty. Genoveva Alicia Karisa Shiela Maya, a researcher at the Institute for Criminal Justice Reform, told EFF:

 “It seems that the Government has read this judgment differently, as they continuously try to defend the existence of this article in the draft. In the recent development of the Bill of Penal Code draft, the Government has provided a longer elucidation for this article (now, Article 218) which gives guidelines to differentiate between ‘defamation’ and ‘critics.’”

The Office of the UN High Commissioner for Human Rights (OHCHR) criticized similar lese majeste laws in Thailand, highlighting the chilling effect on free expression and political dissent in the country. Yet, criminal defamation and lese majeste laws are being used against Indonesian journalists who cover issues of public interest involving government officials or members of the Indonesian royal family.

For example, in March 2020, Mohamad Sadli, the chief editor of liputanpersada.com, was sentenced to two years of imprisonment for an opinion piece criticizing the local government’s road construction project. Amnesty International reported that at least seven students of Universitas Sebelas Maret in Surakarta, Central Java, were arrested last year after they held posters during Widodo’s campus visit appealing to the President to support local farmers, address corruption, and prioritize public health during the pandemic. These and many other examples illustrate that codifying criminal punishment for defamation and insult against public officials will further chill free expression and political dissent in Indonesia.

To be sure, international human rights laws recognize the right to be free of attacks on one’s reputation. For example, Article 12 of the UN’s 1948 Universal Declaration of Human Rights provides that “no one shall be subject to [...] attacks upon his honor and reputation.” Article 17 of the 1966 International Covenant on Civil and Political Rights (ICCPR) protects against “unlawful attacks on his honor and reputation,” and Article 19 of the ICCPR lists “respect of the rights or reputations of others” as a lawful ground for restricting freedom of expression.

However, while freedom of expression is not absolute, international human rights standards establish that freedom of expression and opinion are essential for any society, and that only necessary and narrowly drawn restrictions on them should be imposed.

The UN Human Rights Committee’s General Comment 34 calls for decriminalizing defamation, noting that “the application of the criminal law should only be countenanced in the most serious of cases and imprisonment is never an appropriate penalty.” It further states that defamation laws, especially criminal defamation laws, should consider truth as a defense, and “a public interest in the subject matter of the criticism should be recognized as a defense.”

No Criminal Punishment for Criticizing Public Officials 

International human rights standards require showing particular restraint in restricting criticism of public figures and heads of state. The UN Human Rights Committee’s General Comment 34 notes that states “should not prohibit criticism of institutions, such as the army or the administration.” It also states that untrue statements about public officials that are published in error, without malice, should not be penalized or otherwise rendered unlawful.

The 2021 Joint Declaration on Politicians and Public Officials and Freedom of Expression underlined that political speech should enjoy a high level of protection, even speech that public officials may find offensive or unduly critical. Finally, the UN Special Rapporteur on Freedom of Expression’s 2022 report underlined that public officials “should expect a higher degree of public scrutiny and be open to criticism.”

During the last UN Human Rights Council Universal Periodic Review (UPR) cycle for Indonesia, in 2017, a number of recommendations focused on revisiting or repealing the problematic provisions in the draft CC. However, the Indonesian government only doubled down on punishing online defamation in the Information and Electronic Transactions Law (IET), which provides for a criminal penalty of up to six years of imprisonment. This provision does not contain a public interest exception and disproportionately limits the right to expression and opinion.

As Damar Juniarto, SAFEnet Executive Director, told EFF:

 “Indonesia retains most of the defamation articles in the Criminal Code and ITE Law, even though Indonesia has ratified the ICCPR. Even more, the new draft Criminal Code contained several articles relating to blasphemy and inserting provisions that criminalize defamation and insult of public officials, including the President and the government. This situation puts freedom of expression in Indonesia under attack and in danger.”

According to SAFEnet’s 2021 Digital Rights Situation Report, there were more than 30 criminal cases involving 38 victims brought under problematic IET articles, and nearly 60% of all digital attacks in Indonesia targeted human rights defenders, activists, academics, and journalists. Two of the criminal cases involved two researchers from Indonesia Corruption Watch, who uncovered links between the head of Presidential staff and the senior management of a company responsible for producing and marketing a purported COVID-19 therapeutic drug in Indonesia. Another defamation case involved two human rights defenders, Haris Azhar, director of Lokataru, and Fatia Maulidiyanti, director of KontraS, who exposed a high-ranking senior cabinet minister’s involvement with a problematic gold mining business in a conflict area in Papua. Moreover, heads of Greenpeace Indonesia were reported to the police for having criticized Indonesia’s President about deforestation in a press release.

Conclusion 

The human rights situation in Indonesia has continued to backslide over the last decade. The new draft CC opens new avenues for further encroachment on freedom of expression, freedom of assembly, and access to information. Indonesians deserve better, and Indonesian authorities should recall the new draft CC from the Parliament, organize inclusive and meaningful public discussions, and draft a new CC that complies with international human rights standards.

Meri Baghdasaryan

Victory: Government Finally Releases Secretive Court Rulings Sought By EFF

3 months 1 week ago

More than seven years after Congress mandated their release and EFF sued to pry them loose, the government released seven heavily redacted but previously classified rulings from the Foreign Intelligence Surveillance Court that shed new light on how the secret court interprets key provisions of the laws that authorize mass surveillance.

The Office of the Director of National Intelligence (ODNI) released the redacted versions as required by the USA FREEDOM Act of 2015. More details about the rulings are below. But before diving in, it’s important to understand the significance of the disclosure given the government’s long-standing refusal to make these rulings public. You can also read previously released opinions here.

In addition to reforming the country’s mass surveillance programs, USA FREEDOM required the government to release all significant opinions and orders of the FISC and the secret appeals court, the Foreign Intelligence Surveillance Court of Review (FISC-R). EFF fought to include this provision in the law with the hope that the disclosures would finally allow the public, including civil rights and civil liberties groups, as well as legal scholars, to access court rulings that determine people’s rights to be free from surveillance.

Disclosure was essential because after 9/11, the FISC’s role expanded. Congress originally created the FISC to operate as a warrant court, approving government requests for surveillance on individualized foreign targets. But after 2001, on top of continuing to approve individual surveillance requests, the court became a “meta-arbiter” that approved the government’s mass surveillance programs or the procedures used to operate them. Some of the FISC’s rulings authorized programs that illegally scooped up the data of millions of Americans and people outside the United States.

When Congress passed USA FREEDOM, we expected that the government would move quickly to release these opinions. Instead, the government resisted transparency. First, the government argued that USA FREEDOM’s transparency mandate did not apply to FISC rulings issued prior to the law’s passage in 2015.

The government’s position defied logic and the clear text of USA FREEDOM. So in 2016, EFF sued under the Freedom of Information Act to force the government to disclose all significant FISC opinions. The government ultimately agreed to search for FISC opinions issued between 2003 and 2015, but continued to take the position that USA FREEDOM did not mandate their release.

EFF’s litigation resulted in the disclosure of more than 70 previously secret FISC rulings, including details of a provider’s Kafka-esque fight against a mass surveillance order, rulings showing that FISC judges have difficulty understanding the government’s surveillance activities and whether they are legal, and that the government misuses individualized FISA surveillance orders.

The government, however, successfully convinced a court that it could keep six FISC opinions entirely secret on the grounds that their disclosure would harm national security. After ODNI released the opinions last week, the government confirmed that six of the seven opinions made public were the ones kept secret in EFF’s lawsuit. The government also said that the seventh opinion released last week should have been disclosed to EFF in response to our suit, meaning it remained secret for years for no reason.

Last week’s disclosure is a major victory for transparency and a vindication of USA FREEDOM. The government’s concession came after years of pressure, activism, and litigation from EFF and other groups that sought to hold the Executive Branch to the law. It should not have taken seven years for the government to disclose these rulings.

Yet there is still more work to be done on FISC transparency. Although the government states that the seven opinions represent the last of the court’s historic, significant opinions, we disagree. Because the released opinions were issued between 2003 and 2015, decades of other FISC rulings, issued in the years since Congress created the court in 1978, remain undisclosed. So as Congress debates whether to renew the government’s mass spying powers, it should continue to push for greater FISC transparency going back to the court’s origins.

Digging into the newly released FISC rulings

The seven newly released opinions and orders are heavily redacted, but they reveal new details about the FISC’s resolution of several different legal and technical questions, which often resulted in the court approving new ways for the government to access people’s private data.

Court expands FBI access to personal data obtained under FISA

For example, an October 8, 2008 order expanded the number of FBI officials who could access U.S. persons’ data obtained via the government’s surveillance programs—or what EFF calls a backdoor search. The court ultimately approved giving FBI officials within the National Counterterrorism Center access to FISA materials, but only after going through some mental gymnastics to find that doing so would not run afoul of rules designed to limit disclosure of personal information swept up by the surveillance.

The court found that even though the FBI retained the information because it suspected domestic crime, rather than for a foreign intelligence purpose, the additional access didn’t violate FISA’s minimization procedures. This was kosher, according to the court, because the FBI counterterrorism officials had a foreign intelligence purpose, rather than a law enforcement purpose. Of course, the only reason the FBI retained people’s data in the first place was to pursue domestic law enforcement investigations, calling the court’s reasoning into question. But as a practical matter, the court approved broadening the number of people within the FBI who could access people’s data via the FBI’s backdoor searches.

Another set of opinions shows the true breadth of a since-sunsetted provision of FISA, known as Section 215, that officials used to conduct the telephone records mass surveillance program.

Opinions show breadth of 215’s misuse

An August 20, 2008 opinion approved the government’s use of other unique phone identifiers, rather than traditional phone numbers, for purposes of its bulk collection of phone call records. The telephone records surveillance program was illegal in multiple respects: it stretched FISA’s terms beyond meaning and also violated the First and Fourth Amendments by allowing the government to map people’s associations and invade their privacy. Yet the court in 2008 had little trouble permitting the government to use unique International Mobile Subscriber Identity (IMSI) numbers for its surveillance.

And in another order (the government redacted the date), the FISC used Section 215 to authorize the government’s ongoing collection of information from a third party on a daily basis. It’s not clear what type of data or third party was subject to the demand, but the order notes in passing that the government had sought similar information from the third party via a National Security Letter (NSL), an administrative subpoena issued by the FBI that is unconstitutional because it can gag recipients from saying anything about the legal demand. Almost comically, the FISC order describes NSLs as “non compulsory,” implying that recipients are free to disregard those FBI demands. That would be news to the many NSL recipients, including EFF clients, who have long fought against NSL gag orders.

Both 215 orders show why the provision was ripe for abuse by the government. It’s one reason why EFF continues to call for further surveillance reforms, even after Congress failed to renew Section 215 in 2020.

Redactions make understanding other opinions difficult

Redactions make understanding the importance of some of the newly released opinions difficult, though two opinions (here and here) appear to be related to another opinion the government released to EFF in 2018 as part of our FOIA litigation. We cannot connect the opinions directly to that earlier release, but as we wrote at the time, those opinions showed how even FISC judges had difficulty getting straight answers from the government about the breadth of its spying.

A March 5, 2010 order involves a nitty-gritty analysis of whether the surveillance sought by the government concerned wire or radio communications, as FISA defines “wire communication” but does not define “radio communication.” The court ordered the government to submit legal briefing as well as detailed technical descriptions about its surveillance to help it resolve the definitional issue.

Finally, the seventh released order (the government redacted the date) appears to be a classic FISA warrant that also included a technical assistance order.

Related Cases: Jewel v. NSA
Matthew Guariglia

New Proposal Brings Us a Step Closer to Net Neutrality

3 months 1 week ago

Right now, Americans live in a country where the companies that control our access to the internet face little-to-no oversight. In most states, these companies can throttle your service—or that of, say, a fire department fighting the largest wildfire in state history. They can block a service they don’t like. In addition to charging you for access to the internet, they can charge services for access to you, driving up costs artificially. Making matters worse, most of us have little choice of home broadband providers, and many have only one—a perfect monopoly. We are badly in need of a federal response in the form of net neutrality protections. Thankfully, a new bill brings us that much closer.   

The Net Neutrality and Broadband Justice Act and its companion in the House of Representatives aim to put a stop to these abuses by reclassifying broadband internet services as telecommunication services under Title II of the Communications Act, thereby giving the Federal Communications Commission (FCC) the authority they need to reinstate net neutrality and lay down equitable rules of the road once more.

Take Action

Tell the Senate to Fully Staff the FCC

While the bill is narrow, what it does is important: it prevents a future FCC from reclassifying broadband internet services out of Title II. The bill stops the back-and-forth we’ve experienced, with one FCC instating net neutrality rules, only for another to strip away those protections. By deciding with a congressional mandate that broadband internet services fall firmly under Title II, the American public will have clarity on what the FCC can do, what users’ protections are, and who they can go to when ISPs cause harm.

This bill also sends a clear signal to the FCC and the rest of Congress that it is time to reinstate net neutrality protections. By designating broadband internet services as common carriers under the Telecommunications Act, Congress will finally recognize a necessity of modern life as such under law. We live in a world where internet access is no longer a luxury “information service,” but rather a medium of communications as important as the telephone. And just as your telephone provider should not be able to decide which calls you can make and take, ISPs should not have control over the internet sites and apps you use. Specifically, the FCC could reinstate net neutrality rules that forbid selectively blocking or throttling internet traffic, and stop ISPs from charging websites to deliver their data to users faster (paid prioritization). Net neutrality is vital because, although nearly 80% of Americans view broadband access as being as important as water and electricity, many Americans do not have a choice of high-speed internet provider. Americans pay more for worse internet than people in almost any other similarly situated nation.

A return to net neutrality will help build competition to provide the fast, reliable, accessible service every American deserves to truly participate in our increasingly digital economy and society. And competition will incentivize today’s monopolistic ISPs to stop using the profits we pay them to buy content, and instead invest to provide better services beyond what would be decreed via net neutrality and Title II.

While this bill is a good step, the FCC still is not fully staffed, meaning that even with the authority granted by this new law, new rules would still be stalled. Giving the FCC the power to set smart rules for ISPs won’t matter if it is unable to exercise that power. Without Gigi Sohn’s confirmation as the fifth FCC commissioner, the FCC will not have the votes to pass net neutrality rules. In fact, without a fifth commissioner, the commission can’t affirmatively enforce any of the consumer protections under Title II.

We urge Congress to pass the Net Neutrality and Broadband Justice Act, improving the quality of America’s broadband services, and to fully staff the FCC by confirming Gigi Sohn.

Take Action

Tell the Senate to Fully Staff the FCC

Chao Liu

Google’s Scans of Private Photos Led to False Accusations of Child Abuse

3 months 1 week ago

The private messages, files, and photos of everyday internet users are increasingly being examined by tech companies, which check the data against government databases. While this is not a new practice, the public is being told this massive scanning should extend to nearly every corner of their online activity so that police can more productively investigate crimes related to child sexual abuse images, sometimes called CSAM.

We don’t know much about how the public gets watched in this way. That’s because neither the tech companies that do the scanning, nor the government agencies they work with, share details of how it works. But we do know that the scanning is far from perfect, despite claims to the contrary. It makes mistakes, and those mistakes can result in false accusations of child abuse. We don’t know how often such false accusations happen, or how many people get hurt by them. 
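To make the mechanics a little more concrete, here is a minimal, generic sketch in Python of the hash-matching approach these companies describe, using the open source imagehash library. It is an illustration only, not Google’s actual system (which is proprietary, and which also relies on machine-learning classifiers for new images); the hash value and threshold below are invented for the example.

# A generic sketch of hash-based image matching, using the open source
# "imagehash" library. The hash value and threshold below are invented
# for illustration; this is not any company's actual scanning system.
import imagehash
from PIL import Image

# Hypothetical database of perceptual hashes of known abuse images,
# distributed to companies as hex strings rather than as the images themselves.
KNOWN_HASHES = [imagehash.hex_to_hash("d1c1b2a390784e5f")]

# Maximum Hamming distance still treated as a "match." The comparison is
# fuzzy on purpose, so resized or re-encoded copies still match -- which is
# also why unrelated images can occasionally land inside the threshold.
MATCH_THRESHOLD = 8

def is_flagged(photo_path: str) -> bool:
    photo_hash = imagehash.phash(Image.open(photo_path))
    return any(photo_hash - known <= MATCH_THRESHOLD for known in KNOWN_HASHES)

The deliberate fuzziness that lets a recompressed copy still match is the same property that lets an unrelated photo occasionally be flagged in error.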

The spread of CSAM causes real harms, and tech companies absolutely should work on new ways of fighting it. We have suggested some good ways of doing so, like building better reporting tools, privacy-respecting warning messages, and metadata analysis.   

An article published yesterday in the New York Times reports on how Google made two of these false accusations, and the police follow-up. It also highlights Google’s refusal to correct any of the damage done by its erroneous scans, and the company’s failed human review processes.  This type of scanning is increasingly ubiquitous on tech products we all use, and governments around the world want to extend its reach even further, to check even our most private, encrypted conversations. The article is especially disturbing, not just for the harm it describes to the two users Google falsely accused, but also as a warning of potentially many more such mistakes to come. 

Google’s AI System Failed, And Its Employees Failed Too 

In February of last year, Google’s algorithms wrongly flagged photos taken by two fathers in two different states as being images of child abuse. In both cases, the fathers—one in San Francisco, one in Houston—had small children with infections on their genitals, and had taken photos of the area at the request of medical professionals. 

Google’s algorithms, and the employees who oversee them, had a different opinion about the photos. Without informing either parent, Google reported them to the government. That resulted in local police departments investigating the parents. 

The company also chose to perform its own investigation. In the case of Mark, the San Francisco father, Google employees looked at not just the photo that had been flagged by their mistaken AI, but his entire collection of family and friend photos. 

Both the Houston Police Department and the San Francisco Police Department quickly cleared the fathers of any wrongdoing. But Google refused to hear Mark’s appeal or reinstate his account, even after he brought the company documentation showing that the SFPD had determined there was “no crime committed.” Remarkably, even after the New York Times contacted Google and the error was clear, the company continues to refuse to restore any of Mark’s Google accounts, or help him get any data back. 

Google’s False Accusations Cause Real Harm

Google has a right to decide which users it wants to host. But it was Google’s incorrect algorithms, and Google’s failed human review process, that caused innocent people to be investigated by the police in these cases. It was also Google’s choice to destroy these fathers’ email accounts, videos, photos, and, in one case, telephone service, without warning and without due process. The consequences of the company’s error are not trivial.

We don’t know how many other people Google has wrongly accused of child abuse, but it’s likely many more than these two. Given the massive scope of the content it scans, it could be hundreds, or thousands. 

Mark and Cassio, the two fathers wrongly flagged by Google, were accused within one day of each other in February 2021. That could be coincidental timing, or it could suggest that one or more flaws in Google’s system—either flaws in the AI software, or flaws in the human review process—were particularly manifest at that time. 

Google’s faulty CSAM scans caused real harm in these cases, and it’s not hard to imagine how they could be more harmful in other cases. Once both Google employees and police officers have combed through an accused parent’s files, there could be consequences that have nothing to do with CSAM. Police could find evidence of drug use or other wrongdoing, and choose to punish parents for those unrelated crimes, without having suspected them in the first place. Google could choose to administer its own penalties, as it did to Mark and Cassio. 

Despite what had happened to them, both Mark and Cassio, the Houston father, felt empowered to speak out to a reporter. But systems like this could report on vulnerable minorities, including LGBT parents in locations where police and community members are not friendly to them. Google’s system could wrongly report parents to authorities in autocratic countries, or locations with corrupt police, where wrongly accused parents could not be assured of proper due process.

Governments Want More Unaccountable CSAM Scans

Google isn’t the only company doing scans like this, but evidence is mounting that the scans are simply not accurate. A Facebook study of 150 accounts that were reported to authorities for alleged CSAM found that 75% of the accounts sent images that were “non-malicious,” shared for reasons “such as outrage or poor humor.” LinkedIn found 75 accounts that were reported to EU authorities in the second half of 2021 due to files that it matched with known CSAM; upon manual review, only 31 of those cases involved confirmed CSAM. (LinkedIn uses PhotoDNA, the software product specifically recommended by the U.S. sponsors of the EARN IT Act.)

In the past few years, we’ve seen governments push for more scanning. Last year, Apple proposed a form of on-device scanning on all of its devices that would search user photos and report matches to authorities. That program was scuttled after a public outcry. This year in the U.S., the Senate Judiciary Committee considered and passed the EARN IT Act, which would have opened the door for states to compel companies to use CSAM scanners. (The EARN IT Act hasn’t been considered in a floor debate by either house of Congress.) The European Union is considering a new CSAM detection law as well. The EU proposal would not only search for known and new abuse images, it would use AI to scan text messages for “grooming,” in an attempt to judge abuse that might happen in the future. 

Earlier this month, EU Commissioner Ylva Johansson wrote a blog post asserting that the scanners the EU proposes to use have accuracy rates “significantly above 90%.” She asserts that “grooming” detection will be 88% accurate, “before human review.”

These accuracy rates are nothing to brag about. If billions of private messages in the EU are scanned with an accuracy rate of just “above 90%,” the remaining errors will result in millions of falsely flagged messages. This avalanche of false positives will be a humanitarian disaster even in wealthy democracies with the rule of law, to say nothing of the autocracies and backsliding democracies that will demand similar systems. Defenders of these systems point to the very real harms of CSAM, and some argue that false positives, the kind that result in erroneous reports like those in the article, are acceptable collateral damage.
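Some rough, back-of-the-envelope arithmetic shows just how lopsided the numbers get. The figures below are illustrative assumptions, not numbers from the Commission or from EFF, and they are deliberately generous to the scanner, assuming it is far more accurate than the claimed “above 90%.”

# Illustrative base-rate arithmetic. All inputs are assumptions chosen for
# this example, not official figures.
messages_scanned = 1_000_000_000  # assume one billion messages scanned
prevalence = 1 / 100_000          # assume 1 in 100,000 messages is actually abusive
false_positive_rate = 0.01        # assume 99% specificity, far better than ">90% accuracy"
detection_rate = 0.88             # the claimed 88% "grooming" detection rate, pre-review

abusive = messages_scanned * prevalence
innocent = messages_scanned - abusive

false_positives = innocent * false_positive_rate  # innocent messages flagged anyway
true_positives = abusive * detection_rate         # abusive messages correctly flagged

print(f"Falsely flagged messages: {false_positives:,.0f}")   # roughly 10 million
print(f"Correctly flagged messages: {true_positives:,.0f}")  # roughly 8,800
print(f"Share of flags that are false: "
      f"{false_positives / (false_positives + true_positives):.1%}")  # about 99.9%

Under these assumed numbers, more than 99.9% of flags would be wrong, and every wrong flag is a person whose private messages are pulled in front of a reviewer, and potentially the police.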

What we’re being asked to accept here is nothing less than “bugs in our pockets.” Governments want companies like Google and Apple to constantly scan every digital space we have, including private spaces. But we’re seeing the results when companies like Google second-guess their own users’ family lives—and even second-guess the police. 

The Solution is Real Privacy

At EFF, we’ve been fighting against spying on people’s digital lives for more than 30 years. When police want to look at our private messages or files, they should follow the Fourth Amendment and get a warrant. Period.

As for private companies, they should be working to limit their need and ability to trawl our private content. When we have private conversations with friends, family, or medical professionals, those conversations should be protected using end-to-end encryption. In end-to-end encrypted systems, the service provider doesn’t have the option of looking at the message, even if it wanted to. Companies should also commit to encrypted backups, something EFF has requested for some time now.
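To illustrate the point, here is a minimal sketch of end-to-end encryption in Python using the open source PyNaCl library. It is a toy example of the concept, not a description of any particular company’s product: the private keys live only on the users’ devices, so a service in the middle can relay the ciphertext but cannot read or scan it.

# A toy end-to-end encryption example with PyNaCl (illustrative only).
from nacl.public import PrivateKey, Box

# Each person generates a keypair on their own device; only the public
# halves are ever shared with the other party (or with the server).
parent_key = PrivateKey.generate()
doctor_key = PrivateKey.generate()

# The parent encrypts a message for the doctor using the parent's private
# key and the doctor's public key.
sending_box = Box(parent_key, doctor_key.public_key)
ciphertext = sending_box.encrypt(b"photo attached, as you requested")

# The service provider only ever handles `ciphertext`. Without one of the
# private keys, it cannot decrypt, inspect, or scan the contents.

# The doctor decrypts on their own device.
receiving_box = Box(doctor_key, parent_key.public_key)
assert receiving_box.decrypt(ciphertext) == b"photo attached, as you requested"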

The answer to a better internet isn’t racing to come up with the best scanning software. There’s no way to protect human rights while having AI scan people’s messages to locate wrongdoers. The real answer is staring us right in the face: law enforcement and elected leaders who work to coexist with strong encryption and privacy, not to break them down.

Joe Mullin

Code, Speech, and the Tornado Cash Mixer

3 months 1 week ago

The U.S. Office of Foreign Assets Control (OFAC)'s placement of “Tornado Cash” as an entity on the Specially Designated Nationals (SDN) sanction list raises important questions that are being discussed around the world. OFAC explained its sanction by saying “Tornado Cash (Tornado) is a virtual currency mixer that operates on the Ethereum blockchain and indiscriminately facilitates anonymous transactions by obfuscating their origin, destination, and counterparties, with no attempt to determine their origin,” and, therefore, is a “threat to U.S. national security.” 

The issues EFF is most concerned about arise from speech protections for software code and how they relate to government attempts to stop illegal activity using this code. This post outlines why we are concerned about the publication of this code in light of what OFAC has done, and what we are planning to do about it.   

Background

On August 8, acting under Executive Order 13694, OFAC added something it called “TORNADO CASH (a.k.a. TORNADO CASH CLASSIC; a.k.a. TORNADO CASH NOVA)” to the SDN list, along with a long list of digital currency wallet addresses. Once an entity is on the sanctions list, U.S. persons and businesses must stop “dealing” with them, including through transfers of money or property.

According to the Treasury Department, the Tornado Cash mixer has been used to launder Ethereum coins, including coins worth millions of U.S. dollars from the Lazarus Group, a Democratic People’s Republic of Korea (DPRK) state-sponsored hacking group, as well as the proceeds of several ransomware outfits. We have no reason to doubt this claim, and it is legitimately serious. Like many other kinds of computer programs (as well as many other tools), the Tornado Cash smart contract on the Ethereum blockchain can be, and indeed is, used for legal activities, but it is also used for illegal ones. According to Chainalysis’ study of mixers generally, known “illicit [wallet] addresses accounted for 23 percent of funds sent to mixers this year, up from 12 percent in 2021.”

Confusingly, however, the name “Tornado Cash” could refer to several different things, creating ambiguity about what exactly is sanctioned. Tornado Cash “Classic” and “Nova” refer to variants of the software that exist both as source code on GitHub and as deployed contracts running on the blockchain. Tornado Nova is a beta version, with functionality apparently limited to 1 ETH per transaction.

Meanwhile, the OFAC press release quoted above refers to “Tornado Cash” as both an anonymity-enhancing technology and a sanctioned entity. “Tornado Cash” is also the name of the underlying open source project that developed and published the code on GitHub; the name of the autonomous mixer software that runs as a smart contract (application) on the Ethereum network; the URL of the tornado.cash website (listed by name on the SDN); and, arguably, the name of an entity consisting of some set of people involved with the mixer. OFAC did not name or list any individual people involved with the mixer as sanctioned. While the OFAC listing is ambiguous, Coin Center has drilled down on what it believes is and is not a sanctionable entity in the Tornado Cash situation, distinguishing between an entity and the software itself.

EFF has reached out to OFAC to seek more clarity on their interpretation of the sanctions listing, especially the scope of what OFAC means by “Tornado Cash,” and we hope to hear back soon.

EFF Representation of a Computer Science Professor’s Right to Publish Code

EFF’s central concern about OFAC’s actions arose because, after the SDN listing of “Tornado Cash,” GitHub took down the canonical repository of the Tornado Cash source code, along with the accounts of the primary developers, including all their code contributions. While GitHub has its own right to decide what goes on its platform, the disappearance of this source code from GitHub after the government action raised the specter of the government chilling the publication of this code.

In keeping with our longstanding defense of the right to publish code, we are representing Professor Matthew Green, who teaches computer science at the Johns Hopkins Information Security Institute, including applied cryptography and anonymous cryptocurrencies. Part of his work involves studying and improving privacy-enhancing technologies, and teaching his students about mixers like Tornado Cash. The disappearance of Tornado Cash’s repository from GitHub created a gap in the available information on mixer technology, so Professor Green made a fork of the code, and posted the replica so it would be available for study. The First Amendment protects both GitHub’s right to host that code, and Professor Green’s right to publish (here republish) it on GitHub so he and others can use it for teaching, for further study, and for development of the technology.

Code is Speech is a Core Principle

For decades, U.S. courts have recognized that code is speech. This has been a core part of EFF’s advocacy for the computer science and technical community, since we established the precedent over 25 years ago in Bernstein v. U.S. Dep’t of State.  As the Tornado Cash situation develops, we want to be certain that those critical constitutional safeguards aren’t skirted or diluted. Below, we explain what those protections mean for regulation of software code.

Judge Patel, in the Bernstein case, explained why the First Amendment protects code, recognizing that there was:

 “no meaningful difference between computer language, particularly high-level languages …, and German or French … Like music and mathematical equations, computer language is just that, language, and it communicates information either to a computer or to those who can read it. ... source code is speech.”  

The Sixth Circuit agreed, observing in Junger v. Daley that code, like a written musical score, “is an expressive means for the exchange of information and ideas.” Indeed, computer code has been published in physical books and included in a famous haiku. More directly, Jonathan Mann recently expressed code as music, by singing portions of the Tornado Cash codebase.

Thus, the creation and sharing of a computer program is protected by the First Amendment, just as is the creation and performance of a musical work, a film, or a scientific experiment. Moreover, as Junger and Bernstein acknowledged, code retains its constitutional protection even if it is executable, and thus both expressive and functional. 

Establishing that code is speech protected by the Bill of Rights is not the end of the story. The First Amendment does not stop the government from regulating code in all cases. Instead, the government must show that any regulation or law that singles out speech or expressive activity passes constitutional muster. 

The first and key question is whether the regulation is based on the software’s communicative content.

In Reed v. Town of Gilbert, the Supreme Court said that “defining regulated speech by particular subject matter” is an “obvious” content-based regulation. More “subtle” content-based distinctions involve “defining regulated speech by its function or purpose” (emphasis added).

A regulation that prohibits writing or publishing code with a particular function or purpose, like encrypting communications or anonymizing individuals online, is necessarily content-based. At a minimum, it’s forbidding the sharing of information based on its topic.

The Legal Standards for First Amendment Scrutiny

Content-based laws face strict scrutiny, under which, as Reed explains, they “are presumptively unconstitutional and may be justified only if the government proves that they are narrowly tailored to serve compelling state interests.”

Thus, government regulation based on the content of code must be “narrowly tailored,” which means the law must use the least restrictive means available to achieve its purpose. The government cannot restrict more speech than is necessary to advance its compelling interest. Under Junger, the functional consequences of code are not a bar to protection; they instead go to whether a regulation burdening the speech is appropriately tailored.

The government frequently argues that regulations like this aren’t focused on content, but function. That’s incorrect, but even if the government were right, the regulation still doesn’t pass muster unless the government can show the regulation doesn’t burden substantially more speech than is necessary to further the government's legitimate interests. And the government must “demonstrate that the recited harms are real, not merely conjectural, and that the regulation will in fact alleviate these harms in a direct and material way.” (Turner Broad. Sys. v. F.C.C.).

Under either analysis, GitHub has a First Amendment right to continue to host independent copies of the Tornado Cash source code repository. Professor Green’s fork and publication through GitHub is protected, and neither the hosting nor the publication of these independent repositories violates the OFAC sanctions. 

The government may have legitimate concerns about the scourge of ransomware and harms presented by the undemocratic regime in the Democratic People’s Republic of Korea, but the harm from fund transfers does not come from the creation, publication, and study of the Tornado Cash source code for privacy-protective technologies. 

Nor will prevention of that publication alleviate the harms from any unlawful transfers over Tornado Cash. Indeed, given how the Ethereum network functions, whether or not Prof. Green publishes a copy of the code, the compiled operational code will continue to exist on the Ethereum network. It is not necessary to further the government's interest in sanction enforcement to prohibit the publication of this source code.

Moreover, improvements and other contributions to this fork, or any other, are also protected speech, and their publication cannot be constitutionally prohibited by the government under either standard of scrutiny. 

Based on thirty years of experience, we know that it takes a village to create and improve open source software. To ensure that developers can continue to create the software that we all rely upon, the denizens of that village must not be held responsible for any later unlawful use of the software merely because they contributed code. Research and development of software technology must be able to continue. Indeed, that research and development may be the very way to craft a system that helps with this situation, offering us all options to both protect privacy in digital transactions and allow for the enforcement of sanctions.

What's Next

OFAC should do its part by publicly issuing some basic clarifying information and reducing the ambiguity in its order. Regardless of how one feels about cryptocurrency, mixers, or the blockchain, it’s critical that we ensure the ongoing protection of the development and publication of computer software, especially open source computer software. And while we deplore the misuse of this mixer technology to facilitate ransomware and money laundering, we must also ensure that steps taken to address it continue to honor the Constitution and protect the engines of innovation.

That’s why EFF’s role here is to continue to ensure that the First Amendment is properly interpreted to protect the publication, iteration and collective work of millions of coders around the world. 

Kurt Opsahl

Nonprofit Websites Are Full of Trackers. That Should Change.

3 months 1 week ago

Jump straight to the Online Privacy for Nonprofits Guide to Better Practices

Today, the vast majority of websites and emails that you encounter contain some form of tracking. Third-party cookies let advertisers follow you around the web; tracking pixels in emails confirm whether you’ve opened them; tracking links ensure websites know what you click; some websites even collect data on forms you’ve never actually submitted; still others share detailed interactions, such as appointments you’ve booked, with companies like Facebook. Each of these types of technology works by turning your actions into data: websites with tracking collect and store data about the site you are on, when, and what you are doing there; emails with tracking collect and store data about which email you opened and how you interacted with it. 
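As a concrete illustration of how simple the underlying mechanism is, here is a minimal sketch of an email tracking pixel, written in Python with the Flask web framework. The endpoint, parameter names, and logging are hypothetical, invented for this example; real trackers are more elaborate, but the principle is the same: rendering a tiny, invisible image sends the tracker a request that identifies you, and the request itself is the data collection.

# Hypothetical tracking-pixel endpoint (an illustrative sketch, not any real
# vendor's code). The sender embeds something like
#   <img src="https://tracker.example/pixel.gif?recipient=123">
# in an email; when the recipient's mail client loads the image, this
# handler learns who opened the message, when, and from where.
from datetime import datetime, timezone
from io import BytesIO

from flask import Flask, request, send_file
from PIL import Image

app = Flask(__name__)

@app.route("/pixel.gif")
def pixel():
    # In a real tracker this record would go into a database tied to your
    # marketing profile; here we just print it.
    print({
        "recipient": request.args.get("recipient"),
        "opened_at": datetime.now(timezone.utc).isoformat(),
        "ip": request.remote_addr,
        "user_agent": request.headers.get("User-Agent"),
    })
    # Return a 1x1 image so the email renders normally.
    buffer = BytesIO()
    Image.new("RGB", (1, 1), (255, 255, 255)).save(buffer, format="GIF")
    buffer.seek(0)
    return send_file(buffer, mimetype="image/gif")

Tracking pixels on web pages, tracking links, and third-party cookies all rest on the same principle: ordinary actions generate requests, and those requests carry identifying data to whoever operates the endpoint.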

All of this adds up to an enormous amount of data about you being collected without your permission. That data doesn’t all end up in one place—sometimes it’s collected by individual websites, sometimes by ad tech companies, and sometimes by third parties you’ve never heard of. But regardless of who has the data, it amounts to a massive violation of user privacy that can have far-reaching consequences. Choosing to collect the data of supporters, clients, and visitors isn’t just a marketing, monetary, or ideological decision: it’s a decision that puts people in danger. In a post-Roe world, for example, law enforcement might use internet search histories, online purchases, tracked locations, and other parts of a person’s digital trail as evidence of criminal intent; indeed, they already have.

If you are a nonprofit organization, you may be part of the problem. Unfortunately, a 2021 report from The Markup showed that many nonprofits don’t take threats to privacy seriously. That may be changing: Planned Parenthood, for example, has suspended the use of marketing trackers on some portions of their website in response to the dangers they could create for people seeking information on abortions. Hey Jane, an online provider of abortion pills, has also removed the Meta (Facebook) tracking pixel. 

But there is still significantly more to do. 

For example, you may use tools and software to improve the effectiveness of your marketing, and they may in turn collect copious amounts of data on visitors and clients. That data is often shared with third parties, and from there could make its way to law enforcement or into court. And even if you are working in a space where data collection doesn’t obviously endanger your clients or supporters, don’t forget that what is currently legal may not always be legal. For example, in 2021 legislatures in 22 states introduced bills to ban or otherwise criminalize best practice medical care for transgender young people. There are also many laws that are vague or conflicting: many states have legalized cannabis, for example, but the federal government still considers it illegal. 

Given all this, it’s no stretch to say that the data you’re collecting in order to further your mission could be weaponized against the very people you’re trying to support. Thankfully, it doesn’t have to be that way, and we can prove it—and show you how to fight back. 

We’ve made a guide intended for any nonprofit or civil society group that cares about privacy. Not all of the advice may apply to you, but all of the principles should be helpful for thinking about steps to move you towards better privacy practices. 

We recognize that some nonprofits may rely on various forms of data collection, or even on the surveillance advertising ecosystem, and may be nervous about changing that. In the reproductive rights space, for example, Google Adwords or Facebook ads may be a critical way to drive users to accurate information. For other organizations, knowing how users arrived at a website can be essential to determining the cost-effectiveness of promotional choices. 

It’s reasonable to want to know whether an ad worked, but that knowledge comes at the price of handing information about your users and clients over to a third party.

Still, we understand many nonprofits may be reluctant to throw out all tracking or data collection, or the analytics tools that offer your organization important data. We aren’t asking you to do that. Instead, our goal is to give you the knowledge to decide which data collection and tracking is essential to your mission and which isn’t, and to help you find alternative ways to get the information you need while protecting the privacy of your supporters, clients, and users.

What’s Wrong With Tracking Your Users 

Many ad tech companies argue that pervasive online tracking helps users by connecting them with services and products they want. But this argument assumes that users want to be tracked by default. It ignores the damage done by the online surveillance ecosystem, particularly by behavioral advertising. And it ignores the many inaccurate conclusions ad tech companies draw about the people they track. In fact, there’s plenty of evidence that ad tech doesn’t work nearly as well as it claims, in part due to the fraud that runs rampant in the industry. (EDRi’s report, “Targeted Online,” has a detailed breakdown of problems with the ad-tech industry if you’d like more information.)

The reasons for NOT tracking are myriad: First, you’ll engender goodwill with your supporters. Second, you may not imagine your organization to be a likely target of ransomware or a data breach, but the less data you collect, and the less you share with outside organizations or companies, the less likely it is that your supporters will be harmed if one occurs. Third, data privacy laws vary across regions, and we are in a time of rapid change with respect to those laws; minimizing data collection and retention can help ensure you’re complying with them.

Lastly, sensitive data on those in a variety of advocacy spaces has the potential to be weaponized by law enforcement. Whether you are a small or a large organization, holding onto significantly less data can make the legal process of discovery much less troubling for you–and for your supporters and clients. 

It bears repeating: what is currently legal may not always be legal; administrations change, and what is criminalized (and which laws are enforced, and how) shifts. For example, a record number of bills specifically targeting LGBTQ+ youth have been introduced or passed in the past year, most of which criminalize speech and healthcare. If law enforcement is interested in who is seeking that healthcare information, nonprofits working in that space may be targeted, and the data they hold—in house, on servers, or in the cloud—may all be relevant. And in a post-Roe world, organizations or website operators that work in the reproductive rights space may receive subpoenas and warrants seeking user data that could be employed to prosecute abortion seekers, providers, and helpers. If Target can use recent purchases to determine that a person is likely pregnant, law enforcement can use the data trail a pregnant person creates online to determine that they are considering (or did consider) abortion—and they already have. Many of the privacy concerns that worry us today are simply the latest versions of problems that other people have been living with for years.

Looking at all these reasons together, protecting privacy should be an obvious choice for most nonprofits and civil society organizations. And as if all this isn’t enough, there are plenty of other ways to gain powerful insights about users and supporters without collecting individualized data about their online activity. 

We know, because we walk the talk. For more than thirty years, EFF has fought to protect the rights of the user—the person who’s making use of a technology, such as a website or a smartphone. For us, that includes giving users the ability to choose to not be tracked, to remain anonymous or private, and to not have their data collected without their permission. In keeping with that mission, here’s what we do:

This Website Does Not Track

On the surface, EFF’s website looks pretty similar to other websites out there. But there’s one major difference: we are preserving your privacy to the very best of our ability. Where most sites collect and store significant amounts of visitor data, like your IP address, location, browser, device type, and more, we log only a single byte of your IP address, as well as the referrer page (how you got here, if it’s known), time stamp, user agent, language header, and a hash of all of this information. After seven days we keep only aggregate information from these logs. We also geolocate IP addresses before anonymizing them and store only the country. 

(You can read more about our website’s privacy practices in our privacy policy.)
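
As a rough illustration (our own sketch, not EFF’s actual code), log anonymization along these lines might keep a single byte of the IP address, geolocate the address to a country before discarding it, and store a hash of the remaining fields. The country_lookup function below is a placeholder for whatever geo-IP library a site actually uses.

```python
import hashlib
import time

def anonymized_log_entry(ip: str, referrer: str, user_agent: str,
                         language: str, country_lookup) -> dict:
    """Build a log entry that never stores the full IP address.

    `country_lookup` stands in for a real IP-to-country function; the
    full IP is used only for that lookup and is then thrown away.
    """
    first_octet = ip.split(".")[0]          # keep a single byte of the IP
    country = country_lookup(ip)            # geolocate, then drop the IP
    fingerprint = hashlib.sha256(
        "|".join([first_octet, referrer, user_agent, language]).encode()
    ).hexdigest()
    return {
        "time": int(time.time()),
        "ip_fragment": first_octet,
        "referrer": referrer,
        "user_agent": user_agent,
        "language": language,
        "country": country,
        "hash": fingerprint,
    }

# Example with a dummy lookup standing in for a real geo-IP library.
entry = anonymized_log_entry(
    "203.0.113.42", "https://example.org/", "Mozilla/5.0", "en-US",
    country_lookup=lambda ip: "US",
)
print(entry)
```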

This means that we have less information on visitors than most websites. If we look back at who visited the site a week ago, we can see how many visits each page received and from which countries, but not which individual visitors made them. That is good enough for us to make decisions for our site and our advocacy, and we think it’s enough for most other nonprofits as well.

“But doesn’t this make your work harder?” some of you may be asking. “How can you do research or marketing without these insights?” At times, yes, this lack of information makes our work very slightly more difficult. We rely on donors like you to support our work, and as an advocacy organization, we rely on digital activism to get the word out. Knowing which of our emails are the most read, or having easier access to detailed analytics data about the visitors to our website, could help us do both of these things slightly more effectively. But that would require us to collect large amounts of data about our users, supporters, and followers, and we don’t believe the trade-off is worth it. (We also recognize that unlike many organizations, EFF has on-staff engineers to help determine privacy options and implement them. Still, most groups should be able to take at least some of the steps listed here.)

EFF is an active, growing, and successful organization—as are plenty of other privacy-respecting nonprofits, like the Internet Archive and The Markup, not to mention companies like Basecamp.

So here’s our challenge to other nonprofit organizations, civil society groups, and companies that care about user privacy: turn off tracking.

If you’d like to join us, you can visit: Online Privacy for Nonprofits: A Guide to Better Practices

Jason Kelley

New Bill Would Bring Back Terrible Software and Genetic Patents

3 months 2 weeks ago

A recently introduced patent bill would authorize patents on abstract ideas just for including computer jargon, and would even legalize the patenting of human genes. The “Patent Eligibility Restoration Act,” sponsored by Sen. Thom Tillis (R-NC), explicitly overrides some of the most important Supreme Court decisions of the past 15 years, and would tear down some of the public’s only protections from the worst patent abuses. 

Pro-patent maximalists are trying to label the Tillis bill as a “consensus,” but it’s nothing of the sort. We need EFF supporters to send a message to Congress that it isn’t acceptable to allow patent trolls, or large patent-holders, to hold our technology hostage. 

TAKE ACTION

TELL THE SENATE TO REJECT THE TILLIS PATENT BILL

We Don’t Need 'Do it on a Computer' Patents

Starting in the late 1990s, the U.S. Court of Appeals for the Federal Circuit essentially did away with any serious limits on what could be patented. This court, the top patent appeals court in the U.S., allowed patents on anything that produced a “useful result,” even when that result was just a number. This allowed for a period of more than a decade during which the U.S. Patent Office issued, and the courts enforced, all kinds of ridiculous patents. 

Several Supreme Court decisions eventually limited the power of bad patents. Most importantly, the Supreme Court’s 2014 Alice Corp. v. CLS Bank decision made a clear rule—just adding “on a computer” to an abstract idea isn’t enough to make it patentable. 

The Alice Corp. decision was not a panacea. It did not eliminate the serious problem of patent trolls—that is, companies that have no products or services, but simply sue and threaten others over patents. But it did put a big dent in the patent trolling business. Vaguely worded software patents can still be used to extort money from software developers and small businesses. But when those patents can be challenged in court, they now rarely survive. 

That’s been a huge benefit for individuals and small businesses. Our “Saved by Alice” project details the stories of several small businesses that managed to overcome unjustified patent troll demands because of the Alice Corp. precedent. 

It’s now been eight years since the Alice Corp. decision, and judges have thrown out hundreds of bad patents that couldn’t stand up to this test, including the Ultramercial ad-watching patent discussed below. It’s likely that many more bad patents have been abandoned because their owners know they can’t keep using them to threaten people.

Ten years ago, there weren’t effective legal mechanisms to throw out the worst types of patents. If someone targeted by a patent troll felt the patent was wrongly granted, they’d likely have to pay millions of dollars in patent litigation costs just to take their chances in front of a jury. The Tillis bill would make it easier to use exactly the types of weak, overbroad patents that often threaten startups and small businesses.

Since the Alice Corp. decision, it’s much harder to demand money using questionable patents. That’s why patent trolls, among others, don’t like the decision, and would like to see a bill like this pass to override it. But the Senate should not grant this wish. 

No Patents on 'Business Methods' or 'Mental Processes'

The Tillis bill encodes a version of the old rule that virtually any kind of “business method” is worthy of a patent. It explicitly allows for patents on any “non-technological economic, financial, business, social, cultural, or artistic process,” as long as the process is embodied in a “machine or manufacture.”

In other words, you can take basic human “methods” of doing business, or even socializing, and patent them just by adding a general-purpose computer (or another machine). The Tillis bill does specify that the machine must do more than “merely storing or executing,” but that’s an unclear, if not meaningless, narrowing. It will merely allow patent lawyers to avoid using those exact verbs—“storing” and “executing”—when they’re writing patents.

Software patents are drafted by patent lawyers, who have come up with a lot more ways to describe manipulating data than just “storing” and “executing.” To take just one example, the first claim in the Ultramercial ad-watching patent described an Internet-based process of “receiving” media products, “selecting” a sponsor message, “providing” the media to the public for sale, “restricting” general access, “facilitating” display of the ad, “recording the transaction,” and also “receiving payment.”

The Tillis bill even implicitly authorizes patents on a “mental process,” saying the only kind that wouldn’t be eligible is one that takes place “solely in the human mind.” That would seem to imply that even adding trivial steps like writing things down or communicating information could make a “mental process” patentable. 

If Congress passes the Patent Eligibility Restoration Act, it will destroy one of our best safeguards against abusive patents. The Tillis bill will give an explicit green light to the most aggressive patent trolls, the funders of their litigation, and the attorneys who work for them. They’ll get more outrageous business method patents, and use them to demand payments from working software developers. 

TAKE ACTION

TELL THE SENATE TO REJECT THE TILLIS PATENT BILL

Patenting Human Genes is Wrong and Should Remain Illegal

The Patent Eligibility Restoration Act’s negative impacts won’t be limited to software. The bill proposes to overturn the Supreme Court’s clear rule against getting patents on human genes. 

Genes aren’t inventions. They exist in nature. But for about two decades, the U.S. Patent and Trademark Office wrongfully granted thousands of patents on human genes. The companies seeking these patents claimed that because they had “isolated” the genes outside the body, they should be allowed to hold patents on them. 

One of those companies was Myriad Genetics, which owned patents on two human genes associated with breast and ovarian cancer, BRCA1 and BRCA2. Myriad shut competitors out of the testing market, charged monopoly prices, and even sent cease-and-desist letters to universities and institutes that wanted to do BRCA testing on their own patients. 

In 2009, the ACLU filed a lawsuit challenging the validity of those patents. The ACLU represented patients who couldn’t afford Myriad’s high-priced patented test, and doctors who wanted to perform those tests, but had been stopped by Myriad’s threats. The case ultimately went to the Supreme Court, which struck down the patents. “Separating that gene from its surrounding genetic material is not an act of invention,” stated the Court. 

Incredibly, the Tillis bill carves out the exact same loophole that was shut down by the ACLU’s lawsuit. It supposedly bans patents on “an unmodified human gene, as that gene exists in the body,” but then in the very next section says that any gene that has been “isolated, purified, enriched, or otherwise altered” would be eligible. 

This is the same patent abuse loophole that Myriad used to take advantage of cancer patients and their doctors. That’s why the ACLU has called the bill “a gift to patent lawyers and predatory companies” that risks “the creation of a disturbing market for exclusive rights over material found in nature.” 

We agree. Patent trolls and a few companies that want to make money off patent threats have portrayed this bill as a “consensus,” in the hopes of making it a baseline for negotiations in an upcoming Congress. In reality, it’s an extreme piece of legislation that should be rejected. Tillis’ bill would revive some of the cruelest patent abuses of the past two decades. 

TAKE ACTION

TELL THE SENATE TO REJECT THE TILLIS PATENT BILL

Joe Mullin

Where’s EFF? Why EFF Is Sometimes Quiet About Important Cases and Issues

3 months 2 weeks ago

When legal issues light up the Internet, people turn to EFF for answers. Whether it’s attacks on coders' rights, overreaching copyright claims online, or governments' efforts to censor or spy on people, we are often among the first to hear about troubling events online, and we're frequently the first place people turn to, both for help and for a broader understanding. 

So why are there times when we’re quiet about something big that is happening around digital rights? Why do we sometimes only say general things and decline to take a firm position, drill down into specifics, or provide the legal analysis we are famous for? We know it can be frustrating, and can lead folks to jump to the conclusion that we don’t care, or aren’t watching.

But most of the time, that’s not the case. Instead, we are being quiet or vague for one of three reasons: to protect the people who have asked us for help, because of a specific court requirement, or because we’re investigating and putting a strategy into place. Quite often, it’s some combination of those.    

First, and most of the time, we are protecting the folks who have reached out to us for help. The legal protections for attorney/client communications and attorney work product allow lawyers and their prospective or existing clients to speak frankly with each other and to honestly evaluate the strengths and weaknesses of their cases. But these communications and notes must be kept strictly confidential in order to remain protected. If that confidentiality is broken by either the lawyer or the client, the client or their attorney can be required to reveal their communications, legal strategies, and evaluations to their opponents. The stakes can be very high, since those opponents may be the other side’s lawyers in a civil case or prosecutors who can put the client in jail. Breaching these privileges can seriously hurt the people who ask us for help and undermine our chances of winning a case, so we are very careful to avoid doing so. Indeed, as lawyers, we have strict ethical duties to protect that confidentiality.

Many times, there are multiple people seeking our help, and we need to take time to investigate and decide if we can help them. Even when we cannot, or we cannot help all of them, we almost always take steps to find them other lawyers. And even if we ultimately don’t represent someone, we often learn potentially privileged information as we help find them lawyers. So during this sensitive period, we often stay quiet or say things that are more general and that do not risk the legal or factual status of anyone who has reached out to us for help.  Once they are safely in the care of other lawyers, we can usually talk a bit more freely, but often still need to keep some things confidential to preserve the privilege.   

Second, at times courts or laws limit our ability to speak. EFF’s work combating National Security Letters (NSLs) is a good example. We had to hide our work representing CREDO Mobile, Cloudflare, and, earlier, the Internet Archive, as we worked to free them from the gag orders that surround NSLs. In the case of CREDO and Cloudflare, it took years before we could reveal our relationship, which meant that we could not comment as specifically as we would have liked on a number of things related to the case, or on the overarching issue. This led to many awkward and frustrating conversations with EFF members, as well as with reporters and even members of Congress.

In general, we press as hard as we can to get the legal proceedings made public, especially for cases involving important personal privacy and free speech implications. In nearly all court-ordered or law-required gag situations so far, we have ultimately been able to get the court records unsealed.

Finally, there are times when we are simply not finished investigating a case or situation to determine whether to take it, or are taking the initial steps to put a strategy into place. Here’s a page outlining some of the things we consider when making those decisions. This often involves not only gathering background information, but also conducting a legal and technological analysis of the situation. In short, while EFF often won’t have an immediate hot take on a situation it is directly involved in, when we do speak we will be technically and legally accurate. Where we can, we will tell you where things are going and should go in order to protect the Internet.

While working through this process, the worst thing we could do is to talk publicly before we put a legal strategy in place, understand the technical details, and solidify our role.  This is especially true when the legal situation is in flux, as when emergency legal relief is sought, or when some of the people potentially involved have not yet been notified or identified. 

However, none of this should keep EFF members, the press, or the public from emailing us at info@eff.org when something is happening that potentially requires EFF's involvement. EFF members and the general public are an essential part of our early warning system – crowdsourcing that helps us have a much broader view of what’s going on, and where the important cases are occurring. 

But we hope you will understand if we answer your call or email with limited detail or if we hold back from commenting extensively in the press, on Twitter or other social media, or on our blog. Feel free to point others to this post if they raise concerns. Just because we’re silent or give answers that seem incomplete in public doesn’t mean we don’t care or aren’t working furiously on the issue. 

We believe strongly that everyone’s rights online should be vigorously protected, and sometimes that requires us to be silent.

Related Cases: Barr v. Redacted - Under Seal NSL Challenge 2016; Internet Archive NSL
Cindy Cohn

Arrest of a Stalkerware-maker in Australia Underscores Link Between Stalkerware and Domestic Abuse

3 months 2 weeks ago

The ease with which bad actors can find a worldwide market for malicious apps that spy on people’s digital devices is at the center of an Australian Federal Police case against a man who, starting at the age of 15, wrote a stalkerware application and sold it to 14,500 people in 128 countries.

Australian police last month arrested the man, now 24, and identified at least 201 of his Australian customers, in an investigation that began in 2017 and involved a dozen law enforcement agencies in Europe and Australia, and information provided by Palo Alto Networks and the FBI. The case underscores the sheer scope of the market for stalkerware—the app, costing just $35, was sold for seven years before law enforcement shut it down. Tens of thousands of victims were spied on, police said. Its customers included domestic violence perpetrators and even a child sex offender.

Stalkerware—commercially-available apps that are designed to be covertly installed on another person’s device for the purpose of monitoring their activity without their knowledge or consent—continues to be a huge threat to consumers in general and to survivors of domestic abuse in particular. Research indicates that tens of thousands of people around the world are victims of stalkerware each year; the actual number is probably much higher due to underreporting.

Media outlets reported that Australian police arrested Jacob Wayne John Keen, the creator of the Imminent Monitor stalkerware, on July 24. The tool, one of thousands of commodity Remote Access Tools (or, aptly, RATs), was designed to spy on computers running Windows. The spyware could be installed remotely on a victim’s computer through phishing: the victim is duped into opening an email or text message that looks legitimate but actually installs the malware, handing control of the computer to the perpetrator without the victim’s knowledge or consent.

Much of the focus in discussions of stalkerware is on the malicious apps that run on mobile devices, which stalkers can use to track victims’ locations, among other privacy-invasive uses. But stalkerware that runs on computers is also very dangerous, giving perpetrators access to a great deal of sensitive user information, including passwords and documents. Imminent Monitor, once installed on a victim’s computer, could turn on their webcam and microphone, allow perpetrators to view their documents, photographs, and other files, and record every keystroke entered.

Imminent Monitor’s creator tried to maintain that the app was a legitimate remote desktop utility (monitoring apps of this kind are indeed often used by stalkers for spying). But a Palo Alto Networks report noted that Imminent Monitor hawked nefarious features that kept the presence of the app secret from the user and mined the victim’s computer for cryptocurrency.

The law enforcement investigation of the app targeted both sellers and users. The Australian police were able to identify both the Australian offenders who bought the software and the victims they targeted, which they said was a first for any law enforcement agency. Two hundred and one buyers were identified in Australia alone—half of whom were identified through their PayPal records.

Australian police said that a statistically high percentage of those customers were respondents on domestic violence orders. The app was sold to buyers in 128 countries before its web page was taken down in late 2019, when 85 warrants were executed in Australia and Belgium, 434 devices were seized, including the app-maker’s custom-built computer, and 13 of the app’s most prolific users were arrested. The investigation involved actions in Colombia, Czechia, the Netherlands, Poland, Spain, Sweden and the United Kingdom.

Imminent Monitor’s creator was charged with six counts of committing computer offenses, which together carry a maximum sentence of 20 years in prison.

Every time a stalkerware app is taken down, it’s a victory for users everywhere. Unfortunately, we know that those caught are just the tip of the iceberg. Still, the Imminent Monitor investigation and takedown should serve as a deterrent and send a strong message that, while stalkerware app creators are on the hunt for customers, defenders of spyware victims are on the hunt for them.

Karen Gullo

Bad Data “For Good”: How Data Brokers Try to Hide Behind Academic Research

3 months 2 weeks ago

When data broker SafeGraph got caught selling location information on Planned Parenthood visitors, it had a public relations trick up its sleeve. After the company agreed to remove family planning center data from its platforms in response to public outcry, CEO Auren Hoffman tried to flip the narrative: he claimed that his company’s harvesting and sharing of sensitive data was, in fact, an engine for beneficial research on abortion access. He even argued that SafeGraph’s post-scandal removal of the clinic data was the real problem: “Once we decided to take it down, we had hundreds of researchers complain to us about…taking that data away from them.” Of course, when pressed, Hoffman could not name any individual researchers or institutions.

SafeGraph is not alone among location data brokers in trying to “research wash” its privacy-invasive business model and data through academic work. Other shady actors like Veraset, Cuebiq, Spectus, and X-Mode also operate so-called “data for good” programs with academics, and have seized on the pandemic to expand them. These data brokers provide location data to academic researchers across disciplines, with resulting publications appearing in peer-reviewed venues as prestigious as Nature and the Proceedings of the National Academy of Sciences. These companies’ data is so widely used in human mobility research—from epidemic forecasting and emergency response to urban planning and business development—that the literature has progressed to meta-studies comparing, for example, Spectus, X-Mode, and Veraset datasets.

Data brokers variously claim to be bringing “transparency” to tech or “democratizing access to data.” But these data sharing programs are nothing more than data brokers’ attempts to control the narrative around their unpopular and non-consensual business practices. Critical academic research must not become reliant on profit-driven data pipelines that endanger the safety, privacy, and economic opportunities of millions of people without any meaningful consent. 

Data Brokers Do Not Provide Opt-In, Anonymous Data

Location data brokers do not come close to meeting human subjects research standards. This starts with the fact that meaningful opt-in consent is consistently missing from their business practices. In fact, Google concluded that SafeGraph’s practices were so out of line that it banned any apps using the company’s code from its Play Store, and both Apple and Google banned X-Mode from their respective app stores. 

Data brokers frequently argue that the data they collect is “opt-in” because a user has agreed to share it with an app—even though the overwhelming majority of users have no idea that it’s being sold on the side to data brokers who in turn sell to businesses, governments, and others. Technically, it is true that users have to opt in to sharing location data with, say, a weather app before it will give them localized forecasts. But no reasonable person believes that this constitutes blanket consent for the laundry list of data sharing, selling, and analysis that any number of shadowy third parties are conducting in the background. 

On top of being collected and shared without consent, the data feeding into data brokers’ products can easily be linked to identifiable people. The companies claim their data is anonymized, but there’s simply no such thing as anonymous location data. Information about where a person has been is itself enough to re-identify them: one widely cited study from 2013 found that researchers could uniquely characterize 50% of people using only two randomly chosen time and location data points. Data brokers today collect sensitive user data from a wide variety of sources, including hidden tracking in the background of mobile apps. While techniques vary and are often hidden behind layers of non-disclosure agreements (or NDAs), the resulting raw data they collect and process is based on sensitive, individual location traces.
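
To see why, consider the toy sketch below (synthetic data, not any broker’s real format): given just two known time-and-place observations about a person, it checks how many device IDs in a pile of “anonymized” pings are consistent with both. When only one device matches, the trace has effectively been re-identified.

```python
from datetime import datetime, timedelta

# Toy "anonymized" location pings: (device_id, timestamp, lat, lon).
pings = [
    ("device-a", datetime(2022, 6, 1, 8, 55), 37.7793, -122.4193),
    ("device-a", datetime(2022, 6, 1, 18, 10), 37.7510, -122.4470),
    ("device-b", datetime(2022, 6, 1, 9, 2), 37.7793, -122.4193),
    ("device-b", datetime(2022, 6, 1, 18, 5), 37.8044, -122.2712),
]

def devices_matching(observations, pings, radius_deg=0.001,
                     window=timedelta(minutes=30)):
    """Return the device IDs whose pings are consistent with every
    (time, lat, lon) observation, within a small space/time tolerance."""
    candidates = None
    for obs_time, obs_lat, obs_lon in observations:
        matched = {
            dev for dev, t, lat, lon in pings
            if abs(t - obs_time) <= window
            and abs(lat - obs_lat) <= radius_deg
            and abs(lon - obs_lon) <= radius_deg
        }
        candidates = matched if candidates is None else candidates & matched
    return candidates

# Two facts about a person: near a clinic around 9am, near home around 6pm.
known = [
    (datetime(2022, 6, 1, 9, 0), 37.7793, -122.4193),
    (datetime(2022, 6, 1, 18, 0), 37.7510, -122.4470),
]
print(devices_matching(known, pings))  # {'device-a'}: the trace is re-identified
```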

Aggregating location data can sometimes preserve individual privacy, given appropriate parameters that take into account the number of people represented in the data set and its granularity. But no privacy-preserving aggregation protocols can justify the initial collection of location data from people without their voluntary, meaningful opt-in consent, especially when that location data is then exploited for profit and PR spin. 
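
For contrast, here is a minimal sketch of the kind of threshold-based aggregation described above. The grid size and minimum-device count are illustrative assumptions; the point is simply that cells observed by too few distinct devices are suppressed before any counts are released.

```python
from collections import defaultdict
from datetime import datetime

MIN_DEVICES = 10   # illustrative k-anonymity-style threshold
CELL_SIZE = 0.01   # roughly 1 km grid cells; purely an example parameter

def aggregate(pings):
    """Count distinct devices per (hour, grid cell), then suppress any cell
    observed by fewer than MIN_DEVICES devices before releasing counts."""
    cells = defaultdict(set)
    for device_id, timestamp, lat, lon in pings:
        cell = (
            timestamp.replace(minute=0, second=0, microsecond=0),
            round(lat / CELL_SIZE) * CELL_SIZE,
            round(lon / CELL_SIZE) * CELL_SIZE,
        )
        cells[cell].add(device_id)
    # Only cells seen by enough distinct devices are released at all.
    return {cell: len(devices) for cell, devices in cells.items()
            if len(devices) >= MIN_DEVICES}

# A sparse toy input: with only one device, every cell falls below the
# threshold and the released summary is empty.
print(aggregate([("device-a", datetime(2022, 6, 1, 9, 0), 37.78, -122.42)]))
# -> {}
```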

Data brokers’ products are notoriously easy to re-identify, especially when combined with other data sets. And combining datasets is exactly what some academic studies are doing. Published studies have combined data broker location datasets with Census data, real-time Google Maps traffic estimates, and local household surveys and state Department of Transportation data. While researchers appear to be simply building the most reliable and comprehensive possible datasets for their work, this kind of merging is also the first step someone would take if they wanted to re-identify the data. 

NDAs, NDAs, NDAs

Data brokers are not good sources of information about data brokers, and researchers should be suspicious of any claims they make about the data they provide. As Cracked Labs researcher Wolfie Christl puts it, what data brokers have to offer is “potentially flawed, biased, untrustworthy, or even fraudulent.” 

Some researchers incorrectly describe the data they receive from data brokers. For example, one paper describes SafeGraph data as “anonymized human mobility data” or “foot traffic data from opt-in smartphone GPS tracking.” Another describes Spectus as providing “anonymous, privacy-compliant location data” with an “ironclad privacy framework.” Again, this location data is not opt-in, not anonymized, and not privacy-compliant.

Other researchers make internally contradictory claims about location data. One Nature paper characterizes Veraset’s location data as achieving the impossible feat of being both “fine-grained” and “anonymous.” This paper further states it used such specific data points as “anonymized device IDs” and “the timestamps, and precise geographical coordinates of dwelling points” where a device spends more than 5 minutes. Such fine-grained data cannot be anonymous. 

A Veraset Data Access Agreement obtained by EFF includes a Publicity Clause giving Veraset control over how its partners may disclose Veraset’s involvement in publications, including Veraset’s prerogative to approve language or to remain anonymous as the data source. While the Veraset agreement we’ve seen was with a municipal government, its suggested language appears in multiple academic publications, which indicates that a similar agreement may be in play with academics.

A similar pattern appears in papers using X-Mode data: some use nearly verbatim language to describe the company. They even claim its NDA is a good thing for privacy and security, stating: “All researchers processed and analyzed the data under a non-disclosure agreement and were obligated to not share data further and not to attempt to re-identify data.” But those same NDAs prevent academics, journalists, and others in civil society from understanding data brokers’ business practices, or identifying the web of data aggregators, ad tech exchanges, and mobile apps that their data stores are built on.

All of this should be a red flag for Institutional Review Boards, which review proposed human subjects research and need visibility into whether and how data brokers and their partners actually obtain consent from users. Likewise, academics themselves need to be able to confirm the integrity and provenance of the data on which their work relies.

From Insurance Against Bad Press to Accountable Transparency

Data sharing programs with academics are only the tip of the iceberg. To paper over the dangerous role they play in the online data ecosystem, data brokers forge relationships not only with academic institutions and researchers, but also with government authorities, journalists and reporters, and non-profit organizations. 

The question of how to balance data transparency with user privacy is not a new one, and it can’t be left to the Verasets and X-Modes of the world to answer. Academic data sharing programs will continue to function as disingenuous PR operations until companies are subjected to data privacy and transparency requirements. While SafeGraph claims its data could pave the way for impactful research on abortion access, the fact remains that the very same data puts actual abortion seekers, providers, and advocates in danger, especially in the wake of Dobbs. The sensitive location data these brokers deal in should only be collected and used with specific, informed consent, and subjects must have the right to withdraw that consent at any time. No such consent currently exists.

We need comprehensive federal consumer data privacy legislation to enforce these standards, with a private right of action to empower ordinary people to bring their own lawsuits against data brokers who violate their privacy rights. Moreover, we must pull back the NDAs to allow research investigating these data brokers themselves: their business practices, their partners, how their data can be abused, and how to protect the people whom data brokers are putting in harm’s way.

Gennie Gebhart

General Monitoring is not the Answer to the Problem of Online Harms

3 months 2 weeks ago

Even if you think that online intermediaries should be more proactive in detecting, deprioritizing, or removing certain user speech, requiring intermediaries to review all content before publication—often called “general monitoring” or “upload filtering”—raises serious human rights concerns, both for freedom of expression and for privacy.

General monitoring is problematic both when it is directly required by law and when, though not required, it is effectively mandatory because the legal risks of not doing it are so great. These indirect requirements incentivize platforms to proactively monitor user behavior, filter and check user content, and remove or locally filter anything that is controversial, objectionable, or potentially illegal in order to avoid legal responsibility. This inevitably leads to over-censorship of online content, as platforms seek to avoid liability for failing to act “reasonably” or for not removing user content they “should have known” was harmful.

Whether directly mandated or strongly incentivized, general monitoring is bad for human rights and for users. 

  • Because the scale of online content is so vast, general monitoring commonly relies on automated decision-making tools that reflect the biases of the data they were built on and lead to harmful profiling.
  • These automated upload filters are notoriously error-prone and tend to overblock legally protected expression.
  • Upload filters also contravene the foundational human rights principles of proportionality and necessity by subjecting users to automated and often arbitrary decision-making.
  • The active observation of all files uploaded by users has a chilling effect on freedom of speech and access to information by limiting the content users can post and engage with online.
  • A platform reviewing every user post also undermines users’ privacy rights by providing companies, and thus potentially government agencies, with abundant data about users. This is particularly threatening to anonymous speakers.
  • Pre-screening can lead to enforcement overreach, fishing expeditions (undue evidence exploration), and data retention.
  • General monitoring undermines the freedom to conduct business, adds compliance costs, and undermines alternative platform governance models.
  • Monitoring technologies are even less effective at small platforms, which don’t have the resources to develop sophisticated filtering tools. General monitoring thus cements the gatekeeper role of a few powerful platforms and further marginalizes alternative platform governance models.

We have previously expressed concern about governments employing more aggressive and heavy-handed approaches to intermediary regulation, with policymakers across the globe calling on platforms to remove allegedly legal but ‘undesirable’ or ‘harmful’ content from their sites, while also expecting platforms to detect and remove illegal content. In doing so, states fail to protect fundamental freedom of expression rights and fall short of their obligations to ensure a free online environment with no undue restrictions on legal content, whilst also restricting the rights of users to share and receive impartial and unfiltered information. This has a chilling effect on the individual right to free speech wherein users change their behavior and abstain from communicating freely if they know they are being actively observed—leading to a pernicious culture of self-censorship.

In one of the more recent policy developments on intermediary liability, the European Union recently approved the Digital Services Act (DSA). The DSA rejects takedown deadlines that would have suppressed legal, valuable, and benign speech. EFF helped to ensure that the final language steered clear of intrusive filter obligations. By contrast, the draft UK Online Safety Bill raises serious concerns around freedom of expression by imposing a duty of care on online platforms to tackle illegal and otherwise harmful content and to minimize the presence of certain content types. Intrusive scanning of user content will be unavoidable if this bill becomes law.

So how do we protect user rights to privacy and free speech whilst also ensuring illegal content can be detected and removed? EFF and other NGOs have developed the Manila Principles, which emphasize that intermediaries shouldn’t be held liable for user speech unless the content in question has been fully adjudicated as illegal and a court has validly ordered its removal. It should be up to independent, impartial, and autonomous judicial authorities to determine that the material at issue is unlawful. Leaving these decisions to courts means liability is no longer based on the inaccurate and heavy-handed judgments of platforms. It would also ensure that takedown orders are limited to the specific piece of illegal content identified by a court or similar authority.

EFF has also previously urged regulators to ensure that online intermediaries continue to benefit from exemptions from liability for third-party content, and that any additional obligations do not curtail free expression and consumer innovation. Rules restricting content must be provided by law; be precise, clear, and accessible; and follow due process, respecting the principle that independent judicial authorities should assess content and decide on its restriction. Crucially, intermediaries should not be held liable if they choose not to remove content based on a mere notification by users.

Regulators should instead encourage more effective voluntary action against harmful content and adopt moderation frameworks that are consistent with human rights, keeping the internet free and limiting the power of government agencies to flag and remove potentially illegal content.

Paige Collings