California Leads on Reproductive and Trans Health Data Privacy


In the wake of the Supreme Court’s Dobbs decision, anti-choice sheriffs and bounty hunters will try to investigate and punish abortion seekers based on their internet browsing, private messaging, and phone app location data. We can expect similar tactics from state officials who claim that parents who allow their transgender youth to receive gender-affirming health care should be investigated for child abuse.

So it is great news that California Gov. Gavin Newsom just signed three bills that will help meet these threats: A.B. 1242, authored by Asm. Rebecca Bauer-Kahan; A.B. 2091, authored by Asm. Mia Bonta; and S.B. 107, authored by Sen. Scott Wiener. EFF supported all three bills.

This post summarizes the new California data privacy safeguards and provides a breakdown of the specific places where they change California state law. For those interested, we have included the citations to these changes. These three new laws limit how California courts, government agencies, health care providers, and businesses handle this data. Some provisions create new exemptions from existing disclosure mandates; others create new limits on disclosure.

EFF encourages other states to consider passing similar bills adapted to their own state civil and criminal laws.

New Reproductive and Trans Health Data Exemptions from Old Disclosure Mandates

Law enforcement agencies and private litigants often seek evidence located in other states. In response, many states have enacted various laws that require in-state entities to share data with out-of-state entities. Now that anti-choice states are criminalizing more and more abortions, pro-choice states should create abortion exceptions from these sharing mandates. Likewise, now that anti-trans states are claiming that gender-affirming care for trans youth is child abuse, pro-trans states should create trans health care exceptions from these sharing mandates. California’s new laws do this in three ways.

First, an existing California law provides that California-based providers of electronic communication and remote computing services, upon receipt of an out-of-state warrant, must treat it like an in-state warrant. A.B. 1242 creates an abortion exemption. A provider cannot produce records if it “knows or should know” that the investigation concerns a “prohibited violation.” (See Sec. 8, at Penal Code 1524.2(c)(1)) A “prohibited violation” is an abortion that would be legal in California but is illegal elsewhere. (See Sec. 2, at Penal Code 629.51(5)) Further, warrants must attest that the investigation does not involve a prohibited violation. (See Sec. 8, at Penal Code 1524.2(c)(2))

Second, an existing California law requires state courts to assist in enforcing out-of-state judicial orders. This is California’s version of the Uniform Law Commission’s (ULC’s) Interstate Depositions and Discovery Act. It requires California court clerks to issue subpoenas on request of litigants that have a subpoena from an out-of-state judge. California lawyers may issue subpoenas in such circumstances, too.

A.B. 2091 and S.B. 107 create new abortion and transgender health exemptions to this existing law.

Third, an existing California law requires health care providers to disclose certain kinds of medical information to certain kinds of entities. A.B. 2091 and S.B. 107 create new abortion and transgender health exemptions to this existing law:

  • Providers cannot release medical information about abortion to law enforcement, or in response to a subpoena, based on either an out-of-state law that interferes with California abortion rights, or a foreign penal civil action. (See A.B. 2091, Sec. 2, at Civil Code 56.108)
  • Providers also cannot release medical information about a person allowing a child to receive gender-affirming care, in response to an out-of-state criminal or civil action against such a person. (See S.B. 107, Sec. 1, at Civil Code 56.109; Sec. 10, at Penal Code 1326(c))

All of these new exemptions from old sharing mandates are important steps forward. But that’s not all these three new California bills do.

New Limits on California Judges

To protect the privacy of people seeking reproductive health care, these new laws limit the power of California courts to authorize or compel the disclosure of reproductive health data.

First, A.B. 1242 prohibits California judges from authorizing certain forms of digital surveillance, if conducted for purposes of investigating abortions that are legal in California. These are:

  • Interception of wire or electronic communications. (See Sec. 3, at Penal Code 629.52(e)) Interception captures communications content, such as the words of an email.
  • A pen register or trap and trace device. (See Sec. 5, at Penal Code 638.52(m)) These devices capture communications metadata, such as who called whom and when.
  • A warrant for any item. (See Sec. 7, at Penal Code 1524(h)) This would include digital devices that contain evidence of an abortion, such as a calendar entry.

Second, A.B. 1242 prohibits California judges and court clerks from issuing subpoenas connected to out-of-state proceedings about an individual performing, supporting, aiding, or obtaining a lawful abortion in California. (See Sec. 11, at Penal Code 13778.2(c)(2))

Third, A.B. 2091 bars state and local courts from compelling a person to identify, or provide information about, a person who obtained an abortion, if the inquiry is based on either an out-of-state law that interferes with abortion rights, or a foreign penal civil action. This safeguard also applies in administrative, legislative, and other government proceedings. (See Sec. 6, at Health Code 123466(b))

New Limits on California Government Agencies

Government agencies can also be the source of information regarding reproductive and transgender health care. For example, police might be able to identify who traveled to a health care facility, and government facilities can identify who received what care. So the bills create two new limits on disclosure of health care data by California government agencies.

First, A.B. 1242 and S.B. 107 bar all state and local government agencies in California, and their employees, from providing information to any individual or out-of-state agency regarding a lawful abortion performed in California (A.B. 1242) or gender-affirming health care (S.B. 107).

Second, A.B. 2091 bars prison staff from disclosing medical information about an incarcerated person’s abortion, if the request is based on either an out-of-state law that interferes with California abortion rights, or a foreign penal civil action. (See Sec. 8, at Penal Code 3408(r))

New Limit on California Communication Services

Finally, A.B. 1242 provides a new safeguard to protect people from disclosure requests made to a type of company that holds their information. These are California corporations, and corporations with principal offices in California, that provide electronic communication services. They shall not, in California, provide “records, information, facilities, or assistance” in response to out-of-state legal process (such as a warrant or other court order) related to a prohibited violation. (See Sec. 9, at Penal Code 1546.5(a)) The California Attorney General may enforce this rule. (See Sec. 9, at Penal Code 1546.5(b)) However, covered corporations are not subject to any cause of action for providing such assistance in response to such legal process, unless the corporation “knew or should have known” that the legal process related to a prohibited violation. (See Sec. 9, at Penal Code 1546.5(c))

Next Steps

These three new California laws—A.B. 1242, A.B. 2091, and S.B. 107—are strong protections of reproductive and transgender health data privacy. Other pro-choice and pro-trans states should enact similar laws.

More work remains in California. After these important new laws go into effect, we can expect anti-choice sheriffs and bounty hunters to continue seeking abortion-related data located in the Golden State. So will out-of-state officials seeking to punish parents who allow their kids to get gender-affirming health care. California policymakers must be vigilant, and enact new laws as needed. For example, an existing California law, based on another ULC model, authorizes state courts to command a resident to travel out-of-state to testify in a criminal proceeding. This law may also need an exemption for abortion-related and trans-related information. California officials should also work with companies to identify efforts by anti-choice and anti-trans states to circumvent these new protections and use every tool at their disposal to respond.

Adam Schwartz

EFF to NJ court: Give defendants information regarding police use of facial recognition technology


We’ve all read the news stories: study after study shows that facial recognition algorithms are not always reliable, and that error rates spike significantly for the faces of people of color, especially Black women, as well as trans and nonbinary people. Yet this technology is widely used by law enforcement to identify suspects in criminal investigations. By refusing to disclose the specifics of that process, law enforcement has effectively prevented criminal defendants from challenging the reliability of the technology that ultimately led to their arrest.

This week, EFF, along with EPIC and NACDL, filed an amicus brief in State of New Jersey v. Francisco Arteaga, urging a New Jersey appellate court to allow robust discovery regarding law enforcement’s use of facial recognition technology. In this case, a facial recognition search conducted by the NYPD for New Jersey police was used to determine that Francisco Arteaga was a “match” for the perpetrator of an armed robbery. Despite the centrality of the match to the case, nothing was disclosed to the defense about the algorithm that generated it, not even the name of the software used. Mr. Arteaga asked for detailed information about the search process, with an expert testifying to the necessity of that material, but the court denied those requests.

Comprehensive discovery regarding law enforcement’s facial recognition searches is crucial because, far from being an infallible tool, the process entails numerous steps, all of which have substantial risk of error. These steps include selecting the “probe” photo of the person police are seeking, editing the probe photo, choosing photo databases to which the edited probe photo is compared, the specifics of the algorithm that performs the search, and human review of the algorithm’s results.

Police analysts often select a probe photo from a video still or a cell phone camera, which are more likely to be low quality. The characteristics of the chosen image, including its resolution, clarity, face angle, and lighting, all impact the accuracy of the subsequent algorithmic search. Shockingly, analysts may also significantly edit the probe photo using tools closely resembling those in Photoshop: removing facial expressions or inserting eyes, combining face photographs of two different people even though only one is of the perpetrator, using the blur effect to add pixels to a low-quality image, or using the cloning tool or 3D modeling to add parts of a subject’s face not visible in the original photo. In one outrageous instance, when the original probe photo returned no potential matches from the algorithm, an analyst from the NYPD Facial Identification Section, who thought the subject looked like actor Woody Harrelson, ran another search using the celebrity’s photo instead. Needless to say, these changes significantly elevate the risk of misidentification.

The database of photos to which the probe photo is compared, which could include mugshots, DMV photos, or other sources, can also impact the accuracy of the results, depending on the population that makes up those databases. Mugshot databases will often include more photos of people in over-policed communities, and the resulting search errors are more likely to impact members of those groups.

The algorithms used by law enforcement are typically developed by private companies and are “black box” technology — it is impossible to know exactly how the algorithms reach their conclusions without looking at their source code. Each algorithm is developed by different designers, and trained using different datasets. The algorithms create “templates,” also known as “facial vectors,” of the probe photograph and the photographs in the database, but different algorithms will focus on different points of a face in creating those templates. Unsurprisingly, even when comparing the same probe photo to the same databases, different algorithms will produce different results.

Although human analysts will review the probe photo and candidate list generated by the algorithm for the match to be investigated, numerous studies have shown that humans are prone to misidentifying unfamiliar faces and are subject to the same biases present in facial recognition systems. Human review is also impacted by many other factors, including the analyst’s innate ability to analyze faces, motivation to find a match, fatigue from performing a repetitive task, time limitations, and cognitive and contextual biases.

Despite the grave risk of error, law enforcement remains reticent about its facial recognition systems. In filing this brief, EFF continues to advocate for transparency regarding law enforcement technology.

Hannah Zhao

Victory! Court Unseals Records Showing Patent Troll’s Shakedown Efforts


EFF has prevailed in a years-long effort to make public a series of court records that show how a notorious patent troll, Uniloc, uses litigation threats to extract payments from a variety of businesses.

Uniloc earlier this month complied with a federal district court’s unsealing order by making public redacted versions of several previously sealed documents. That ended more than three years’ worth of litigation, including two appeals, in which EFF sought public access to judicial records in a case between Uniloc and Apple that was shrouded in secrecy.

The case began in 2018 as an effort to make sense of heavily redacted filings in the patent infringement case between Uniloc and Apple. It resulted in greater transparency into how Uniloc coerces businesses into licensing its weak patents, and shields its activities from public scrutiny by claiming any information about its licenses amounts to a trade secret.

The great majority of Uniloc’s previously secret court records are now public. For instance, a list of Uniloc’s trolling victims, which Uniloc sought to keep entirely under seal, is now more than 80% unredacted. The list of the amounts those companies paid is now more than 70% unredacted. Several other key documents—like the contract between Uniloc and a private equity firm that allowed for this patent trolling expedition in the first place—are entirely public.

Although the court ruling did not require Uniloc to make all of its previously secret records public, we are pleased that it rejected Uniloc’s repeated attempts to get blanket secrecy for licensing information, compelled it to disclose information that should never have been sealed in the first place, and affirmed the public’s right to access records filed in federal court.

Unsealed licensing document shows how Uniloc financed its litigation

EFF’s chief purpose in intervening in this case was to understand Apple’s reasons for arguing that Uniloc’s patent lawsuit should be dismissed. Apple supported its argument with evidence showing Uniloc did not have the legal right to assert the patents it accused Apple of infringing. Although the district court ultimately agreed with Apple and dismissed Uniloc’s suit, the key evidence it relied on remained secret. That evidence included a table showing how much money Uniloc made by extracting license payments from companies. Why did Uniloc create that table? To convince a massive private equity firm, Fortress, to give it money to demand payments from other companies—and sue those who tried to resist.

The court’s most recent order provided the public with the first meaningful look at the licensing table by unsealing the identities of more than 70 companies that paid Uniloc a license as well as the amounts they paid. Although the court agreed to redact the names and payments of a small number of companies that submitted statements to the court, it granted the public access to most of the information in the table as well as the total amount of revenue—$105 million—that Uniloc made from these payments.

None of this information should have ever been sealed. Yet as the below images show, it took years of advocacy by EFF to go from a nearly unreadable document to one that sheds light on how Uniloc obtains license payments. For example, the table shows that sometimes Uniloc licenses patents for as little as $2,500, while Activision Blizzard, Inc. paid $3.5 million for a license. But most payments are for less than $300,000. According to the FTC, when patent owners settle for that little, it’s usually a sign of patent trolling, which occurs when the threat of expensive litigation is used to extract settlements rather than to vindicate a patent infringement claim.

Here's the patent-licensing table that Uniloc filed with the court before the unsealing order:


Here's the same table after the unsealing order: 

Other unsealed records highlight the absurdity of Uniloc’s trade secrets claims

Throughout the transparency fight, Uniloc and Apple argued that any details about the companies that paid Uniloc must remain completely under seal to protect those companies’ trade secrets. The court’s most recent order largely rejected those claims. And newly unsealed written testimony shows that the desire for secrecy of some companies sued by Uniloc was rooted in practical concerns about being targeted by other patent trolls.

As one representative wrote in a declaration, disclosing the entity’s name and how much it paid Uniloc would make it more likely that other patent trolls seeking quick payments would target it in the future.

“We agreed to settle this case and enter this Agreement not because of its merits but because of the high cost of defense and the risk of a trial to our small company,” the representative wrote. “Further legal attacks of this sort are an existential threat to our business and we do not wish to become the target of other Non-practicing entities.”

A representative from another entity forced to pay Uniloc echoed those concerns, writing that “other non-practicing entities would be encouraged by knowledge of [the company’s] settlement with non-practicing entity Uniloc to seek nuisance licenses from [the company] in the future.”

And another company’s representative wrote that “even being identified as a party to the Uniloc Document may result in [the company] being a target of future patent litigation.” Similarly, a different company’s representative wrote that disclosing its identity would make “it a target in future litigation campaigns by non-practicing entities.”

The documents belie the trade secrecy claims advanced by Uniloc and Apple, raising legitimate questions about whether they accurately characterized these companies’ concerns in seeking to keep these records secret. As the above quotes show, their concerns were largely centered on protecting their companies, especially small companies, from further patent trolling. Now we know why Uniloc fought so hard to keep these statements out of public sight.

Court praises EFF for its work to vindicate public access to court records

EFF has long fought to bring greater transparency to patent litigation and has supported proposals to shed light on patent trolls. This transparency effort, however, took a number of twists, including Apple joining with Uniloc in avoiding transparency and a bad decision by the U.S. Court of Appeals for the Federal Circuit that appeared to give Uniloc an opportunity to maintain excessive secrecy.

So we were quite pleased when the district court stood up for the public’s right to access court records and required Uniloc to disclose a number of documents in redacted form (you can view them all here). And we were relieved when Uniloc complied with the court order requiring disclosure instead of challenging it yet again.

But we were also humbled by the court’s recognition of EFF’s years-long advocacy on behalf of the public’s right to understand what’s happening in federal courts.

“The Electronic Frontier Foundation has been of considerable assistance to the Court,” the judge wrote. “The real parties herein have jointly aligned themselves against the public interest and EFF has been of enormous help in keeping the system honest. This order recognizes that assistance and thanks EFF.”

EFF will continue to push back on secrecy claims in patent litigation and elsewhere to ensure that the public is able to access court records and understand how patent trolls misuse our legal system to threaten innovation.

Related Cases: Uniloc v. Apple
Aaron Mackey

Google Loses Appeal Against EU's Record Antitrust Fine, But Will Big Tech Ever Change?


The EU continues to crack down on big tech companies with its full arsenal of antitrust rules. This month, Google lost its appeal against a record fine, now slightly trimmed to €4.13 billion, for abusing its dominant position through the tactics it used to keep traffic on Android devices flowing through to the Google search engine. The EU General Court largely upheld the EU Commission’s decision from 2018 that Google had imposed unlawful restrictions on manufacturers of Android mobile devices and mobile network operators in order to cement the dominance of its search engine.

Google's defeat comes as no surprise, as the vast majority of consumers in the EU use Google Search and have the Android operating system installed on their phones. The Court found that Google abused its dominant position by, for example, requiring mobile device manufacturers to pre-install Google Search and the Google Chrome browser in order to use Google’s app store. As a result, users got steered away from competing browsers and search engines, Google's search advertising revenue continued to flow unchallenged, and those revenues funded other anticompetitive and privacy-violating practices.

A High Price For Anti-Competitive Behavior: The EU's Digital Markets Act

The General Court ruling, which Google can still appeal to the EU Court of Justice, reiterates a message that is increasingly being voiced in political circles in Brussels: Anti-competitive behavior must come at a high price. The goal is to bring about a change in behavior among large technology companies that control key services such as search engines, social networks, operating systems, and online intermediary services. The recent adoption of the EU’s Digital Markets Act (DMA) is a prime example of this logic: it tackles anticompetitive practices of the tech sector and proposes sweeping pro-competition regulations with serious penalties for noncompliance. Under the DMA, the so-called “gatekeepers”, the largest platforms that control access to digital markets for other businesses, must comply with a list of do’s and don'ts, all designed to remove barriers companies face in competing with the tech giants. 

The DMA reflects the EU Commission’s experience with enforcing antitrust rules in the digital market. Some of the new requirements forbid app stores from conditioning access on the use of the platform’s own payment systems and ban forced single sign-ons. Other rules make it easier for users to freely choose their browser or search engine. The ruling by the General Court in the Google Android case will make it easier for the EU Commission to decide which gatekeepers and services will fall under the new rules and to hold them accountable.

Will Big Tech Change? Better Tools and Investment Needed

Whether the DMA and confident enforcement actions will actually lead to more healthy competition on the internet remains to be seen. The practices targeted in this lawsuit and in the DMA are some of the most important ways that dominant tech firms raise structural barriers to potential competitors, but other barriers exist as well, including access to capital and programming talent. The success of the EU’s efforts will depend on whether enforcers have the tools to change company practices enough, and in a visible enough way, to encourage investment in new competitors.

Christoph Schmon

Automated License Plate Readers Threaten Abortion Access. Here's How Policymakers Can Mitigate the Risk


Over the last decade, a vast number of law enforcement agencies around the country have adopted a mass surveillance technology that uses cameras to track the vehicles of every driver on the road, with little thought or respect given to the ways this technology might be abused. Now, in the wake of the U.S. Supreme Court's Dobbs ruling, that technology may soon be turned against people seeking abortions, the people who support them, and the workers who provide reproductive healthcare.

We're talking about automated license plate readers (ALPRs). These are camera systems that capture license plate numbers and upload the times, dates, and locations where the plates were seen to massive searchable databases. Sometimes these scans may also capture photos of the driver or passengers in a vehicle.

Sometimes these cameras are affixed to stationary locations. For example, if placed on the only roads in and out of a small town, a police department can monitor whenever someone enters or leaves the city limits. A law enforcement agency could install them at every intersection on major streets to track a person in real time whenever they pass a camera. 

Police can also attach ALPRs to their patrol cars, capturing all the plates they pass. In some cities, police are taught to do "gridding," where they drive up and down every block of a neighborhood to capture data on which cars are parked where. There is also a private company, Digital Recognition Network, that has its own contractors driving around collecting plate data, which it sells to law enforcement.

For years, EFF and other organizations have tried to warn government officials that it was only a matter of time before this technology would be weaponized to target abortion seekers and providers. Unfortunately, few would listen, because it seemed unthinkable that Roe v. Wade could be overturned. That was clearly a mistake. Now cities and states that believe abortion access is a fundamental right must move swiftly and decisively to end or limit their ALPR programs.

How ALPR Data Might Be Used to Enforce Abortion Bans

ALPR technology has long been valued by law enforcement because of the lax restrictions on the data. 

Few states have enacted regulations and, consequently, law enforcement agencies collect as much data as possible on everyone, regardless of any connection to a crime, and store it for excessively long periods of time (a year or two is common). Law enforcement agencies typically do not require officers to get a warrant, demonstrate probable cause or reasonable suspicion, or show much proof at all of a law enforcement interest before searching ALPR data. Meanwhile, as EFF has shown through hundreds of public records requests, it is the norm for agencies to share the ALPR data they collect broadly with other agencies nationwide, without requiring any justification that the other agencies need unfettered access. Police have long argued that you don't have an expectation of privacy when driving on public streets, conveniently dodging how this data could be used to reveal private information about you, such as when you visit a reproductive health clinic.

That means there's very little to stop a determined police investigator from using either their own ALPR systems to enforce abortion bans or accessing the ALPR databases of another jurisdiction to do so. If a state or city wants to protect the right to seek an abortion, they must ensure that places that have criminalized abortion cannot access their data.

Here are a few examples of how this might play out:

Location Searches: Many ALPR software products, such as Motorola Solutions' Vigilant PlateSearch, offer a "Stakeout" feature, which an investigator can use to search for vehicles seen or regularly seen around a specific location. It would be relatively easy for an investigator to query the address of an abortion clinic to reveal the vehicles of patients, doctors, and others who visit a facility. Once obtained, those license plates could be used to reveal the person's identity through a DMV database. Or the license plates could be entered back into the system to reveal the travel patterns of those vehicles, including where they park at night or whether they crossed state lines. Remember, with so many agencies sharing data across state lines, an investigator in a pro-ban jurisdiction can easily query the data from an agency in a jurisdiction that supports abortion access.

Hot Lists: Most ALPR products used by law enforcement allow officers to create a "hot list," essentially a list of license plates that are under suspicion. Whenever a hot-listed plate is spotted by an ALPR, officers are alerted in real-time of its location. These hot lists are frequently shared across jurisdictions, so that police in one jurisdiction can intercept cars that have been flagged by another jurisdiction.

If a state were to create a registry of pregnant people, it could build a hot list of their license plates to track their movements. If a state has criminalized providing, assisting, or giving material support for out-of-state abortions, investigators could create a hot list of "abettor" vehicles. For example, they could scrape public medical licensing databases, retrieve information from an anti-abortion activism website that publishes dossiers on medical professionals, or infiltrate a private Facebook group to obtain the identities of members providing resources to abortion seekers. Then they could query DMV databases to obtain the license plates of those individuals. With a hot list of those plates, ban-enforcement investigators would get an alert whenever a target crosses into their state and can be intercepted for arrest.
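To make concrete how little technical friction stands in the way, the hot-list mechanism described above amounts to a simple set-membership check run against every plate scan. The following is a conceptual sketch only; all plate numbers, locations, and names are hypothetical and do not correspond to any real ALPR product:

```python
# Conceptual sketch of an ALPR hot-list alert: each scanned plate is
# checked against a shared set of flagged plates, and a match triggers
# a real-time alert. All values below are hypothetical.
from dataclasses import dataclass
from datetime import datetime


@dataclass
class PlateScan:
    plate: str       # license plate number read by the camera
    location: str    # where the camera saw it
    seen_at: datetime


# A "hot list" is just a set of flagged plate numbers, often shared
# across jurisdictions with no check on why a plate was added.
hot_list = {"ABC1234", "XYZ9876"}


def check_scan(scan: PlateScan, hot_list: set) -> bool:
    """Return True (and emit an alert) if the scanned plate is hot-listed."""
    if scan.plate in hot_list:
        print(f"ALERT: {scan.plate} seen at {scan.location} "
              f"({scan.seen_at:%Y-%m-%d %H:%M})")
        return True
    return False


scan = PlateScan("ABC1234", "highway entering the state", datetime(2022, 9, 30, 14, 5))
check_scan(scan, hot_list)  # a hot-listed plate triggers an alert
```

The point of the sketch is that nothing in the mechanism itself asks why a plate is on the list; any safeguard has to come from policy, such as the written agreements discussed below.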

While that might seem a bit far-fetched, we would remind policymakers that overturning Roe also once seemed highly unlikely. These are threats we need to address before they become an everyday reality.

What Policy Makers Can Do About ALPR

Through EFF's Atlas of Surveillance project, we have identified nearly 1,000 law enforcement agencies using ALPRs, but we believe this to be a significant undercount. In California, which has taken a hardline stance in favor of abortion access, at least 260 agencies are using ALPRs.

Policymakers in states that support abortion access may be looking for easy solutions. The good news is there is one super easy and instant way to protect data: don't use ALPRs at all. A prosecutor bent on prosecuting abortions can't access your data if you don't collect it.

Unfortunately, few lawmakers have found the courage to take such a solid, strong stance for the privacy rights of their constituents when it comes to ALPRs. And so, we have compiled a few other mitigation methods that lawmakers and agencies can consider.

1. Forbid ALPR Data for Ban Enforcement. Government agencies should explicitly prohibit the use of their ALPR data for abortion ban enforcement, as the city of Nashville recently did. An agency that seeks to protect abortion access could even go so far as to declare using data for ban enforcement as a form of official "misuse," subject to penalties. Another approach is to limit ALPR use to only certain, very specific serious felonies. 

California state law also requires agencies to only use ALPR data in ways that are consistent with privacy and civil liberties. Since abortion access has long been a privacy right in California, agencies should already be doing this.

2. Limit Sharing with External Agencies. Governments should prohibit sharing with external agencies, especially agencies in other states, in order to protect abortion seekers crossing state lines and to protect providers in their state from being investigated by other states. EFF research has found that agencies will frequently give hundreds of other agencies across the country open access to their ALPR databases. Pro-choice municipalities in states with bans should also ensure their data is not being shared with neighboring law enforcement agencies.

An agency that wants to access ALPR data should be required to sign a binding agreement that it will not use data for abortion ban enforcement. Violations of this agreement should result in an agency's access being permanently revoked.

In California, it is illegal for agencies to share ALPR data out of state; nevertheless, many agencies are careless and do not vet the agencies they share with. EFF and the ACLU of Northern California successfully sued the Marin County Sheriff's Office on behalf of community activists over this very issue in a case settled earlier this year. 

On a similar note, law enforcement agencies should not accept hot lists from any agency that has not agreed—in writing—to prohibit the use of ALPR data for abortion ban enforcement. Otherwise, a law enforcement agency in a pro-choice jurisdiction risks alerting an anti-choice jurisdiction of the whereabouts of abortion seekers or reproductive health providers.

3. Reduce the Retention Period. Governments should reduce the retention period dramatically. Many agencies hold data for a year, two years, or even five years. There's really no reason for this. Agencies should consider taking New Hampshire's lead and reduce the retention period to three minutes, except for vehicles already connected to a non-abortion-related crime.
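A New Hampshire-style retention rule is straightforward to enforce in software. A sketch of the purge logic, with a hypothetical record schema:

```python
# Hypothetical sketch of a short ALPR retention window: reads older than
# the window are discarded unless the plate is already tied to an existing
# (non-abortion-related) case. Field names and schema are illustrative.
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(minutes=3)  # New Hampshire-style window

def purge(reads, flagged_plates, now=None):
    """Drop reads older than the retention window, unless case-flagged."""
    now = now or datetime.now(timezone.utc)
    return [
        r for r in reads
        if (now - r["seen_at"]) <= RETENTION or r["plate"] in flagged_plates
    ]

now = datetime.now(timezone.utc)
reads = [
    {"plate": "7ABC123", "seen_at": now - timedelta(minutes=10)},  # stale
    {"plate": "4XYZ789", "seen_at": now - timedelta(minutes=1)},   # recent
]
kept = purge(reads, flagged_plates=set(), now=now)
assert [r["plate"] for r in kept] == ["4XYZ789"]
```

Run on a schedule (or at write time), a rule like this means there is simply no year-old location history for an out-of-state investigator to demand.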

4. No ALPRs Near Reproductive Health Facilities. Law enforcement agencies should not install ALPRs near reproductive health facilities. Agencies should either prohibit their officers from using patrol-vehicle mounted ALPRs to canvass areas around reproductive health facilities, or require them to turn ALPRs off when approaching an area with such a facility.

5. Mitigate the Risk of Third Party Hosting. Agencies should be aware of the risks when they store ALPR data with a cloud service provider. Investigators enforcing an abortion ban may go straight to the cloud service provider with legal process to access ALPR data when they think a pro-choice agency won't voluntarily provide it. Addressing this is complicated and will depend on the resources available to the law enforcement agency. At a minimum, an agency should implement encryption practices sufficient to ensure that only the intended user can access ALPR data, and that third parties, such as vendor employees and other unauthorized parties, cannot. One avenue to explore is locally hosting the ALPR data on servers controlled by the agency, or by a collaborative network of like-minded local agencies. However, agencies should be careful to ensure they are capable of implementing cybersecurity best practices and standards, including encryption and employing staff who are qualified to protect against ever-evolving security threats. Another option is to seek a cloud provider that offers end-to-end encryption, so that the company's employees can't access the encrypted data. This may require a necessary tradeoff: giving up some software features to protect targeted or vulnerable populations, such as abortion seekers.
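The core idea of end-to-end encryption in this context is that records are encrypted on the agency's side before upload, so the cloud vendor only ever holds ciphertext. The toy sketch below uses a one-time XOR pad purely to keep the example dependency-free; a real deployment must use a vetted AEAD cipher (for example AES-GCM, such as via the `cryptography` package) and proper key management:

```python
# Toy sketch of client-side ("end-to-end") encryption of an ALPR record
# before it is handed to a cloud provider. The XOR one-time pad below is
# for illustration only; a real system must use a vetted AEAD cipher and
# real key management, so the vendor never holds the decryption key.
import secrets

def encrypt(data: bytes, key: bytes) -> bytes:
    """XOR the data with a same-length key (XOR is its own inverse)."""
    assert len(key) == len(data)
    return bytes(d ^ k for d, k in zip(data, key))

record = b'{"plate": "7ABC123", "seen_at": "2022-10-04T12:00:00Z"}'
key = secrets.token_bytes(len(record))  # stays with the agency, never uploaded

ciphertext = encrypt(record, key)      # this is all the cloud provider stores
assert ciphertext != record            # vendor employees see only ciphertext
assert encrypt(ciphertext, key) == record  # the agency can still decrypt
```

The tradeoff mentioned above follows directly from this design: features that require the vendor to read the data (server-side search, cross-agency matching) cannot work on ciphertext the vendor cannot decrypt.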

6. Extra Scrutiny for External Requests for Assistance. Even if a law enforcement agency cuts off other agencies' direct access to ALPR data, they may still receive requests for assistance in investigations. Officials must scrutinize these requests closely, since the language used in the request may intentionally obfuscate the connection to an abortion ban. For example, what may be described as a kidnapping or attempted murder may actually be an attempt at abortion ban enforcement from a state with a fetal personhood law. Agencies can try to address this by requiring the requestor to attest that the investigation does not concern abortion.

7. Training. Agencies should ensure that reproductive rights are explicitly covered in all ALPR training (and, for that matter, all training regarding surveillance data). Agencies should not allow ALPR vendors to provide the training courses, since many of these companies sell their products (and the promise of interagency data sharing) to law enforcement agencies in abortion-ban jurisdictions.

8. Robust Audits. Agencies should already be conducting strong and thorough audits of ALPR systems, including data searches. These audits should include examining all searches for potential impacts on access to reproductive healthcare. No user should be able to access the system without documenting the reason and, when applicable, the case or incident number, for each search of an ALPR system or hot list addition.
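The rule that no search runs without a documented reason can be enforced in code rather than by policy alone. A hypothetical sketch (function and field names are illustrative, not any vendor's API):

```python
# Hypothetical sketch of the audit rule above: every ALPR search must
# carry a documented reason (and a case number where applicable), and
# every search is written to an append-only audit log for later review.
from datetime import datetime, timezone
from typing import Optional

audit_log = []

def search_alpr(plate: str, user: str, reason: str,
                case_number: Optional[str] = None):
    """Refuse any search that lacks a documented reason; log the rest."""
    if not reason.strip():
        raise PermissionError("ALPR search denied: no documented reason")
    audit_log.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "plate": plate,
        "reason": reason,
        "case_number": case_number,
    })
    return []  # the actual datastore query would go here

search_alpr("7ABC123", user="officer42",
            reason="stolen vehicle report", case_number="22-00123")
assert audit_log[0]["case_number"] == "22-00123"
```

Because the log records who searched for what and why, an auditor can later flag any query pattern that suggests reproductive-health surveillance.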

Protecting ALPR-Adjacent Data 

In order for ALPR data to be useful, law enforcement agencies often must also access vehicle registration data or criminal justice information systems. 

Pro-choice government officials, particularly at state-level law enforcement agencies and DMVs, must take a hard look at databases that contain information on drivers and vehicles and how that data is shared out of state, and prohibit other states from accessing that data for abortion ban enforcement. If law enforcement in another state refuses to agree to such a restriction, they should no longer have direct access to the system. 

California has already done this in another context. Following the passage of the California Values Act, the California Attorney General defined accessing the statewide law enforcement database for immigration enforcement as misuse. This resulted in revocation of access from a subset of U.S. Immigration and Customs Enforcement offices that refused to sign an agreement to abide by this restriction.

The Problem of Commercial ALPRs

Even if a law enforcement agency takes all these precautions, or shuts down its ALPR program, investigators in abortion ban states still have another avenue to obtain ALPR data: private databases.

For example, Digital Recognition Network (DRN Data), a subsidiary of Motorola Solutions, contracts with private drivers (often repossession companies) to collect ALPR data en masse in major cities around the country. If an officer in an abortion ban state wants to look at ALPR data in a state that guarantees abortion access, but can't connect to the official law enforcement databases, they can go to this commercial database to obtain information going back years.

What's worse is that private actors can also access this database. DRN sells access to ALPR data to private investigators, who only need to check a box saying that they're querying the data for litigation development. With the passage of SB 8 in Texas, private actors now have the ability to sue to enforce the state's abortion ban. Unfortunately, anti-abortion activists have for years been compiling their own databases of license plates of abortion providers; now they can use those lists to query private ALPR databases to surveil abortion seekers and reproductive healthcare providers.

This is a difficult problem to solve, since private ALPR operators have often made First Amendment arguments, asserting a right to photograph license plates and sell that information to subscribers. However, many law enforcement agencies—including major federal agencies—also subscribe to this data. A government agency that purports to support abortion access should consider ending its subscription, since it amounts to subsidizing a surveillance network that will one day, if not already, be used to persecute abortion seekers.

Preventing Predictable Threats

Lawmakers who support reproductive rights need to recognize that abortion access and mass surveillance are incompatible. Years of permitting unrestrained access to privacy-invasive technologies that allow police to collect sensitive data on everyone are the proverbial chickens coming home to roost.

Lawmakers in states like California first saw this happen with surveillance technology turned on immigrant communities. To their credit, they rushed to patch the systems, but they failed to look at the horizon to see what was coming next, such as the persecution of abortion seekers or families of youth seeking gender-affirming healthcare. 

Now these leaders must start undoing the dangerous surveillance systems they've facilitated. They must reject the collect-it-all claims from the law enforcement community that project public safety miracles without surfacing the potential harms. They must start writing future-looking policies for surveillance that anticipate and address the worst case scenarios.

While our guidance above specifically addresses abortion access, we acknowledge a major weakness. The strongest reforms are not piecemeal protections for whichever vulnerable group is under attack at the moment, but a complete overhaul that protects us all. 

Dave Maass

EFF Urges FTC to Address Security and Privacy Problems in Daycare and Early Education Apps

5 days 10 hours ago
An EFF study found the apps compromise young children’s data, and current laws don’t address the problem.

SAN FRANCISCO—The Federal Trade Commission must review the lack of privacy and security protections among daycare and early education apps, the Electronic Frontier Foundation (EFF) urged Wednesday in a letter to Chair Lina Khan.

Daycare and preschool applications frequently include notifications of feedings, diaper changes, pictures, activities, and which guardian picked up or dropped off the child—potentially useful features for easing the separation anxiety of newly enrolled children and their parents.

But EFF Director of Engineering Alexis Hancock’s recent investigation found early education and daycare apps have several troubling security risks. Some allow public access to children’s photos via insecure cloud storage; many have dangerously weak password policies; at least one (Tadpoles for Parents) sends “event” data, including when the app is activated and deactivated, to Facebook; and several enable cleartext traffic that can be exploited by network eavesdroppers.

“Parents find themselves in a bind: either enroll children at a daycare and be forced to share sensitive information with these apps, or don’t enroll them at all,” EFF’s letter to Khan said. “Paths for parents to opt a child out of data sharing are, with rare exception, completely absent.”

“Since parents do not have the tools or proper information to currently assess the privacy and security of their children’s data in daycare and early education apps, the Federal Trade Commission should review the current gaps in the law and assess potential paths to strengthen protections for young children’s data, or investigate other means to improve protections for children’s data in this context,” the letter concludes.

Of 42 daycare apps that privacy experts researched, the makers of 13 did not specify in their privacy policies what data they collect. Among the policies that do describe data collection practices, most admitted to sharing sensitive information (such as the average number of diaper changes per day) with third parties. Only 10 of the 42 apps stated in their privacy policies that they did not share data with third parties – but seven of those 10 were actually doing so anyway.

Current laws don’t address the problem. The Children’s Online Privacy Protection Act only applies to operators of online services “directed to” children under 13; early education and daycare apps, however, are used solely by adults like teachers. The Family Educational Rights and Privacy Act also falls short: It restricts schools from disclosing students’ “education records” to certain third parties without parental consent, but does not regulate the actions of third parties who may receive that data, such as daycare apps.

For EFF’s letter to Federal Trade Commission Chair Lina Khan:

For more on daycare apps’ privacy and security problems:

Contact: Alexis Hancock, Director of Engineering, Certbot; William Budington, Senior Staff
Josh Richman

Google’s Perilous Plan for a Cloud Center in Saudi Arabia is an Irresponsible Threat to Human Rights

6 days 11 hours ago

On August 9, a Saudi woman was sentenced to 34 years in prison by the Kingdom of Saudi Arabia’s notorious specialized criminal court in Riyadh. Her crime? Having a Twitter account and following and retweeting dissidents and activists.

That same day, a federal jury in San Francisco convicted a former Twitter employee of money laundering and other charges for spying—on behalf of the kingdom—on Twitter users critical of the Saudi government.

These are just the latest examples of Saudi Arabia’s dismal track record of digital espionage, including infiltration of social media platforms, cyber surveillance, repression of public dissent, and censorship of those criticizing the government. Yet, against this backdrop of rampant repression and abusive surveillance, Google is moving ahead with plans to set up, in partnership with the state-owned company Saudi Aramco, a massive data center in Saudi Arabia for its cloud computing platform serving business customers.

These cloud data centers, which already exist in Jakarta, Tel Aviv, Berlin, Santiago, Chile, London, Los Angeles, and dozens of other cities around the world, are utilized by companies to run all aspects of their businesses. They store data, run databases, and provide IT for corporate human resources, customer service, legal, security, and communications departments.

As such, they can house reams of personal information on employees and customers, including personnel files, emails, confidential documents, and more. The Saudi-region cloud center is being developed “with a particular focus on businesses in the Kingdom,” Google said.

With Saudi Arabia’s poor human rights record, it’s difficult to see how or even if Google can ensure the privacy and security of people whose data will reside in this cloud. Saudi Arabia has proven time and again that it exploits access to private data to target activists, dissidents, and journalists, and will go to great lengths to illegally obtain information from US technology companies to identify, locate, and punish Saudi citizens who criticize government policies and the royal family.

Saudi agents infiltrated Twitter in 2014 and used their employee credentials to access information about individuals behind certain Twitter accounts critical of the government, including the account owners’ email addresses, phone numbers, IP addresses and dates of birth, according to the U.S. Department of Justice. The information is believed to have been used to identify a Saudi aid worker who was sentenced to 20 years in prison for allegedly using a satirical Twitter account to mock the government.

Meanwhile, a Citizen Lab investigation concluded with “high confidence” that in 2018, the mobile phone of a prominent Saudi activist based in Canada was infected with spyware that allows full access to chats, emails, photos, and the device’s microphone and camera. And just last week, the wife of slain Saudi journalist Jamal Khashoggi announced that she is suing the NSO Group over alleged surveillance of her through Pegasus spyware. These are just a few examples of the Saudi government’s digital war on free expression.

Human rights and digital privacy rights advocates, including EFF, have called on Google to stop work on the data center until it has conducted a due diligence review about the human rights risks posed by the project, and outlined the type of government requests for data that are inconsistent with human rights norms and should be rejected by the company. Thirty-nine human rights and digital rights groups and individuals outlined four specific steps Google should take to work with rights groups in the region in evaluating the risks its plan imposes on potentially affected groups and develop standards for where it should host cloud services.

Google has said that an independent human rights assessment was conducted for the Saudi cloud center and steps were taken to address concerns, but it has not disclosed the assessment or any details about mitigation, such as what steps it is taking to ensure that Saudi agents can’t infiltrate the center the way they did Twitter, how personal data is being safeguarded against improper access, and whether it will stand up against government requests for user data that are legal under Saudi law but don’t comply with international human rights standards.

“The Saudi government has demonstrated time and again a flagrant disregard for human rights, both through its own direct actions against human rights defenders and its spying on corporate digital platforms to do the same,” the rights groups’ statement says. “We fear that in partnering with the Saudi government, Google will become complicit in future human rights violations affecting people in Saudi Arabia and the Middle East region.”

This isn’t the first time Google’s plans to do business with and profit from authoritarian governments has sparked outrage. In 2018, The Intercept revealed that Google was planning to release a censored version of its search engine service inside China. “Project Dragonfly” was a secretive plan to create a censored, trackable search tool for the Chinese government, raising a real risk that Google would directly assist the Chinese government in arresting or imprisoning people simply for expressing their views online.

Google eventually backed down, telling Congress that it had terminated Project Dragonfly. Unfortunately, we have seen no signs that Google is reevaluating its plans for the Saudi cloud center, despite overwhelming evidence that dropping such a trove of potentially sensitive personal data smack dab into a country that has no compunction about accessing information by any means to identify and punish its critics will almost certainly endanger not only activists but everyday people who merely express opinions.

Indeed, in June company leadership at Alphabet, Google’s parent company, urged shareholders to reject a resolution that would require the company to publish a human rights impact assessment and a mitigation plan for data centers located in areas with significant human rights risks, including Saudi Arabia. It even asked the Securities and Exchange Commission to exclude the resolution from its 2022 proxy statement, claiming, among other things, that it had already implemented the resolution’s essential elements.

But this was hardly the case. Specifically, Google has said it is committed to standards in the United Nations Guiding Principles on Business and Human Rights (UNGP) and the Global Network Initiative (GNI) when expanding into new locations. Those standards require “formal reporting” when severe human rights impacts exist as a result of business operations or operating contexts, transparency with the public, and independent assessment and evaluation of how human rights protections are being met.

Google has done the opposite—it’s claimed to have conducted a human rights assessment for the cloud center in Saudi Arabia and addressed “matters identified” in that review, but has issued no details and no public report.

The shareholder resolution was defeated at Alphabet’s annual meeting. The good news is that a majority (57.6%) of independent shareholders voted in favor of it. They are aligned with rights groups that want Google to do the right thing: acknowledge the risks this cloud center poses to human rights in the region, and disclose exactly how it plans to protect people in the face of a government hell-bent on punishing dissent.

If Google can’t live up to its human rights commitments and its claims to have “addressed matters” that literally endanger people’s lives and liberty—and we question whether it can—then it should back off of this perilous plan. EFF and a host of groups around the world and in the region will be watching. 



Karen Gullo

Ban Government Use of Face Recognition In the UK

1 week ago

In 2015, Leicestershire Police scanned the faces of 90,000 individuals at a music festival in the UK and checked these images against a database of people suspected of crimes across Europe. This was the first known deployment of Live Facial Recognition (LFR) at an outdoor public event in the UK. In the years since, the surveillance technology has been frequently used throughout the country with little government oversight and no electoral mandate. 

Face recognition presents an inherent threat to individual privacy, free expression, information security, and social justice. It has an egregious history of misidentifying people of color, leading, for example, to wrongful arrests, as well as failing to correctly identify trans and nonbinary people. Of course, even if the technology somehow achieved 100% accuracy overnight, it would still be an unacceptable tool of invasive surveillance, capable of identifying and tracking people on a massive scale.

EFF has spent the last few years advocating for a ban on government use of face recognition in the U.S., and we have watched and helped as many municipalities enacted bans, including in our own backyard. We have seen enough of the technology's use in the UK as well.

That’s why we are calling for a ban on government use of face recognition in the UK. We are not alone. London-based civil liberties group Big Brother Watch has been driving the fight to end government use of face recognition across the country. Human rights organization Liberty brought the first judicial challenge against police use of live facial recognition, on the grounds that it breached the Human Rights Act 1998. The government’s own privacy regulator raised concerns about the technical bias of LFR technology, the use of watchlist images with uncertain provenance, and ways that the deployment of LFR evades compliance with data protection principles. And the first independent report commissioned by Scotland Yard challenged police use of LFR as lacking an explicit basis and found the technology 81% inaccurate. The independent Ryder Review also recommended the suspension of LFR in public places until further regulations are introduced.

What Is the UK’s Current Policy on Face Recognition? 

Make no mistake: Police forces across the UK, like police in the US, are using live face recognition. That means full-on Minority Report-style real-time attempts to match people’s faces as they walk on the street to databases of photographs, including suspect photos. 

Of the five forces that have used the technology in England and Wales, the silent rollout has been primarily driven by London’s Metropolitan Police (better known as the Met) and South Wales Police, which oversees the over-1-million-person metro area of Cardiff. The technology is often supplied by Japanese tech company NEC Corporation. It scans every face that walks past a camera and checks it against a watchlist of people suspected of crimes or who are court-involved. Successful matches have resulted in immediate arrests. Six police forces in the UK also use Retrospective Facial Recognition (RFR), which compares images obtained by a camera to a police database, but not in real time. Police Scotland has reported its intention to introduce LFR by 2026. By contrast, the Police Service of Northern Ireland apparently has not obtained or implemented face recognition to date.

Unfortunately, the expanding roll-out of this dangerous technology has evaded legislative scrutiny in Parliament. Police forces are unilaterally making the decisions, including whether to adopt LFR and, if so, what safeguards to implement. And earlier this year the UK Government rejected a House of Lords report calling for the introduction of regulations and mandatory training to counter the negative impact that the current deployment of surveillance technologies has on human rights and the rule of law. The evidence that the rules around face recognition need to change is there–many are just unwilling to see it or do anything about it.

Police use of facial recognition was subject to legal review in an August 2020 court case brought by a private citizen against South Wales Police. The Court of Appeal held that the force’s use of LFR was unlawful insofar as it breached privacy rights, data protection laws, and equality legislation. In particular, the court found that the police had too much discretion in determining the location of video cameras and the composition of watchlists.

In light of the ruling, the College of Policing published new guidance: images placed on databases should meet proportionality and necessity criteria, and police should only use LFR when other “less intrusive” methods are unsuitable. Likewise, the then-UK Information Commissioner, Elizabeth Denham, issued a formal opinion warning against law enforcement using LFR for reasons of efficiency and cost reduction alone. Guidance has also been issued on police using surveillance cameras, most notably the December 2020 Surveillance Camera Commissioner’s guidance for LFR, and the January 2022 Surveillance Camera Code of Practice for technology systems connected to surveillance cameras. But these do not provide coherent protections on the individual right to privacy.

London’s Met Police 

Across London, the Met Police uses LFR by bringing a van with mounted cameras to a public place, scanning faces of people walking past, and instantly matching those faces against the Police National Database (PND). 

Images on the PND are predominantly sourced from people who have been arrested, which includes many individuals who were never charged or were cleared of committing a crime. In 2019, the PND reportedly held around 20 million facial images. According to one report, 67 people requested that their images be removed from police databases; only 34 requests were accepted, 14 were declined, and the remainder were pending. Yet the High Court informed the police in 2012 that the biometric details of innocent people were unlawfully held on the database.

This means that once a person is arrested, even if they are cleared, they remain a “digital suspect” having their face searched again and again by LFR. This violation of privacy rights is exacerbated by data sharing between police forces. For example, a 2019 police report detailed how the Met and British Transport Police shared images of seven people with the King’s Cross Estate for a secret use of face recognition between 2016 and 2018.

Between 2016 and 2019, the Met deployed LFR 12 times across London. The first came at Notting Hill Carnival in 2016–the UK’s biggest African-Caribbean celebration. One person was falsely matched. Similarly, at Notting Hill Carnival in 2017, two people were falsely matched and another individual was correctly matched but was no longer wanted. Big Brother Watch reported that at the 2017 Carnival, LFR cameras were mounted on a van behind an iron sheet, thus making it a semi-covert deployment. Face recognition software has been proven to misidentify ethnic minorities, young people, and women at higher rates. And reports of deployments in spaces like Notting Hill Carnival–where the majority of attendees are Black–exacerbate concerns about the inherent bias of face recognition technologies and the ways that government use amplifies police powers and aggravates racial disparities.

 After suspending deployments during the COVID-19 pandemic, the force has since resumed its use of LFR across central London. On 28 January 2022–one day after the UK Government relaxed mask wearing requirements–the Met deployed LFR with a watchlist of 9,756 people. Four people were arrested, including one who was misidentified and another who was flagged on outdated information. Similarly, a 14 July 2022 deployment outside Oxford Street tube station reportedly scanned around 15,600 people’s data and resulted in four “true alerts” and three arrests. The Met has previously admitted to deploying LFR in busy areas to scan as many people as possible, despite face recognition data being prone to error. This can implicate people for crimes they haven’t committed. 

The Met also recently purchased significant amounts of face recognition technology for Retrospective Facial Recognition (RFR) to use alongside its existing LFR system. In August 2021, the Mayor of London’s office approved a proposal permitting the Met to expand its RFR technology as part of a four-year deal with NEC Corporation worth £3,084,000. And whilst LFR is not currently deployed through CCTV cameras, RFR compares images from national custody databases with already-captured images from CCTV cameras, mobile phones, and social media. The Met’s expansion into RFR will enable the force to tap into London’s extensive CCTV network to obtain facial images–with almost one million CCTV cameras in the capital. According to one 2020 report, London is the third most-surveilled city in the world, with over 620,000 cameras. Another report claims that between 2011 and 2022, the number of CCTV cameras more than doubled across the London Boroughs. 

While David Tucker, head of crime at the College of Policing, said RFR will be used “overtly,” he acknowledged that the public will not receive advance notice if an undefined “critical threat” is declared. Cameras are getting more powerful and technology is rapidly improving. And in sourcing images from more than one million cameras, face recognition data is easy for law enforcement to collect and hard for members of the public to avoid. 

South Wales Police

South Wales Police were the first force to deploy LFR in the UK. They have reportedly used the surveillance technology more frequently than the Met, with a June 2020 report revealing more than 70 deployments. Two of these led to the August 2020 court case discussed above. In response to the Court of Appeal’s ruling, South Wales Police published a briefing note claiming that it also used RFR to process 8,501 images between 2017 and 2019 and identified 1,921 individuals suspected of committing a crime in the process. 

South Wales Police have primarily deployed their two flagship facial recognition projects, LOCATE and IDENTIFY, at peaceful protests and sporting events. LOCATE was first deployed in June 2017 during UEFA Champions League Final week and led to the first arrest using LFR, alongside 2,297 false positives from 2,470 ‘potential matches’. IDENTIFY, launched in August 2017, utilizes the Custody Images Database and allows officers to retrospectively search CCTV stills or other media to identify suspects.

South Wales Police also deployed LFR during peaceful protests at an arms fair in March 2018. The force compiled a watchlist of 508 individuals from its custody database who were wanted for arrest and a further six people who were “involved in disorder at the previous event.” No arrests were made. Similar trends are evident in the United States, where face recognition has been used to target people engaging in protected speech, such as deployments at protests surrounding the death of Freddie Gray. Free speech and the right to protest are essential civil liberties, and government use of face recognition at these events discourages free speech, harms entire communities, and violates individual freedoms.

In 2018 the UN Special Rapporteur on the right to privacy criticized the Welsh police’s use of LFR as unnecessary and disproportionate, and urged the government and police to implement privacy assessments prior to deployment to offset violations of privacy rights. The force maintains that it is “absolutely convinced that Facial Recognition is a force for good in policing in protecting the public and preventing harm.” This is despite the fact that face recognition performs worse as the number of people in the database increases: as the likelihood of similar-looking faces grows, matching accuracy decreases.
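That degradation can be made concrete with a back-of-the-envelope calculation. Assuming a fixed, independent false-match rate per comparison (the 0.01% figure below is illustrative, not a measured value for any deployed system), the probability that a probe face falsely matches someone grows rapidly with database size:

```python
# Back-of-the-envelope: probability of at least one false match when one
# probe face is compared against every entry in a database of size n,
# assuming a fixed, independent per-comparison false-match rate.
# The rate below is illustrative, not a measured figure for any system.
fmr = 0.0001  # hypothetical 0.01% false-match rate per comparison

def p_any_false_match(n: int, rate: float = fmr) -> float:
    """P(at least one false match) = 1 - P(no false match in n tries)."""
    return 1 - (1 - rate) ** n

# The larger the database, the more likely a spurious hit becomes.
assert p_any_false_match(1_000) < p_any_false_match(100_000)
```

Under these assumptions, at the roughly 20-million-image scale reported for the Police National Database, some false match becomes a near-certainty for every probe.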

Global Perspectives

Previous legislative initiatives in the UK have fallen off the policy agenda, and calls from inside Parliament to suspend LFR pending legislative review have been ignored. In contrast, European policymakers have advocated for an end to government use of the technology. The European Parliament recently voted overwhelmingly in favor of a non-binding resolution calling for a ban on police use of facial recognition technology in public places. In April 2021, the European Data Protection Supervisor called for a ban on the use of AI for automated recognition of human features in publicly accessible spaces as part of the European Commission’s legislative proposal for an Artificial Intelligence Act. Likewise, in January 2021 the Council of Europe called for strict regulation of the technology and noted in its new guidelines that face recognition technologies should be banned when used solely to determine a person’s skin color, religious or other belief, sex, racial or ethnic origin, age, health, or social status. Civil liberties groups have also called on the EU to ban biometric surveillance on the grounds that it is inconsistent with EU human rights law.

The United States Congress continues to debate ways of regulating government use of face surveillance. Meanwhile, U.S. states and municipalities have taken it upon themselves to restrict or outright ban police use of face recognition technology. Cities across the United States, large and small, have stood up to this invasive technology by passing local ordinances banning its use. If the UK passes strong FRT rules, it would set an example for governments around the world, including the United States.

Next Steps

Face recognition is a dangerous technology that harms privacy, racial justice, free expression, and information security. And the UK’s silent rollout has facilitated unregulated government surveillance of this personal biometric data. Please join us in demanding a ban on government use of face recognition in the UK. Together, we can end this threat.

Paige Collings

Study of Electronic Monitoring Smartphone Apps Confirms Advocates’ Concerns of Privacy Harms

1 week 3 days ago

Researchers at the University of Washington and Harvard Law School recently published a groundbreaking study analyzing the technical capabilities of 16 electronic monitoring (EM) smartphone apps used as “alternatives” to criminal and civil detention. The study, billed as the “first systematic analysis of the electronic monitoring apps ecosystem,” confirmed many advocates’ fears that EM apps allow access to wide swaths of information, often contain third-party trackers, and are frequently unreliable. The study also raises further questions about the lack of transparency involved in the EM app ecosystem, despite local, state, and federal government agencies’ increasing reliance on these apps.

As of 2020, over 2.3 million people in the United States were incarcerated, and an additional 4.5 million were under some form of “community supervision,” including those on probation, parole, pretrial release, or in the juvenile or immigration detention systems. While EM in the form of ankle monitors has long been used by agencies as an “alternative” to detention, local, state, and federal government agencies have increasingly been turning to smartphone apps to fill this function. The way it works is simple: in lieu of incarceration, detention, or an ankle monitor, a person agrees to download an EM app on their own phone that allows the agency to track their location and may require them to submit to additional conditions, such as check-ins involving face or voice recognition. The low costs associated with requiring a person to use their own device for EM likely explain the explosion of EM apps in recent years. Although there is no accurate count of the total number of people who use an EM app as an alternative to detention, in the immigration context alone, nearly 100,000 people today are on EM through the BI SmartLINK app, up from just over 12,000 in 2018. Such widespread use heightens the need for public understanding of these apps and the information they collect, retain, and share.

Technical Analysis

The study’s technical analysis, the first of its kind for these types of apps, identified several categories of problems with the 16 apps surveyed. These include privacy issues related to the permissions these apps request (and often require); concerns around the third-party libraries and trackers they use; questions about who they send data to and how; and some fundamental issues around usability and app malfunctions.

Permissions

When an app wants to collect data from your phone, e.g. by taking a picture with your camera or capturing your GPS location, it must first request permission from you to interact with that part of your device. Because of this, knowing which permissions an app requests gives a good idea of what data it can collect. And while denying unnecessary requests for permission is a great way to protect your personal data, people under EM orders often don’t have that luxury, and some EM apps simply won’t function until all permissions are granted.
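On Android, for instance, every permission an app needs must be declared in its manifest, so listing those declarations gives a quick first pass at what an app can collect. Here is a minimal sketch in Python using only the standard library; the manifest snippet and its permission set are invented for illustration, not taken from any app in the study:

```python
import xml.etree.ElementTree as ET

# Hypothetical manifest for an EM-style app (illustrative only).
MANIFEST = """<manifest xmlns:android="http://schemas.android.com/apk/res/android">
  <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION"/>
  <uses-permission android:name="android.permission.CAMERA"/>
  <uses-permission android:name="android.permission.RECORD_AUDIO"/>
  <uses-permission android:name="android.permission.READ_CONTACTS"/>
</manifest>"""

# ElementTree exposes namespaced attributes with the URI spelled out.
ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

def requested_permissions(manifest_xml: str) -> list[str]:
    """Return the permission names an app's manifest declares, in order."""
    root = ET.fromstring(manifest_xml)
    return [el.attrib[ANDROID_NS + "name"]
            for el in root.iter("uses-permission")]

for perm in requested_permissions(MANIFEST):
    print(perm)
```

A real audit would extract the binary manifest from the APK first (the study's authors describe a more involved pipeline), but the principle is the same: the declared permission list bounds what the app can ask the OS for.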

Perhaps unsurprisingly, almost all of the apps in the study request permissions like GPS location, camera, and microphone access, which are likely used for various check-ins with the person’s EM supervisor. But some apps request more unusual permissions. Two of the studied apps request access to the phone’s contacts list, which the authors note can be combined with the “read phone state” permission to monitor who someone talks to and how often they talk. And three more request “activity recognition” permissions, which report if the user is in a vehicle, on a bicycle, running, or standing still.

Third-Party Libraries & Trackers

App developers almost never write every line of code that goes into their software, instead depending on so-called “libraries” of software written by third-party developers. That an app includes these third-party libraries is hardly a red flag by itself. However, because some libraries are written to collect and upload tracking data about a user, it’s possible to correlate their existence in an app with intent to track, and even monetize, user data.

The study found that nearly every app used a Google analytics library of some sort. As EFF has previously argued, Google Analytics might not be particularly invasive if it were used only in a single app, but combined with its nearly ubiquitous use across the web, it provides Google with a panoptic view of individuals’ online behavior. Worse yet, the app Sprokit “appeared to contain the code necessary for Google AdMob and Facebook Ads SDK to serve ads.” If that is indeed the case, Sprokit’s developers are engaging in the appalling practice of monetizing a captive audience.
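Detecting these libraries can be automated by matching the class names bundled inside an app against known tracker package prefixes, much as auditing projects like Exodus Privacy do. A minimal sketch, with a hypothetical class list and a deliberately tiny signature set (real signature lists contain hundreds of entries):

```python
# Small illustrative subset of tracker package signatures.
TRACKER_PREFIXES = {
    "com.google.android.gms.analytics": "Google Analytics",
    "com.google.android.gms.ads": "Google AdMob",
    "com.facebook.ads": "Facebook Ads",
}

def find_trackers(app_classes):
    """Return the sorted names of trackers whose packages appear in the app."""
    found = set()
    for cls in app_classes:
        for prefix, name in TRACKER_PREFIXES.items():
            if cls.startswith(prefix):
                found.add(name)
    return sorted(found)

# Hypothetical class list, as might be extracted from an APK's dex files:
classes = [
    "com.example.emapp.CheckInActivity",
    "com.google.android.gms.analytics.Tracker",
    "com.facebook.ads.AdView",
]
print(find_trackers(classes))  # ['Facebook Ads', 'Google Analytics']
```

As the paragraph above notes, a hit is evidence of capability, not proof of tracking; confirming actual data flows requires the network analysis described next.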

Information Flows

The study aimed to capture the kinds of network traffic these apps sent during normal operation, but was limited by not having active accounts for any of the apps (either because the researchers could not create their own accounts or did not do so to avoid agreeing to terms of service). Even so, by installing software that allowed them to snoop on app communications, they were able to draw some worrying conclusions about a few of the studied apps.

Nearly half of the apps made requests to web domains that could be uniquely associated with the app. This is important because even though those web requests are encrypted, the domain they are addressed to is not, meaning that whoever controls the network a user is on (e.g., coffee shops, airports, schools, employers, Airbnb hosts) could theoretically know if someone is under EM. One app we’ve already mentioned, Sprokit, was particularly egregious in how often it sent data: every five minutes, it would phone home to Facebook’s ad network endpoint with numerous data points harvested from phone sensors and other sensitive data.
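To illustrate why app-unique domains matter: a network observer sees hostnames in the clear (via DNS lookups and the TLS SNI field) even when the traffic itself is encrypted, so a simple lookup table is enough to flag that an EM app is present on the network. The hostnames below are invented placeholders, not the actual endpoints identified in the study:

```python
# Hypothetical mapping of app-unique hostnames to EM apps. These names
# are made up for illustration; the study does not publish this table.
APP_DOMAINS = {
    "api.sprokit.example": "Sprokit",
    "checkin.emvendor.example": "Vendor X",
}

def identify_apps(observed_hostnames):
    """Infer which EM apps are on a network from cleartext hostnames alone."""
    return {APP_DOMAINS[h] for h in observed_hostnames if h in APP_DOMAINS}

# Hostnames a router operator might log, without decrypting anything:
seen = ["graph.facebook.com", "api.sprokit.example", "example.org"]
print(identify_apps(seen))  # {'Sprokit'}
```

Note that shared endpoints like Facebook's ad network do not give an app away on their own; it is the domains used by only one app that act as a fingerprint.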

It’s worth reiterating that, due to the limitations of the study, this is far from an exhaustive picture of each EM app’s behavior. There are still a number of important open questions about what data they send and how they send it.

App Bugs and Technical Issues

As with any software, EM apps are prone to bugs. But unlike other apps, if someone under EM has issues with their app, they’re liable to violate the terms of their court order, which could result in disciplinary action or even incarceration—issues that those who’ve been subjected to ankle monitors have similarly faced.

To study how bugs and other issues with EM apps affected the people forced to use them, the researchers performed a qualitative analysis of the apps’ Google Play store reviews. These reviews were overwhelmingly negative. Many users report being unable to successfully check in with the app, sometimes due to buggy GPS or facial recognition, and other times due to not receiving notifications for a check-in. One user describes such an issue in their review: “I’ve been having trouble with the check-ins not alerting my phone which causes my probation officer to call and threaten to file a warrant for my arrest because I missed the check-ins, which is incredibly frustrating and distressing.”

Privacy Policies

As many people who use online services and mobile apps are aware, before you can use a service you often have to agree to a lengthy privacy policy. And whether or not you’ve actually read it, you and your data are bound by its terms if you choose to agree. People who are under EM, however, don’t get a say in the matter: the terms of their supervision are what they’ve agreed to with a prosecutor or court, and often those terms will force them to agree to an EM app’s privacy policy.

And some of those policies include heinous terms. For example, while almost all of the apps’ privacy policies contained language about sharing data with law enforcement to comply with a warrant, they also state reasons they’d share that data without a warrant. Several apps mention that data will be used for marketing. One app, BI SmartLINK, even used to have conditions which allowed the app’s developers to share “virtually any information collected through the application, even beyond the scope of the monitoring plan.” After these conditions were called out in a publication by Just Futures Law and Mijente, the privacy policy was taken down.

Legal Issues 

The study also addressed the legal context in which issues around EM arise. Ultimately, legal challenges to EM apps are likely to be difficult. Although the touchstone of the Fourth Amendment’s prohibition against unreasonable searches and seizures is “reasonableness,” courts have long held that probationers and parolees have diminished expectations of privacy, which are outweighed by the government’s interests in preventing recidivism and reintegrating probationers and parolees into the community.

Moreover, the government likely would be able to get around Fourth Amendment challenges by claiming that the person consented to the EM app. But as we’ve argued in other contexts, so-called “consent searches” are a legal fiction. They often occur in high-coercion settings, such as traffic stops or home searches, and leave little room for the average person to feel comfortable saying no. Similarly, here, the choice to submit to an EM app is hardly a choice at all, especially when faced with incarceration as a potential alternative.

Outstanding Questions

This study is the first comprehensive analysis into the ecosystem of EM apps, and lays crucial groundwork for the public’s understanding of these apps and their harms. It also raises additional questions that EM app developers and government agencies that contract with these apps must provide answers for, including:

  • Why EM apps request dangerous permissions that seem to be unrelated to typical electronic monitoring needs, such as access to a phone’s contacts or precise phone state information
  • What developers of EM apps that lack privacy policies do with the data they collect
  • What protections people under EM have against warrantless search of their personal data by law enforcement, or from advertising data brokers buying their data
  • What additional information will be uncovered by being able to establish an active account with these EM apps
  • What information is actually provided about the technical capabilities of EM apps to both government agencies contracting with EM app vendors and people who are on EM apps 

The people who are forced to deal with EM apps deserve answers to these questions, and so does the general public as the adoption of electronic monitoring grows in our criminal and civil systems.

Saira Hussain

San Francisco’s Board of Supervisors Grants Police More Surveillance Powers

1 week 4 days ago

In a 7-4 vote, San Francisco’s Board of Supervisors passed a 15-month pilot program granting the San Francisco Police Department (SFPD) more live surveillance powers. This was despite the objections of a diverse coalition of community groups and civil rights organizations, residents, the Bar Association of San Francisco, and even members of the city’s Police Commission, a civilian oversight body comprising mayoral and Board appointees. The ordinance, backed by the Mayor and the SFPD, enables the SFPD to access live video streams from private non-city cameras for the purposes of investigating crimes, including misdemeanor and property crimes. Once the SFPD gets access, it can continue live streaming for 24 hours. The ordinance authorizes such access by consent of the camera owner or a court order.

Make no mistake, misdemeanors like vandalism or jaywalking happen on nearly every street of San Francisco on any given day—meaning that this ordinance essentially gives the SFPD the ability to put the entire city under live surveillance indefinitely.

This troubling ordinance also allows police to surveil “significant events,” loosely defined as large or high-profile events, “for placement of police personnel.” This essentially gives police a green light to monitor—in real-time—protests and other First Amendment-protected activities, so long as they require barricades or street closures associated with public gatherings. The SFPD has previously been caught using these very same cameras to surveil protests following George Floyd’s murder, and the SF Pride Parade, facts that went unaddressed by the majority of Supervisors who authorized the ordinance.

The Amendments

During the hearing, Supervisor Hillary Ronen introduced two key amendments to mitigate the ordinance’s civil liberties impacts. The first would have prohibited the SFPD from live monitoring public gatherings unless there was an imminent threat of death or bodily harm. It failed 4-7, with the same four Supervisors in the minority as in the vote on the ordinance itself.

The second, which was successful, added stronger reporting requirements on SFPD’s use of live surveillance and the appointment of an independent auditor to assess the efficacy of the pilot program. This amendment was needed to ensure that an independent entity, rather than the SFPD itself, assesses the pilot program’s data to determine exactly how, when, and why these new live monitoring powers were used.

What’s This All About?

During the hearing, several of the Supervisors talked about how San Franciscans are worried about crime, but failed to articulate how giving police live monitoring abilities addresses those fears.

And in fact, many of the examples that both the SFPD and the Supervisors who voted for this ordinance pointed to are the types of situations where live surveillance would not help. Some Supervisors pointed to retail theft or car break-ins as examples of why live surveillance is needed. But under the ordinance, an officer would need to first seek permission from an SFPD captain and then go to a camera owner to request access to live surveillance—steps that would take far longer than the seconds or minutes in which these incidents occur. And if police have reason to believe a crime is about to occur at a particular location, it makes far more sense to send an officer than to go through the process of getting permission to live monitor a camera, which carries the risk of putting an intersection or a pharmacy under constant police surveillance for no reason.

Moreover, as Supervisor Shamann Walton pointed out, police have always been able to get historical footage of crimes simply by sending a request to the camera’s owner—this is especially true of the thousands of Business Improvement District/Community Benefit District cameras from which police have long been obtaining historical footage to build cases or gather evidence. So other than a desire to actively watch large swaths of the city, it’s unclear how live monitoring helps police get anything they couldn’t already get by sending a simple request after the fact.

Which leads us to the sad conclusion that this ordinance isn’t really about the safety of San Franciscans—it’s about security theater. It’s about putting voters at ease that something, anything is being done about crime—even if that move has no discernible effect on crime and, in fact, actively threatens to harm San Francisco’s activists and most vulnerable populations.

A Heartfelt Thank You

A very large coalition pushed back against this ordinance. Without their efforts and the efforts of many other San Franciscans who weighed in during public comment, the 15-month sunset date for the pilot and the independent audit provision would not have been possible.

Commendations should also be heaped upon Supervisors Chan, Preston, Ronen, and Walton for their brave stand at the Board of Supervisors meeting, their sharp critique and questioning of the legislation, and their willingness to listen to concerned community members.

Watching the Watchers

Because this bill has a sunset provision that requires it to be renewed 15 months from now, we have another chance to put on our boots, dust off our megaphones, and fight like hell to protect San Franciscans from police overreach. In the meantime, and along with our coalition, we’ll be monitoring for violations and tracking the data that the SFPD produces. And we’ll be there in 15 months to hopefully prevent the reauthorization of this dangerous ordinance. 

Related Cases: Williams v. San Francisco
Matthew Guariglia

Lawsuit: SMUD and Sacramento Police Violate State Law and Utility Customers’ Privacy by Sharing Data Without a Warrant

1 week 4 days ago
The public power utility and police racially profiled Asian communities in the illegal data-sharing scheme.

SACRAMENTO—The Sacramento Municipal Utility District (SMUD) searches entire zip codes’ worth of people’s private data and discloses it to police without a warrant or any suspicion of wrongdoing, according to a privacy lawsuit filed Wednesday in Sacramento County Superior Court.

SMUD’s bulk disclosure of customer utility data turns its entire customer base into potential leads for police to chase and has particularly targeted Asian homeowners, says the lawsuit filed by the Electronic Frontier Foundation (EFF) and law firm Vallejo, Antolin, Agarwal, and Kanter LLP on behalf of plaintiffs the Asian American Liberation Network, a Sacramento-based nonprofit, and Khurshid Khoja, an Asian American Sacramento resident, SMUD customer, cannabis industry attorney, and cannabis rights advocate. 

“SMUD’s policies claim that ‘privacy is fundamental’ and that it ‘strictly enforces privacy safeguards,’ but in reality, its standard practice has been to hand over its extensive trove of customer data whenever police request it,” said EFF Staff Attorney Saira Hussain. “Doing so violates utility customers’ privacy rights under state law and the California Constitution while disproportionately subjecting Asian and Asian American communities to police scrutiny.”

Utility data has historically provided a detailed picture of what occurs within a home, and the advent of smart utility meters has only sharpened that picture. Smart meters record usage in increments of 15 minutes or less; this granular information is beamed wirelessly to the utility several times each day and can be stored in the utility’s databases for years. As that data accumulates over time, it can support inferences about private daily routines, such as which devices are being used, when they are in use, and how this changes over time.
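To see how readily such inferences fall out of interval data, consider a toy day of 15-minute readings: simple thresholding against an overnight baseline is enough to recover when a household wakes, leaves, and returns. The readings and threshold below are invented for illustration, not drawn from SMUD data:

```python
# Hypothetical day of 15-minute smart-meter readings (kWh per interval):
# a flat overnight baseline, a morning spike, an empty-house stretch,
# and an evening plateau. 96 intervals = 24 hours.
readings = [0.05] * 24                   # 00:00-06:00 overnight baseline
readings += [0.40, 0.90, 0.90, 0.30]     # 06:00-07:00 morning routine
readings += [0.08] * 40                  # 07:00-17:00 house likely empty
readings += [0.60] * 16                  # 17:00-21:00 evening activity
readings += [0.05] * 12                  # 21:00-24:00 night

def active_intervals(readings, baseline=0.10):
    """Indices of 15-minute intervals with usage well above baseline."""
    return [i for i, kwh in enumerate(readings) if kwh > 3 * baseline]

def interval_to_clock(i):
    return f"{(i * 15) // 60:02d}:{(i * 15) % 60:02d}"

active = active_intervals(readings)
print(interval_to_clock(active[0]), "-", interval_to_clock(active[-1]))
# prints the span of above-baseline activity: 06:00 - 20:45
```

Even this crude analysis exposes a daily routine; with years of readings and finer models, far more detailed inferences (down to individual appliances) become possible.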

The California Public Utilities Code says public utilities generally “shall not share, disclose, or otherwise make accessible to any third party a customer’s electrical consumption data ....” except “as required under federal or state law.” The California Public Records Act prohibits public utilities from disclosing consumer data, except “[u]pon court order or the request of a law enforcement agency relative to an ongoing investigation.” 

“Privacy, not discrimination, was what SMUD promised when it rolled out smart meters,” said Monty Agarwal, EFF’s co-counsel at Vallejo, Antolin, Agarwal, and Kanter LLP.

Yet SMUD in recent years has given protected customer data to the Sacramento Police Department, which asked for it on an ongoing basis—without a warrant or any other court order, nor any suspicion of a particular resident—to find possible illicit cannabis grows. The program has been highly lucrative for the city: under a new city ordinance, Sacramento Police in 2017 began issuing large penalties to owners of properties where cannabis was found, levying nearly $100 million in fines in just two years.

About 86 percent of those penalties were levied upon people of Asian descent. The lawsuit alleges that officials intentionally designed their mass surveillance to have this disparate impact on Asian communities. The complaint details how a SMUD analyst who provided data to police excluded homes in a predominantly white neighborhood, as well as how one police architect of Sacramento’s program removed non-Asian names on a SMUD list and sent only Asian-sounding names onward for further investigation.  

“SMUD and the Sacramento Police Department’s mass surveillance program is unlawful, advances harmful stereotypes, and overwhelmingly impacts Asian communities,” said Megan Sapigao, co-executive director of the Asian American Liberation Network. “It’s unacceptable that two public agencies would carelessly flout state law and utility customers’ privacy rights, and even more unacceptable that they targeted a specific community in doing so.”

“California voters rejected discriminatory enforcement of cannabis laws in 2016, while the Sacramento Police Department and SMUD conduct illegal dragnets through utility customer data to continue these abuses to this day,” Khoja said. “This must stop.”

Contact: Saira Hussain, Staff Attorney; Aaron Mackey, Senior Staff Attorney
Josh Richman

How to Ditch Facebook Without Losing Your Friends (Or Family, Customers or Communities)

2 weeks ago

Today, we launch “How to Ditch Facebook Without Losing Your Friends” - a narrated slideshow and essay explaining how Facebook locks in its users, how interoperability can free them, and what it would feel like to use an “interoperable Facebook” of the future, such as the one contemplated by the US ACCESS Act.

Watch the video on the Internet Archive

Watch the video on Youtube

Millions of Facebook users claim to hate the service - its moderation, both high-handed and lax, its surveillance, its unfair treatment of the contractors who patrol it and the publishers who fill it with content - but they keep on using it.

Both Facebook and its critics have an explanation for this seeming paradox: people use Facebook even though they don’t like it because it’s so compelling. For some critics, this is proof that Facebook has perfected an “addictive technology” with techniques like “dopamine loops.” Facebook is rather fond of this critique, as it integrates neatly with Facebook’s pitch to advertisers: “We are so good at manipulating our users that we can help you sell anything.”

We think there’s a different explanation: disgruntled Facebook users keep using the service because they don’t want to leave behind their friends, family, communities and customers. Facebook’s own executives share this belief, as is revealed by internal memos in which those execs plot to raise “switching costs” for disloyal users who quit the service.

“Switching costs” are the economists’ term for everything you have to give up when you switch products or services. Giving up your printer might cost you all the ink you’ve bulk-purchased; switching mobile phone OSes might cost you the apps and media you paid for. 

The switching cost of leaving Facebook is losing touch with the people who stay behind. Because Facebook locks its messaging and communities inside a “walled garden” that can only be accessed by users who are logged into Facebook, leaving Facebook means leaving behind the people who matter to you (hypothetically, you could organize all of them to leave, too, but then you run into a “collective action problem” - another economists’ term describing the high cost of getting everyone to agree to a single course of action).

That’s where interoperability comes in. Laws like the US ACCESS Act and the European Digital Markets Act (DMA) aim to force the largest tech companies to allow smaller rivals to plug into them, so their users can exchange messages with the individuals and communities they’re connected to on Facebook - without using Facebook.

“How to Ditch Facebook Without Losing Your Friends” explains the rationale behind these proposals - and offers a tour of what it would be like to use a federated, interoperable Facebook, from setting up your account to protecting your privacy and taking control of your own community’s moderation policies, overriding the limits and permissions that Facebook has unilaterally imposed on its users.

You can get the presentation as a full video, a highlight reel, a PDF, or a web page. We hope this user manual for an imaginary product will stimulate your own imagination and give you the impetus to demand - or make - something better than our current top-heavy, monopoly-dominated internet.

Cory Doctorow

Giving Big Corporations “Closed Generic” Top-Level Domain Names to Run as Private Kingdoms Is Still a Bad Idea

2 weeks 3 days ago

No business can own the generic word for the product it sells. We would find it preposterous if a single airline claimed exclusive use of the word “air,” or a broadband service tried to stop its rivals from using the word “broadband.” Until this year, it seemed settled that the internet’s top-level domain names (like .com, .org, and so on) would follow the same obvious rule. Alas, ICANN (the California nonprofit that governs the global domain name system) seems intent on taking domains in a more absurd direction by revisiting the thoroughly discredited concept of “closed generics.”

In a nutshell, closed generics are top-level domain names using common words, like “.car.” But unlike other TLDs like “.com,” a closed generic TLD is under the control of a single company, and that company controls all of the domain names within the TLD. This is a terrible idea, for all of the same reasons it has failed twice already. And for one additional reason—defenders of open competition and free expression should not have to fight the same battle a third time.

Closed Generics Rejected and Then Resurrected

The context of this fight is the “new generic top-level domains” process, which expanded the list of “gTLDs” from the original six (.com, .net, .org, .edu, .gov, and .mil) to the 1,400 or so in use today, like .hot, .house, and .horse. In 2012, during the first round of applications to operate new gTLDs, some companies asked for complete, exclusive control over domains like .baby, .blog, .book, .cars, .food, .mail, .movie, .music, .news, .shop, and .video, plus similar terms written in Chinese characters. Most of the applicants were among the largest players in their industries (like Amazon for .book and Johnson & Johnson for .baby).

The outcry was fierce, and ICANN was flooded with public comments. Representatives of domain name registrars, small businesses, non-commercial internet users, and even Microsoft urged ICANN to deny these applications.

Fortunately, ICANN heeded the public’s wishes, telling the applicants that they could operate these top-level domains only if they allowed others to register their own names within those domains. Amazon would not be the sole owner of .book, and Google would not control .map as its private fiefdom. (Some TLDs that are non-generic brand names like .honda, .hermes, and .hyatt were given to the companies that own those brands as their exclusive domains, and some like .pharmacy are restricted to a particular kind of business . . . but not one business.)

A working group within the ICANN community continued to debate the “closed generics” issue, but the working group’s final report in 2020 made no recommendation. Both the supporters and opponents of closed generics tried to find some middle ground, but there was none to be found that protected competition and prevented monopolization of basic words.

That’s where things sat until early this year, when the Chairman of the ICANN Board, out of the blue, asked two bodies that don’t normally make policy to conduct a “dialogue” on closed generics: the ICANN GNSO Council (which oversees community policymaking for generic TLDs) and the ICANN Governmental Advisory Committee (a group of government representatives which, as its name indicates, only “advises”). The Board hasn’t voted on the issue, so it’s not clear how many members actually support moving forward.

The Board’s letter was followed a few days later by a paper from ICANN’s paid staff that claimed to be a “framing paper” on the proposed dialogue. In reality, the paper presented a slanted and one-sided history of the issue, suggesting incorrectly that closed generics were “implicitly” allowed under previous ICANN policies. The notion of “implicit” policy is anathema to a body whose legitimacy depends on open, transparent, and participatory decision-making. What’s more, the ICANN staff paper gave no weight to a huge precedent – one of ICANN’s largest waves of global public input, which was almost unanimously opposed to closed generics.

As the ICANN Board (or at least some of its members) try to start a “dialogue” that would keep the closed generics proposal alive, the staff paper went even further and tried to pre-determine the outcome of that dialogue, by suggesting that some closed generic domains would have to be allowed, as long as lawyers for the massive companies that seek to control those domains could come up with convincing “public interest goals.”

As a result, the land rush for new private kingdoms at the highest level of the internet’s domain name system appears poised to begin again.

Still a Bad, Pro-Monopoly Idea

The problems with giving control of every possible domain name within a generic top-level domain to a single company are the same as they were in 2012 and in 2020.

First, it’s out of step with trademark law. In the US and most countries, businesses can’t register a trademark in the generic term for that kind of business. That’s why a computer company and a record label can get trademarks in the name “Apple,” but a fruit company cannot. Some trademark attorneys in the ICANN community have suggested that the US Supreme Court’s decision in the case means that trademarks in generic words are now fair game, but that’s misleading. The Supreme Court ruled that adding “.com” to a generic word might result in a valid trademark—but the applicant still has to show with evidence that the public associates that domain name with a particular business, not a general category. And that’s still difficult and rare. If trademark law doesn’t allow companies to “own” generic words, as part of a domain name or otherwise, then ICANN shouldn’t be giving a single company what amounts to ownership over those words as top-level domains.

Second, closed generics are bad policy because they give an unfair advantage to businesses that are already large and often dominant in their field. Control of a new gTLD doesn’t come cheap—the application fee alone is several hundred thousand dollars, and ongoing fees to ICANN are also high. Allowing a bookstore owner named Garcia to run a website at a .book domain of their own is a powerful tool for building a new independent business with its own online identity. A business with a memorable, descriptive domain name is less dependent on its placement in Google’s search results, or Facebook’s news feed. If, instead, only Amazon could create websites that ended in .book, the small businesses of the world would lose that competitive boost, and the image of Amazon as the only online bookseller would be even more durable.

Third, closed generics would blast a big hole in the pro-competitive firewall at the heart of ICANN: the rule that registries (the wholesalers like Verisign who operate top-level domains) and registrars (the retailers like Namecheap who register names for internet users) must remain separate. That rule dates from ICANN’s founding in 1998, and was designed to break a monopoly over domain names. The structural separation rule, which is relatively easy to enforce, helps stop new monopolists from arising in the domain name business. Exclusive control over a generic top-level domain would mean that a single company acts as both the registry and the sole registrar for that domain.

The Public Doesn’t Need Closed Generics, and “Public Interest” Promises Don’t Work in ICANN-Land

The ICANN Board’s letter shared the GAC’s 2013 suggestion that closed generics should be allowed if they could be structured to “serve the public interest.” But which “public” might that be? There’s no reason why giving full control of a generic TLD to a single company would serve internet users better than a domain that’s open to all (or at least all members of a particular business or profession). The justifications we’ve seen boil down to arguing that someone, somewhere will come up with an innovative use for a closed generic domain. That simply assumes the conclusion, without explaining why exclusive control is a necessary feature.

On top of that, ICANN does not have a good track record of holding domain registries to the “public interest” promises they make—its enforcement mechanism is slow, cumbersome, and tends to embroil ICANN in content moderation issues, which is something the organization is rightfully forbidden to do.

No More Sequels

Over the decade-plus of ICANN’s project to expand the top-level domains, no company has been allowed to operate a generic TLD as its private kingdom. And despite two rounds of heated debate, the community has not come up with a plan for doing this well or fairly.

It’s time to stop.

The only motive behind the continuing push for “compromise” on the closed generics issue is the wealthiest players’ desire to control the internet’s basic resources. ICANN should put its foot down at last, put the closed generics idea on the shelf, and leave it there.

Mitch Stoltz


2 weeks 4 days ago

Puzzlemaster Aaron Steimle of the Muppet Liberation Front contributed to this post.

Every year, EFF joins thousands of computer security professionals, tinkerers, and hobbyists for Hacker Summer Camp, the affectionate term used for the series of Las Vegas technology conferences including BSidesLV, Black Hat, DEF CON, and more. EFF has a long history of standing with online creators and security researchers at events like these for the benefit of all tech users. We’re proud to honor this community’s spirit of curiosity, so each year at DEF CON we unveil a limited edition EFF member t-shirt with an integrated puzzle for our supporters (check the archive!). This year we had help from some special friends.

"The stars at night are big and bright down on the strip of Vegas"

For EFF’s lucky 13th member t-shirt at DEF CON 30, we had the opportunity to collaborate with iconic hacker artist Eddie the Y3t1 Mize and the esteemed multi-year winners of EFF’s t-shirt puzzle challenge: Elegin, CryptoK, Detective 6, and jabberw0nky of the Muppet Liberation Front.

Extremely Online members' design with an integrated challenge.

The result is our tongue-in-cheek Extremely Online T-Shirt, an expression of our love for the internet and the people who make it great. In the end, one digital freedom supporter solved the final puzzle and stood victorious. Congratulations and cheers to our champion cr4mb0!

But How Did They Do It?

Take a guided tour through each piece of the challenge with our intrepid puzzlemasters from the Muppet Liberation Front. Extreme spoilers ahead! You’ve been warned…


Puzzle 0

The puzzle starts with the red letters on the shirt on top of a red cube. Trying common encodings won’t work, but a quick Google search of the letters will return various results containing InterPlanetary File System (IPFS) links. The cube is also the logo for IPFS. Thus, the text on the shirt resolves to the following IPFS hash/address:


QR codes have a standard format and structure that requires the large squares to be placed in three of the four corners. With this in mind, the image can be seen as four separate smaller squares, with the two middle ones overlapping at the large square in the center. These squares can be reconstructed into a valid QR code using an image editing program.


Resolves to

This site contains two groups of text: a first paragraph made up of four lines, and a second paragraph that looks like Base64-encoded information. Notice that the four lines each start with the same letters as the text on the shirt; they are the IPFS addresses of the remaining puzzles.

Puzzle 1


Wordle players will immediately recognize the style of the puzzle. You can use a wordlist and some regular expressions / pattern matching to identify the only possible solution to this puzzle. Note that the first five words also act as a hint to the theme of each puzzle answer: space/stars.
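As a sketch of that approach, the snippet below filters a tiny, made-up word list against a set of hypothetical Wordle-style constraints; the real puzzle used a full dictionary and the clues printed on the page.

```python
import re

# Hypothetical mini-wordlist for illustration only.
words = ["comet", "cameo", "coral", "nebula", "orbit"]

# Suppose the clues fix "c" in the first position, exclude the letter "t",
# and require an "o" somewhere other than the second position.
def matches(word):
    if not re.fullmatch(r"c[a-z]{4}", word):
        return False
    if "t" in word:
        return False
    return "o" in word and word[1] != "o"

candidates = [w for w in words if matches(w)]
print(candidates)  # -> ['cameo']
```

With a large enough wordlist and the puzzle's actual clues, this kind of filter narrows the field to a single answer.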


Puzzle 2 Challenge Text

Word on the street is that the font of youth is the key.

[Flight enabling bird feature.] + [Short resonant tones, often indicating a correct response.] + [First Fermat Prime]

55rhyykkqisq 4ubhYpYfwg 5pYrmmkks6qi prkuy6qlf eakjZjk4a rhXkgwy6iqhrddb

This puzzle consists of some cryptic clues and a line of ciphertext. First, consider the wording of the initial line: “Word on the street is that the font of youth is the key.” These clues should indicate that the solver will need to look into Microsoft Word Fonts.

Next, to decode the clues in the second line:

  1. Flight enabling bird feature = WING
  2. Short resonant tones, often indicating a correct response = DINGS
  3. First Fermat Prime = 3


Decoding the Cipher Text

55rhyykkqisq 4ubhYpYfwg 5pYrmmkks6qi prkuy6qlf eakjZjk4a rhXkgwy6iqhrddb

The solver now knows that the ciphertext has something to do with Microsoft Word and the Wingdings 3 font. Typed out in Wingdings 3 font, each character results in some type of arrow. The characters are categorized as arrows as follows:

UP: XYhpr5
DOWN: iqs60
LEFT: Zbdftv
RIGHT: aceguw4

Using these arrows as instructions to a pen, one can draw shapes that resemble letters. Each word of the ciphertext should map to a single letter, with a new plot starting after each space.
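The classification step can be automated. This sketch maps each ciphertext character to its arrow direction using the four classes listed above; characters outside those classes (other arrow glyphs in Wingdings 3) are simply skipped here.

```python
# Direction classes taken from the write-up above.
CLASSES = {"U": "XYhpr5", "D": "iqs60", "L": "Zbdftv", "R": "aceguw4"}

def directions(word):
    # Translate each character to U/D/L/R; unlisted glyphs are dropped.
    out = []
    for ch in word:
        for name, chars in CLASSES.items():
            if ch in chars:
                out.append(name)
                break
    return "".join(out)

# First word of the ciphertext: the pen moves up, then back down.
print(directions("55rhyykkqisq"))  # -> "UUUUDDDD"
```

The actual drawing of each letter still has to be done by hand (or by plotting the pen path), but this gets the stroke sequence out of the ciphertext.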


Reading the drawn shapes as letters – the solution: MIMOSA

Puzzle 3

Puzzle solution:

"The name of the game isn’t Craps" and the picture of a person snapping their fingers are references to the game "Snaps." The puzzle uses the rules of Snaps transferred onto a Craps board. Snaps is a game where a clue-giver uses statements and finger-snapping to spell out a well-known name.

Looking at the differences between the given board and a standard Craps board indicates which components are meant to give clues. In a game of Snaps, vowels are indicated by the number of snaps, translated here as the number of pips shown on the colored die. Consonants are indicated with the first letter of a statement given by the clue-giver. On this board, "COME," "NOT PASS BAR," "PASS LINE," and "HOW TO PLAY" have been added or altered, indicating that these statements give the necessary consonants C, N, P, and H by taking the first letter of each statement, as in the game Snaps. The dice have been colored, giving the numbers 1-4 which in Snaps indicate the vowels A, E, I, and O. To order these elements, the rainbow circles to the left of the dice have been colored with the corresponding colors, giving the answer PHOENICIA.

Final answer: PHOENICIA

Puzzle 4

Puzzle Solution:

Unlike the previous puzzles, this image does not take up the entire page, indicating that there might be more information available by inspecting the HTML. Doing so shows that the embedded image has the file name "OrangeJuicePaperFakeBook.jpg." Deconstructing this, "OrangeJuicePaper" clues the word "pulp" and "FakeBook" clues the word "fiction," letting the solver know the puzzle's theme will revolve around the movie Pulp Fiction.

The image itself is hiding information steganographically, and the information can be extracted using the tool steghide. Using steghide on OrangeJuicePaperFakeBook.jpg with no password will write the file QuartDeLivreAvecDuFromage.txt, containing a long series of binary strings of length 8.

'Quart de livre avec du fromage' is 'quarter pounder with cheese' in French. "Do you know what they call a quarter pounder with cheese in Paris?" is a quote from Vincent Vega in Pulp Fiction.

The binary numbers within the file are the ASCII representation of letters and spaces, and can be converted using any of the many tools available upon searching for "binary ASCII converter." Converting the file contents gives legible but nonsensical results:

overconstructed efficiencyapartments coeffect jeffs counterefforts phosphatidylethanolamines eye effed I nonefficient aftereffects theocracy teachereffectiveness inefficaciousnesses a ineffervescibility psychoneuroimmunologically superefficiency coefficientofacceleration o toxic jeffersonian teffs differentialcoefficient milkshake propulsiveefficiency effulges bad lockpick effed upper nonrevolutionaries revolutionarinesses teffs temperaturecoefficient maleffect effable foe butterflyeffect eerie tranquillizing magnetoopticaleffect jeffs plantthermalefficiency nulls rappers I effectiveresistance
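The conversion those online tools perform is simple enough to write by hand: split the file into 8-bit groups and map each group to its ASCII character. The sample string below is a hypothetical excerpt for illustration; the real file encoded the full word list.

```python
# A minimal stand-in for an online "binary ASCII converter".
def binary_to_text(bits):
    return "".join(chr(int(group, 2)) for group in bits.split())

sample = "01101111 01110110 01100101 01110010"
print(binary_to_text(sample))  # -> "over"
```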

These words aren't used directly, but instead the length of each word is relevant. Converting each word to its character count, and then converting that character count to its letter of the alphabet gives: othenyceallitsarzoyaelewithcheersevigcoentevegas
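That length-to-letter step looks like this in code, shown here on just the first five words of the list:

```python
# Map each word to its length, then length n to the n-th letter (1 -> a).
words = "overconstructed efficiencyapartments coeffect jeffs counterefforts".split()
decoded = "".join(chr(ord("a") + len(w) - 1) for w in words)
print(decoded)  # -> "othen"
```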

"They call it a royale with cheese" is another quote from Vincent Vega, also the answer to the previous quote ("Do you know what they call a quarter pounder with cheese in Paris?").

Looking at othenyceallitsarzoyaelewithcheersevigcoentevegas, it contains "they call it a royale with cheese," followed by "vigcent vega." The extra characters mixed in spell 'ones zeroes,' which is a hint that each of the nonsensical words should be converted to a one or a zero themselves. But how? Looking back at the original image, it shows that the EFF score is 1 and the DEF CON score is 0—so represent each word containing the letters "EFF" with a 1, and all other words with a 0. This gives a new binary string, which can itself be again converted to ASCII, giving the ciphertext ymgdzq.
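The scoreboard rule is mechanical once spotted: words containing "eff" score 1 (like EFF on the shirt), everything else scores 0 (like DEF CON), and the resulting bitstring is read as 8-bit ASCII. Applied to the first eight words of the list:

```python
# Words containing "eff" -> 1, all others -> 0; read the bits as ASCII.
words = ("overconstructed efficiencyapartments coeffect jeffs counterefforts "
         "phosphatidylethanolamines eye effed").split()
bits = "".join("1" if "eff" in w else "0" for w in words)
print(bits)               # -> "01111001"
print(chr(int(bits, 2)))  # -> "y"
```

That "y" is the first character of the ciphertext ymgdzq.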

Going back to the quote derived from counting the number of characters in each word, note that Vincent was intentionally misspelled as Vigcent. This is a clue to use a Vigenère cipher to decrypt this new ciphertext with the key vega.

Applying the Vigenère cipher to the text 'ymgdzq' with key 'vega' gives the solution: DIADEM
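Vigenère decryption just subtracts each repeating key letter from the matching ciphertext letter, modulo 26:

```python
# Vigenere decryption over lowercase letters.
def vigenere_decrypt(ciphertext, key):
    return "".join(
        chr((ord(c) - ord(key[i % len(key)])) % 26 + ord("a"))
        for i, c in enumerate(ciphertext)
    )

print(vigenere_decrypt("ymgdzq", "vega"))  # -> "diadem"
```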

Bonus Easter Egg: The first character of each non-eff word in the wordlist results in: opeitapotmblunrfetnri, which anagrams to muppet liberation front.
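The anagram is easy to verify: two strings are anagrams exactly when their sorted letters match.

```python
# Check the Easter-egg anagram by comparing sorted letters.
first_letters = "opeitapotmblunrfetnri"
target = "muppet liberation front".replace(" ", "")
print(sorted(first_letters) == sorted(target))  # -> True
```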


The final block of text is encoded in Base64. Decoding it reveals that the data starts with "Salted__", an artifact of encrypting using OpenSSL.
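That "Salted__" marker is the 8-byte magic OpenSSL writes before the 8-byte salt in password-based encryption, so it shows up at the start of the decoded data. The sketch below checks for it; the sample input is constructed here for illustration, not the real puzzle data.

```python
import base64

# OpenSSL's password-based encryption prepends "Salted__" plus an
# 8-byte salt to the ciphertext; Base64-decoding exposes that prefix.
def looks_openssl_salted(b64_text):
    return base64.b64decode(b64_text).startswith(b"Salted__")

sample = base64.b64encode(b"Salted__" + bytes(8) + b"ciphertext").decode()
print(looks_openssl_salted(sample))  # -> True
```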

Concatenate the answers from the four previous puzzles in alphabetical order to create the passphrase that will be used to decrypt the text. With the block of text placed in a file called final.enc, the openssl command to decrypt the text is as follows:

$ openssl aes-256-cbc -d -in final.enc -out final.txt
enter aes-256-cbc decryption password: DiademMimosaPeacockPhoenicia

Decrypting it reveals the solution to the puzzle:

"On behalf of EFF and Muppet Liberation Front,

congratulations on solving the puzzle challenge!

Email the phrase 'The stars at night are big and bright down on the strip of Vegas' to"


EFF is deeply thankful to the Muppet Liberation Front members for creating this puzzle and Eddie the Y3t1 for designing the artwork. After all, how can we fight for a better digital future without some beauty and brainteasers along the way? The movement for digital rights depends on cooperation and mutual support in our communities, and EFF is grateful to everyone on the team!

Happy Hacking!

Aaron Jue

It’s Time For A Federal Anti-SLAPP Law To Protect Online Speakers

2 weeks 4 days ago

Our country’s fair and independent courts exist to resolve serious disputes. Unfortunately, some parties abuse the civil litigation process to silence others’ speech, rather than resolve legitimate claims. These types of censorious lawsuits have been dubbed Strategic Lawsuits Against Public Participation, or SLAPPs, and they have been on the rise over the past few decades. 

Plaintiffs who bring SLAPPs intend to use the high cost of litigation to harass, intimidate, and silence critics who are speaking out against them. A deep-pocketed plaintiff who files a SLAPP doesn’t need to win the case on the merits—by putting financial pressure on a defendant, along with the stress and time it takes to defend a case, they can take away a person’s free speech rights. 

Fortunately, a bill introduced in Congress today, the SLAPP Protection Act of 2022 (H.R. 8864), aims to deter vexatious plaintiffs from filing these types of lawsuits in federal court.



To stop lawsuits that are meant to harass people into silence, we need strong anti-SLAPP laws. When people get hit with a lawsuit because they’re speaking out on a matter of public concern, effective anti-SLAPP law allows for a quick review by a judge. If it’s determined that the case is a SLAPP, the lawsuit gets thrown out, and the SLAPP victim can recover their legal fees. 

In recent years, more states have passed new anti-SLAPP laws or strengthened existing ones.  Those state protections are effective against state court litigation, but they don’t protect people who are sued in federal court. 

Now, a bill has been introduced that would make real progress in stopping SLAPPs in federal courts. The SLAPP Protection Act will provide strong protections to nearly all speakers who are discussing issues of public concern. The SLAPP Protection Act also creates a process that will allow most SLAPP victims in federal court to get their legal fees paid by the people who bring the SLAPP suits. (Here’s our blog post and letter supporting the last federal anti-SLAPP bill that was introduced, more than seven years ago.) 

“Wealthy and powerful corporate entities are dragging citizens through meritless and costly litigation, to expose anyone who dares to stand up to them to financial and personal ruin,” said bill sponsor Rep. Jamie Raskin (D-MD) at a hearing yesterday in which he announced the bill. 

SLAPPs All Around 

SLAPP lawsuits in federal court are increasingly being used to target activists and online critics. Here are a few recent examples: 

Coal Ash Company Sued Environmental Activists

In 2016, activists in Uniontown, Alabama—a poor, predominantly Black town with a median per capita income around $8,000—were sued for $30 million by a Georgia-based company that put hazardous coal ash into Uniontown’s residential landfill. The activists were sued over statements on their website and Facebook page, that said things like the landfill “affected our everyday life,” and “You can’t walk outside, and you can not breathe.” The plaintiff settled the case after the ACLU stepped in to defend the activist group. 

Shiva Ayyadurai Sued A Tech Blog That Reported On Him

In 2016, technology blog Techdirt published articles disputing Shiva Ayyadurai’s claim to have “invented email.” Techdirt founder Mike Masnick was hit with a $15 million libel lawsuit in federal court. Masnick fought back in court and his reporting remains online, but the legal fees had a big effect on his business. 

Logging Company Sued Greenpeace 

In 2016, environmental non-profit Greenpeace was sued along with several individual activists by Resolute Forest Products. Resolute sued over blog post statements such as Greenpeace’s allegation that Resolute’s logging was “bad news for the climate.” (After four years of litigation, Resolute was ordered to pay nearly $1 million in fees to Greenpeace—because a judge found that California’s strong anti-SLAPP law should apply.) 

Pipeline Company Sued Environmental Activists

In 2017, Greenpeace, Rainforest Action, the Sierra Club, and other environmental groups were sued by Energy Transfer Partner because they opposed the Dakota Access Pipeline project. Energy Transfer said that the activists’ tweets, among other communications, amounted to a “fraudulent scheme” and that the oil company should be able to sue them under RICO anti-racketeering laws, which were meant to take on organized crime. 

Congressman Sued His Twitter Critics 

In 2019, anonymous Twitter accounts were sued by Rep. Devin Nunes, then a Congressman representing parts of Central California. Nunes used lawsuits to attempt to unmask and punish two Twitter users who used the handles @DevinNunesMom and @DevinCow to criticize his actions as a politician. Nunes filed these actions in a state court in Henrico County, Virginia. The location had little connection to the case, but Virginia’s lack of an anti-SLAPP law has enticed many plaintiffs there. 

The Same Congressman Sued Media Outlets For Reporting On Him

Over the next few years, Nunes went on to sue many other journalists who published critical articles about him, using state and federal courts to sue CNN, The Washington Post, his hometown paper the Fresno Bee, and NBC. 

Fast Relief From SLAPPs

The SLAPP Protection Act meets EFF's criteria for a strong anti-SLAPP law. It would be a powerful tool for defendants hit with a federal lawsuit meant to take away their free speech rights. If the bill passes, any defendant sued for speaking out on a matter of public concern would be allowed to file a special motion to dismiss, which will be decided within 90 days. If the court grants the speaker’s motion, the claims are dismissed. In many situations, speakers who prevail on an anti-SLAPP motion will be entitled to their legal fees. 

The bill won’t reduce protections under state anti-SLAPP laws, either. So in cases where a state law is as strong or stronger, the federal bill will act as a floor, not a ceiling, for the rights of SLAPP defendants. 

EFF has been defending the rights of online speakers for more than 30 years. A strong federal anti-SLAPP law will bring us closer to the vision of an internet that allows anyone to speak out and organize for change, especially when they speak against those with more power and resources. Anti-SLAPP laws enhance the rights of all. We hope Congress passes the SLAPP Protection Act soon. 



Joe Mullin

Members of Congress Urge FTC to Investigate Fog Data Science

2 weeks 4 days ago

In the week since EFF and the Associated Press exposed how Fog Data Science purchases geolocation data on hundreds of millions of digital devices in the United States, and maps them for easy-to-use and cheap mass surveillance by police, elected officials have voiced serious concerns about this dangerous tech.

In a strong letter to Lina Khan, the chair of the Federal Trade Commission (FTC), Rep. Anna Eshoo of California on Tuesday criticized the “significant Fourth Amendment search and seizure concerns” raised by Fog and urged the FTC to investigate fully. As public records obtained by EFF show, police often use Fog’s mass surveillance tools without a warrant, in violation of our Fourth Amendment rights.

Eshoo wrote:

“The use of Fog is also seemingly incompatible with protections against unlawful search and seizure guaranteed by the Fourth Amendment. Consumers do not realize that they are potentially nullifying their Fourth Amendment rights when they download and use free apps on their phones. It would be hard to imagine consumers consenting to this if actually given the option, yet this is functionally what occurs.”

Eshoo also pointed out the new threat that Fog’s surveillance tool poses to people seeking reproductive healthcare. If abortion has been criminalized in their state, Fog’s Reveal tool could potentially allow police, without a warrant, to draw a geofence around a health clinic across state lines in a state where abortion is not criminalized, letting them see whether any phones that visit the clinic return to their own state. “In a post Roe v. Wade world, it’s more important than ever to be highly mindful of how tools like Fog Reveal may present new threats as states across the country pass increasingly draconian bills restricting people’s access to abortion services and targeting people seeking reproductive healthcare,” Eshoo wrote.

The FTC recently sued another company selling geolocation data, Kochava, a commendable step to hold the company accountable for its unfair practices.

Eshoo is not alone. Senator Ron Wyden said in a tweet about Fog’s ability to facilitate mass surveillance, “Unfortunately, while it’s outrageous that data brokers are selling location data to law-enforcement agencies, it’s not surprising.”

We echo Eshoo’s request that the FTC conduct a full and thorough investigation into Fog Data Science. We continue to urge Congress to act quickly to regulate this out-of-control industry that jeopardizes our privacy, and allows police to conduct warrantless mass surveillance.  

Matthew Guariglia

The Fight to Overturn FOSTA, an Unconstitutional Internet Censorship Law, Continues

2 weeks 4 days ago

More than four years after its enactment, FOSTA remains an unconstitutional law that broadly censored the internet and harmed sex workers and others by chilling their ability to speak, organize, and access information online.

And the fight to overturn FOSTA continues. Last week, two human rights organizations, a digital library, a sex worker activist, and a certified massage therapist filed their opening brief in a case that seeks to strike down the law for its many constitutional violations.

Their brief explains to a federal appellate court why FOSTA is a direct regulation of people’s speech that also censors online intermediaries that so many rely upon to speak—classic First Amendment violations. The brief also details how FOSTA has harmed the plaintiffs, sex workers, and allies seeking to decriminalize the work and make it safer, primarily because of its vague terms and its conflation of sex work with coercive trafficking.

“FOSTA created a predictable speech-suppressing ratchet leading to ‘self-censorship of constitutionally protected material’ on a massive scale,” the plaintiffs, Woodhull Freedom Foundation, Human Rights Watch, The Internet Archive, Alex Andrews, and Eric Koszyk, argue. “Websites that support sex workers by providing health-related information or safety tips could be liable for promoting or facilitating prostitution, while those that assist or make prostitution easier—i.e., ‘facilitate’ it—by advocating for decriminalization are now uncertain of their own legality.”

FOSTA created new civil and criminal liability for anyone who “owns, manages, or operates an interactive computer service” and creates content (or hosts third-party content) with the intent to “promote or facilitate the prostitution of another person.” The law also expands criminal and civil liability to classify any online speaker or platform that allegedly assists, supports, or facilitates sex trafficking as though they themselves were participating “in a venture” with individuals directly engaged in sex trafficking.

FOSTA doesn’t just seek to hold platforms and hosts criminally responsible for the actions of sex-traffickers. It also introduces significant exceptions to the civil immunity provisions of one of the internet’s most important laws, 47 U.S.C. § 230. These exceptions create new state law criminal and civil liability for online platforms based on whether their users' speech might be seen as promoting or facilitating prostitution, or as assisting, supporting or facilitating sex trafficking.

The plaintiffs are not alone in viewing FOSTA as an overbroad censorship law that has harmed sex workers and other online speakers. Four friend-of-the-court briefs filed in support of their case this week underscore FOSTA’s disastrous consequences. 

The Center for Democracy & Technology’s brief argues that FOSTA negated the First Amendment’s protections for online intermediaries and thus undercut the vital role those services provide by hosting a broad and diverse array of users’ speech online.

“Although Congress may have only intended the laudable goal of halting sex trafficking, it went too far: chilling constitutionally protected speech and prompting online platforms to shut down users’ political advocacy and suppress communications having nothing to do with sex trafficking for fear of liability,” CDT’s brief argues.

A brief from the Transgender Law Center describes how FOSTA’s breadth has directly harmed lesbian, gay, transgender, and queer people.

“Although FOSTA’s text may not name gender or sexual orientation, FOSTA’s regulation of speech furthers the profiling and policing of LGBTQ people, particularly TGNC people, as the statute’s censorial effect has resulted in the removal of speech created by LGBTQ people and discussions of sexuality and gender identity,” the brief argues. “The overbroad censorship resulting from FOSTA has resulted in real and substantial harm to LGBTQ people’s First Amendment rights as well as economic harm to LGBTQ people and communities.”

Two different coalitions of sex worker advocacy and harm reduction groups filed briefs in support of the plaintiffs that show FOSTA’s direct impact on sex workers and how the law’s conflation of consensual sex work with coercive trafficking has harmed both victims of trafficking and sex workers.

A brief led by Call Off Your Old Tired Ethics (COYOTE) of Rhode Island published data from its recent survey of sex workers showing that FOSTA has made sex trafficking more prevalent and harder to combat.

“Every kind of sex worker, including trafficking survivors, have been impacted by FOSTA precisely because its broad terms fail to distinguish between different types of sex work and trafficking,” the brief argues. The brief goes on to argue that FOSTA’s First Amendment problems have “made sex work more dangerous by curtailing the ability to screen clients on trusted online databases, also known as blacklists.”

A brief led by Decriminalize Sex Work shows that “FOSTA is part of a legacy of federal and state laws that have wrongfully conflated human trafficking and adult consensual sex work while overlooking the realities of each.”

“The limitations on free speech caused by FOSTA have essentially censored harm reduction and safety information sharing, removed tools that sex workers used to keep themselves and others safe, and interrupted organizing and legislative endeavors to make policies that will enhance the wellbeing of sex workers and trafficking survivors alike,” the brief argues. “Each of these effects has had a devastating impact on already marginalized and vulnerable communities; meanwhile, FOSTA has not addressed nor redressed any of the issues cited as motivation for its enactment.”

The plaintiffs’ appeal marks the second time the case has gone up to the U.S. Court of Appeals for the District of Columbia. The plaintiffs previously prevailed in the appellate court when it ruled in 2020 that they had the legal right, known as standing, to challenge FOSTA, reversing an earlier district court ruling.

Members of Congress have also been concerned about FOSTA’s broad impacts, with senators introducing the SAFE SEX Workers Study Act for the last two years, though it has not become law.

The plaintiffs are represented by Davis, Wright Tremaine LLP, Walters Law Group, Daphne Keller, and EFF.

Related Cases: Woodhull Freedom Foundation et al. v. United States
Aaron Mackey

San Francisco Police Must End Irresponsible Relationship with the Northern California Fusion Center

2 weeks 4 days ago

In yet another failure to follow the rules, the San Francisco Police Department is collaborating with the regional fusion center with nothing in writing—no agreements, no contracts, nothing— governing the relationship, according to new records released to EFF in its ongoing complaint against the agency.

This means that there is no document in place that establishes the limits and responsibilities for sharing and handling criminal justice data or intelligence between SFPD and the fusion center and other law enforcement agencies who access sensitive information through its network.

SFPD must withdraw immediately from any cooperation with the Northern California Regional Information Center (NCRIC). Any moment longer it continues to collaborate with NCRIC puts sensitive data and the civil rights of Bay Area residents at severe risk.

Fusion centers were started in the wake of 9/11 as part of a Department of Homeland Security program to improve data sharing between local, state, tribal, and federal law enforcement agencies. There are 79 fusion centers across the United States, each with slightly different missions and responsibilities, ranging from generating open-source intelligence reports to monitoring camera networks. NCRIC historically has served as the Bay Area hub for sharing data across agencies from automated license plate readers (ALPRs), face recognition, social media monitoring, drone operations, and "Suspicious Activity Reports" (SARs).

NCRIC requires all participating agencies to sign a data sharing agreement and non-disclosure agreement ("Safeguarding Sensitive But Unclassified Information"), which is consistent with federal guidelines for operating a fusion center. EFF has independently confirmed with NCRIC staff that SFPD has not signed such an agreement. This failure is even more surprising considering that SFPD has had two liaisons assigned to the fusion center and the police chief has served as chair of NCRIC's executive board.

In December 2020, EFF filed a public records request under the San Francisco Sunshine Ordinance, following a San Francisco Chronicle report suggesting that an SFPD officer had submitted a photo of a suspect to the fusion center's email list and received in response a match generated by face recognition, which would potentially violate San Francisco's face recognition ban. We sought records related to this particular case, but more generally, we sought communications related to other requests for photo identification submitted by SFPD, communications about face recognition, and any agreements between SFPD and NCRIC.

When SFPD failed to comply with our records request, we filed a complaint with the San Francisco Sunshine Ordinance Task Force, the citizen body assigned to oversee violations of open records and meetings laws. Many new documents were released and SFPD was found by the task force to have violated both the Sunshine Ordinance and the California Public Records Act. One document was missing though: the fusion center agreement.

New records released in the complaint now explain why: no such agreements exist. SFPD didn't sign any, according to multiple emails sent between staff.

SFPD can't simply solve this problem by signing the boilerplate agreement tomorrow. Any formal partnership or data-sharing relationship with NCRIC would have to go through the process required by the city's surveillance oversight ordinance, which requires public input into such agreements and the Board of Supervisors' approval. SFPD should expect public opposition to its involvement with the fusion center, just as there was opposition to its involvement in the FBI's Joint Terrorism Task Force.

Even if that process were to move forward, the public must be involved in crafting the exact language of the agreement. For example, when the Bay Area Rapid Transit (BART) Police Department pursued an agreement with NCRIC, the grassroots advocacy group Oakland Privacy (an Electronic Frontier Alliance member) helped negotiate an agreement with stronger considerations for civil liberties and privacy.

This isn't the first time SFPD has played fast and loose with data regulations. EFF is currently suing the department for accessing a live camera network to spy on protesters without first following the process required by the surveillance oversight ordinance. EFF has also filed a second Sunshine Ordinance complaint against SFPD for failing to produce a mandated ALPR report in response to a public records request.

This latest episode re-emphasizes that SFPD has not earned the trust of the people when it comes to its use of technology and data. SFPD should be cut off from NCRIC immediately, and the Board of Supervisors should treat any claim about accountability from SFPD with skepticism. SFPD has proven it doesn't believe rules matter, and that should always be a deal-breaker when it comes to surveillance. 

Related Cases: Williams v. San Francisco
Dave Maass

EFF’s “Cover Your Tracks” Will Detect Your Use of iOS 16’s Lockdown Mode

3 weeks ago

Apple’s new iOS 16 offers a powerful tool for its most vulnerable users. Lockdown Mode reduces the avenues attackers have to hack into users’ phones by disabling certain often-exploited features. But while Lockdown Mode provides a solid defense against intrusion, it is also trivial for a website to detect that the feature is enabled on a device. Our web fingerprinting tool Cover Your Tracks now detects Lockdown Mode and alerts users when we’ve determined they have it enabled.

Over the last few years, journalists, human rights defenders, and activists have increasingly become targets of sophisticated hacking campaigns. At a small cost to usability, at-risk users can protect themselves from commonly used entry points into their devices. One such entry point is the remote fonts a webpage downloads: font rendering is complex enough that maliciously crafted fonts have been used to gain access to devices, so iOS 16 in Lockdown Mode blocks remote fonts from loading. However, a small piece of JavaScript on the page can easily determine whether a font was blocked.
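A page can infer this using ordinary web APIs. The sketch below assumes a browser environment; the font name, the font URL, and the measurement string are illustrative placeholders, not Cover Your Tracks’ actual code. The idea: attempt to load a remote font, then measure the same text in the remote font stack and in the fallback alone. If the remote font was blocked, both measurements use the fallback and come out identical.

```javascript
// Pure decision helper: if the remote font never loaded, text styled with
// "RemoteTestFont, monospace" falls back to monospace, so the two widths
// are equal. (Assumes the remote font's glyph metrics differ from the
// fallback's, which detection tools pick fonts to guarantee.)
function remoteFontBlocked(widthWithRemoteFont, widthFallbackOnly) {
  return widthWithRemoteFont === widthFallbackOnly;
}

// Browser glue (illustrative): measure a test string in a given font stack.
function measure(fontFamily) {
  const ctx = document.createElement("canvas").getContext("2d");
  ctx.font = `16px ${fontFamily}`;
  return ctx.measureText("mmmmmmmmmmlli").width;
}

async function detectLockdownSignal() {
  // Hypothetical remote font URL; under Lockdown Mode the fetch is refused.
  const font = new FontFace("RemoteTestFont", "url(/fonts/test.woff2)");
  try {
    await font.load();
    document.fonts.add(font);
  } catch (e) {
    // A load failure here is itself already a strong signal.
  }
  return remoteFontBlocked(
    measure("RemoteTestFont, monospace"),
    measure("monospace")
  );
}
```

Note that this signal only says "remote fonts are blocked on this device," which Lockdown Mode causes but which other configurations (strict content blockers, some enterprise policies) can cause too; a fingerprinting script treats it as one bit among many.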

While a large win for endpoint security, this is also a small loss for privacy. Few people are likely to enable Lockdown Mode compared to the millions who use iOS devices, so enabling it makes those users stand out from the crowd as people who likely need extra protection. Web fingerprinting is a powerful technique for determining a user's browsing habits, and it circumvents the normal mechanisms users have to avoid tracking, such as clearing cookies.

Make no mistake: Apple’s introduction of this powerful new protection is a welcome development for those who need it most. But users should also be aware of the information they are exposing to the web while using this feature.

Bill Budington

U.S. Federal Employees Can Take A Stand for Digital Freedoms

3 weeks 3 days ago

It’s that time of the year again when the weather starts to cool down and the leaves start to turn all different shades and colors. More importantly, it is also time for U.S. federal employees to pledge their support for digital freedoms through the Combined Federal Campaign (CFC)!

The pledge period for the CFC is underway and EFF needs your help. Last year, U.S. federal employees raised over $34,000 for EFF through the CFC, helping us fight for free expression, privacy, and innovation on the internet so that we can help create a better digital future.

The Combined Federal Campaign is the world’s largest and most successful annual charity campaign for U.S. federal employees and retirees. Since its inception in 1961, the CFC fundraiser has raised more than $8.6 billion for local, national, and international charities. This year’s campaign runs from September 1 to January 14, 2023. Be sure to make your pledge for the Electronic Frontier Foundation before the campaign ends!

U.S. federal employees and retirees can give to EFF via payroll deduction, credit/debit, or e-check by clicking the DONATE button on the CFC's online pledge portal! If you have a renewing pledge, you can increase your support as well. Be sure to use EFF’s CFC ID #10437.

This year’s CFC campaign theme builds on 2020’s “You Can Be The Face of Change.” U.S. federal employees and retirees give through the CFC to change the world for the better, together. With your support, EFF can continue our strides toward a diverse and free internet that benefits all of its users.

With support from those who pledged to EFF last year, we have: rung alarm bells about a police equipment vendor’s now-thwarted plan to arm drones with tasers in response to school shootings; pushed back against government involvement in content moderation on social media platforms; and developed numerous digital security guides for those seeking and offering abortion resources after the overturning of federal protections for reproductive rights.

Federal employees have a tremendous impact on the shape of our democracy and the future of civil liberties and human rights online. Support EFF today by using our CFC ID #10437 when you make a pledge!

Christian Romero
EFF's Deeplinks Blog: Noteworthy news from around the internet