EFF to Third Circuit: Electronic Device Searches at the Border Require a Warrant


EFF, along with the national ACLU and the ACLU affiliates in Pennsylvania, Delaware, and New Jersey, filed an amicus brief in the U.S. Court of Appeals for the Third Circuit urging the court to require a warrant for border searches of electronic devices, an argument EFF has been making in the courts and Congress for nearly a decade.

The case, U.S. v. Roggio, involves a man who had been under ongoing criminal investigation for illegal exports when he returned to the United States from an international trip via JFK airport. Border officers used the opportunity to bypass the Fourth Amendment’s warrant requirement when they seized several of his electronic devices (laptop, tablet, cell phone, and flash drive) and conducted forensic searches of them. As the district court explained, “investigative agents had a case coordination meeting and border search authority was discussed in early January 2017,” before Mr. Roggio traveled internationally in February 2017.

The district court denied Mr. Roggio’s motion to suppress the emails and other data obtained from the warrantless searches of his devices. He was subsequently convicted of illegally exporting gun manufacturing parts to Iraq (a superseding indictment also charged him with torture, of which he was convicted as well).

Warrantless device searches at the border, and the significant invasion of privacy they represent, are only increasing. In Fiscal Year 2025, U.S. Customs and Border Protection (CBP) conducted 55,318 device searches, both manual (“basic”) and forensic (“advanced”).

While a manual search involves a border officer tapping or mousing around a device, a forensic search involves connecting another device to the traveler’s device and using software to extract and analyze the data, creating a detailed report of the device owner’s activities and communications. Border officers have access to forensic tools that can help them gain access to data on a locked or encrypted device in their physical possession. From public reporting, we know that more recent devices (and ones that have had the latest security updates applied) are more resistant to these types of tools, especially if they are turned off, or turned on but not yet unlocked.

The U.S. Supreme Court has recognized for a century a border search exception to the Fourth Amendment’s warrant requirement, allowing not only warrantless but also often suspicionless “routine” searches of luggage, vehicles, and other items crossing the border.

The primary justification for the border search exception has been to find—in the items being searched—goods smuggled to avoid paying duties (i.e., taxes) and contraband such as drugs, weapons, and other prohibited items, thereby blocking their entry into the country. But a traveler’s privacy interests in their suitcase and its contents are minimal compared to those in all the personal data on the person’s phone or laptop.

In our amicus brief, we argue that the U.S. Supreme Court’s balancing test in Riley v. California (2014) should govern the analysis here. In that case, the Court weighed the government’s interests in warrantless and suspicionless access to cell phone data following an arrest against an arrestee’s privacy interests in the depth and breadth of personal information stored on a cell phone. The Court concluded that the search-incident-to-arrest warrant exception does not apply, and that police need to get a warrant to search an arrestee’s phone.

Travelers’ privacy interests in their cell phones, laptops and other electronic devices are, of course, the same as those considered in Riley. Modern devices, over a decade later, contain even more data that together reveal the most personal aspects of our lives, including political affiliations, religious beliefs and practices, sexual and romantic affinities, financial status, health conditions, and family and professional associations.

In considering the government’s interests in warrantless access to digital data at the border, Riley requires analyzing how closely such searches hew to the original purpose of the warrant exception—preventing the entry of prohibited goods themselves via the items being searched. We argue that the government’s interests are weak in seeking unfettered access to travelers’ electronic devices.

First, physical contraband (like drugs) can’t be found in digital data.

Second, digital contraband (such as child sexual abuse material) can’t be prevented from entering the country through a warrantless search of a device at the border because it’s likely, given the nature of cloud technology and how internet-connected devices work, that identical copies of the files are already in the country on servers accessible via the internet.

Finally, searching devices for evidence of contraband smuggling (for example, the emails here revealing details of the illegal export scheme) or for other evidence for general law enforcement purposes (i.e., investigating non-border-related domestic crimes) is too “untethered” from the original purpose of the border search exception, which is to find prohibited items themselves and not evidence to support a criminal prosecution. Therefore, emails or other data found on a digital device searched without a warrant at the border cannot and should not be used as evidence in court.

If the Third Circuit is not inclined to require a warrant for electronic device searches at the border, we also argue that such a search—whether manual or forensic—should be justified only by reasonable suspicion that the device contains digital contraband and be limited in scope to looking for digital contraband.

This extends the Ninth Circuit’s rule from U.S. v. Cano (2019) in which the court held that only forensic device searches at the border require reasonable suspicion that the device contains digital contraband—that is, some set of already known facts pointing to this possibility—while manual searches may be conducted without suspicion. But the Cano court also held that all searches must be limited in scope to looking for digital contraband (for example, call logs are off limits because they can’t contain digital contraband in the form of photos or files).

We hope that the Third Circuit will rise to the occasion and be the first circuit to fully protect travelers’ Fourth Amendment rights at the border.

Sophia Cope

The Anthropic-DOD Conflict: Privacy Protections Shouldn’t Depend On the Decisions of a Few Powerful People


The U.S. military has officially ended its $200 million contract with AI company Anthropic and has ordered all other military contractors to cease use of their products. Why? Because of a dispute over what the government could and could not use Anthropic’s technology to do. Anthropic had made it clear since it first signed the contract with the Pentagon in 2025 that it did not want its technology to be used for mass surveillance of people in the United States or for fully autonomous weapons systems. Starting in January, that became a problem for the Department of Defense, which ordered Anthropic to give them unrestricted use of the technology. Anthropic refused, and the DoD retaliated.

There is a lot we could learn from this conflict, but the biggest takeaway is this: the state of your privacy is being decided by contract negotiations between giant tech companies and the U.S. government—two entities with spotty track records for caring about your civil liberties. It’s good when CEOs step up and do the right thing—but it's not a sustainable or reliable solution to build our rights on. Given the government’s loose interpretations of the law, its ability to find loopholes to surveil you, and its willingness to do illegal spying, we need serious and proactive legal restrictions to prevent it from gobbling up all the personal data it can acquire and using even routine bureaucratic data for punitive ends.

Imposing and enforcing such restrictions is properly a role for Congress and the courts, not the private sector.

The companies know this. When speaking about the specific risk that AI poses to privacy, the CEO of Anthropic Dario Amodei said in an interview, “I actually do believe it is Congress’s job. If, for example, there are possibilities with domestic mass surveillance—the government buying of bulk data has been produced on Americans, locations, personal information, political affiliations, to build profiles, and it’s not possible to analyze all of that with AI—the fact that that is legal—that seems like the judicial interpretation of the Fourth Amendment has not caught up or the laws passed by Congress have not caught up.” 

The example he cites here is a scarily realistic one—because it’s already happening. Customs and Border Protection has tapped into the online advertising world to buy data on Americans for surveillance purposes. Immigration and Customs Enforcement has been using a tool that maps millions of people’s devices based on purchased cell phone data. The Office of the Director of National Intelligence has proposed a centralized data broker marketplace to make it easier for intelligence agencies to buy commercially available data. Considering the government’s massive contracts with companies that could do this kind of analysis, including Palantir, which does AI-enabled analysis of huge amounts of data, the concerns are incredibly well founded.

But Congress is sadly neglecting its duties. For example, a bill that would close the loophole of the government buying personal information passed the House of Representatives in 2024, but the Senate stopped it. And because Congress did not act, Americans must rely on a tech company CEO to try to protect our privacy—or at least to refuse to help the government violate it.

Privacy in the digital age should be an easy bipartisan issue. Given that it’s wildly popular (71% of American adults are concerned about the government's use of their data, and 70% of adults who have heard of AI have little to no trust in how companies use those products), you would think politicians would be leaping over each other to create the best legislation and companies would be promising us the most high-end privacy-protecting features. Instead, for the time being, we are largely left adrift in a sea of constant surveillance, having to paddle our own life rafts.

EFF has always fought, and always will fight, for real and sustainable protections for our civil liberties, including a world where our privacy does not rest upon the whims of CEOs and backroom deals with the surveillance state.

Matthew Guariglia

EFF to Supreme Court: Shut Down Unconstitutional Geofence Searches

Digital Dragnets Violate Fourth Amendment, Brief Argues

WASHINGTON, D.C. – The Electronic Frontier Foundation (EFF), the American Civil Liberties Union (ACLU), the ACLU of Virginia, and the Center on Privacy & Technology at Georgetown Law filed a brief Monday urging the U.S. Supreme Court to rule that invasive geofence warrants are unconstitutional.

The brief argues that geofence warrants—which compel companies to provide information on every electronic device in a given area during a given time period—are the digital version of the exploratory rummaging that the drafters of the Fourth Amendment specifically intended to prevent. 

Unlike typical warrants, geofence warrants do not name a suspect or even target a specific individual or device. Instead, police cast a digital dragnet, demanding location data on every device in a geographic area during a certain time period, regardless of whether the device owner has any connection to the crime under investigation. These searches simultaneously impact the privacy of millions and turn innocent bystanders into suspects, just for being in the wrong place at the wrong time. 

The Supreme Court agreed earlier this year to hear Chatrie v. United States, in which a 2019 geofence warrant compelled Google to search the accounts of all its hundreds of millions of users to see if any one of them was within a radius police drew around a Northern Virginia crime scene. This area amounted to several football fields in size and encompassed numerous homes, businesses, and a church. In the amicus brief filed Monday, the groups argue that allowing this sweeping power to go unchecked is inconsistent with the basic freedoms of a democratic society.

"This is not traditional police work, but rather the leveraging of new and powerful technology to claim a novel and formidable power over the people," the brief states. "By their very nature, geofence searches turn innocent bystanders into suspects and leverage even purportedly limited searches into larger dragnets, causing intrusions at a scale far beyond those held unconstitutional in the physical world." 

The brief also cautioned the Court not to authorize future geofence warrants based on the facts of the Chatrie case, which reflect how such searches were conducted in 2019. Since July 2025, mass geofence searches of Google users’ location data have not been possible. However, Google is not the only company collecting location data, nor the only way for police to access mass amounts of data on people with no connection to a crime. All suspicionless searches drag a net through vast swaths of information in hopes of identifying previously unknown suspects—ensnaring innocent bystanders along the way. 

"To courts, to lawmakers, and to tech companies themselves, EFF has repeatedly argued that these high-tech efforts to pull suspects out of thin air cannot be constitutional, even with a warrant," said EFF Surveillance Litigation Director Andrew Crocker. "The Supreme Court should find once and for all that geofence searches are just the kind of impermissible general warrants that the Framers of the Constitution so reviled."

For the brief: https://www.eff.org/document/chatrie-v-united-states-eff-supreme-court-amicus-brief

Tags: geofence warrants

Contact: Andrew Crocker, Surveillance Litigation Director, andrew@eff.org
Hudson Hongo

EFF to Court: Don’t Make Embedding Illegal


Who should be directly liable for online infringement – the entity that serves it up or a user who embeds a link to it? For almost two decades, most U.S. courts have held that the former is responsible, applying a rule called the server test. Under the server test, whoever controls the server that hosts a copyrighted work—and therefore determines who has access to what and how—can be directly liable if that content turns out to be infringing. Anyone else who merely links to it can be secondarily liable in some circumstances (for example, if they promote the infringement), but generally isn’t on the hook.

The test just makes sense. In the analog world, a person is free to tell others where they may view a third party’s display of a copyrighted work, without being directly liable for infringement if that display turns out to be unlawful. The server test is the straightforward application of the same principle in the online context. A user that links to a picture, video, or article isn’t in charge of transmitting that content to the world, nor are they in a good position to know whether that content violates copyright. In fact, the user doesn’t even control what’s located on the other end of the link—the person that controls the server can change what’s on it at any time, such as swapping in different images, re-editing a video or rewriting an article.

But a news publisher, Emmerich Newspapers, wants the Fifth Circuit to reject the server test, arguing that the entity that embeds links to the content is responsible for “displaying” it and, therefore, can be directly liable if the content turns out to be infringing. If they are right, the common act of embedding is a legally fraught activity and a trap for the unwary.

The Court should decline, or risk destabilizing fundamental, and useful, online activities. As we explain in an amicus brief filed with several public interest and trade organizations, linking and embedding are not unusual, nefarious, or misleading practices. Rather, the ability to embed external content and code is a crucial design feature of internet architecture, responsible for many of the internet’s most useful functions. Millions of websites—including EFF’s—embed external content or code for everything from selecting fonts and streaming music to providing services like customer support and legal compliance. The server test provides legal certainty for internet users by assigning primary responsibility to the person with the best ability to prevent infringement. Emmerich’s approach, by contrast, invites legal chaos.

Emmerich also claims that altering a URL violates the Digital Millennium Copyright Act’s prohibition on changing or deleting copyright management information. If they are correct, using a link shortener could put users at risk of statutory penalties—an outcome Congress surely did not intend.

Both of these theories would make common internet activities legally risky and undermine copyright’s Constitutional purpose: to promote the creation of and access to knowledge. The district court recognized as much and we hope the appeals court agrees.

Related Cases: Emmerich Newspapers v. Particle Media
Corynne McSherry

National Book Tour for Cindy Cohn’s Memoir, ‘Privacy’s Defender’

MIT Press Publishes EFF Executive Director’s Book As She Prepares to Depart Organization After 25 Years

SAN FRANCISCO – Electronic Frontier Foundation Executive Director Cindy Cohn will launch her memoir, Privacy’s Defender: My Thirty-Year Fight Against Digital Surveillance (MIT Press, March 10), with events in San Francisco and Berkeley before embarking on a national book tour.

In Privacy’s Defender, Cohn weaves her own personal story with her role as a leading legal voice representing the rights and interests of technology users, innovators, whistleblowers, and researchers during the Crypto Wars of the 1990s, battles over NSA’s dragnet internet spying revealed in the 2000s, and the fight against FBI gag orders.  

The book will be Cohn’s swan song at EFF, as she’s stepping down as executive director later this year after 25 years with the organization. And there’s no timelier topic: Everyone should be concerned about privacy right now, as the federal government consolidates and weaponizes data, companies track our every click, and law enforcement from local police to ICE keeps tabs on all of us, everywhere we go, every day.

The Privacy’s Defender tour will begin with a free event at San Francisco’s famed City Lights Bookstore (261 Columbus Ave., San Francisco, CA 94133), moderated by bestselling author and EFF Special Advisor Cory Doctorow, at 7 p.m. PT on Tuesday, March 10.

Then EFF will host a launch party at Berkeley’s Ciel Creative Space (940 Parker St., Berkeley, CA 94710) moderated by bestselling author Annalee Newitz at 7 p.m. PT on Thursday, March 12; tickets cost $12.50-$20. 

The book tour will also include events in Portland, OR; Seattle; Denver; Cambridge, MA; Ann Arbor, MI; and Iowa City, IA. Later events are being planned in New York City and Washington, D.C., as well as a May 13 event at Commonwealth Club World Affairs in San Francisco. 

Proceeds from sales of the book benefit EFF. 

“These beautifully written stories show why the fight for privacy is worth having and reveal all that Cindy Cohn and EFF have done to establish the modern privacy doctrine as the essential core of a free society.” -- Lawrence Lessig, Harvard University; author of How to Steal a Presidential Election 

“Cindy Cohn gives readers a first-person window into some of the pivotal legal disputes of the digital era and reminds us that action and activism are crucial to preserving Americans’ freedom.” -- U.S. Sen. Ron Wyden, D-OR, author of It Takes Chutzpah: How to Fight Fearlessly for Progressive Change 

“Privacy’s Defender is a compelling account of a life well lived and an inspiring call to action for the next generation of civil liberties champions.” -- Edward Snowden, whistleblower; author of Permanent Record 

For the San Francisco event: https://citylights.com/events/cindy-cohn-launch-party-for-privacys-defender/ 

For the Berkeley event: https://www.eff.org/event/privacys-defender-book-launch-party  

For more on Privacy’s Defender and the book tour: https://www.eff.org/Privacys-Defender 

Contact: Karen Gullo, Senior Writer for Free Speech and Privacy, karen@eff.org
Josh Richman

Victory! Tenth Circuit Finds Fourth Amendment Doesn’t Support Broad Search of Protesters’ Devices and Digital Data


In a big win for protesters’ rights, the U.S. Court of Appeals for the Tenth Circuit overturned a lower court’s dismissal of a challenge to sweeping warrants to search a protester’s devices and digital data and a nonprofit’s social media data.

The case, Armendariz v. City of Colorado Springs, arose after a housing protest in 2021, during which Colorado Springs police arrested protesters for obstructing a roadway. After the demonstration, police also obtained warrants to seize and search through the devices and data of Jacqueline Armendariz Unzueta, who they claimed threw a bike at them during the protest. The warrants included a search through all of her photos, videos, emails, text messages, and location data over a two-month period, as well as a time-unlimited search for 26 keywords, including words as broad as “bike,” “assault,” “celebration,” and “right,” that allowed police to comb through years of Armendariz’s private and sensitive data—all supposedly to look for evidence related to the alleged simple assault. Police further obtained a warrant to search the Facebook page of the Chinook Center, the organization that spearheaded the protest, despite the Chinook Center never having been accused of a crime.

The district court dismissed the civil rights lawsuit brought by Armendariz and the Chinook Center, holding that the searches were justified and that, in any case, the officers were entitled to qualified immunity. The plaintiffs, represented by the ACLU of Colorado, appealed. EFF—joined by the Center for Democracy and Technology, the Electronic Privacy Information Center, and the Knight First Amendment Institute at Columbia University—wrote an amicus brief in support of that appeal.

In a 2-1 opinion, the Tenth Circuit reversed the district court’s dismissal of the lawsuit’s Fourth Amendment search and seizure claims. The court painstakingly picked apart each of the three warrants and found them to be overbroad and lacking in particularity as to the scope and duration of the searches. The court further held that in furnishing such facially deficient warrants, the officers violated “clearly established” law and thus were not entitled to qualified immunity. Although the court did not explicitly address the First Amendment concerns raised by the lawsuit, it did note the backdrop against which these searches were carried out, including animus by Colorado Springs police leading up to the housing protest.

It is rare for appellate courts to call into question any search warrants. It’s even rarer for them to deny qualified immunity defenses. The Tenth Circuit’s decision should be celebrated as a big win for protesters and anyone concerned about police immunity for violating people’s constitutional rights. The case is now remanded back to the district court to proceed—and hopefully further vindicate the privacy rights we all have in our devices and digital data.

Saira Hussain

☺️ Trust Us With Your Face | EFFector 38.4


Do you remember the last time you were carded at a bar or restaurant? It was probably such a quick and normal experience that you barely remember it. But have you ever been carded to use the internet? Being required to present your ID to access content online is becoming a growing reality for many. We're explaining the dangers of age verification laws, and the latest in the fight for privacy and free speech online, with our EFFector newsletter.

For over 35 years, EFFector has been your guide to understanding the intersection of technology, civil liberties, and the law. This issue covers Discord's controversial rollout of mandatory age verification, a leaked Meta memo on face-scanning smart glasses, and a Super Bowl surveillance ad that said the quiet part out loud.

Prefer to listen in? In our audio companion, EFF Associate Director of State Affairs Rin Alajaji explains how online age verification hurts free expression for all users. Find the conversation on YouTube or the Internet Archive.

LISTEN TO EFFECTOR

EFFECTOR 38.4 - ☺️ Trust Us With Your Face

Want to stay in the fight for privacy and free speech online? Sign up for EFF's EFFector newsletter for updates, ways to take action, and new merch drops. You can also fuel the fight against mandatory age verification laws when you support EFF today!

Christian Romero

How to Pick Your Password Manager


Phishing and data breaches are a constant on the internet. The single best defense against both is to use a password manager to generate and automatically fill a unique password for every site. While 1Password has recently raised their prices, and researchers have recently reported potential flaws in some implementations, using a password manager is still a critical investment in keeping yourself safe on the internet. There are free options, and even ones built into your operating system or browser. We can help you choose.

Password managers protect you from phishing by memorizing the connection between a password and a website, and, if you use the browser integration, filling each password only on the website it belongs to. They protect you from data breaches by making it feasible to use a long, random, unique password on each site. When bad actors get their hands on a data breach that includes email addresses and password data, they will typically try to crack those passwords, and then attempt to log in on dozens of different websites with the email address/password combinations from the breach. If you use the same password everywhere, this can turn one site’s data breach into a personal disaster, as many of your accounts get compromised at once.

In recent years, the built-in password managers in browsers and operating systems have come a long way but still stumble on cross-platform support. Within the Apple ecosystem, you can use iCloud Keychain, with support for generating passwords, autofill in Safari, and end-to-end encrypted synchronization, so long as you don’t need access to your passwords in Google Chrome or Android (Windows is supported, though). Within the Google ecosystem, you can use Google Password Manager, which also supports password generation, autofill, and sync. Crucially, though, Google Password Manager does not end-to-end encrypt credentials unless you manually enable on-device encryption. Firefox and Microsoft also offer password managers. All of these platform-based options are free, and may already be on your devices. But they tend to lock you into a single-vendor world.

There are also a variety of third-party password managers, some paid, some free, and some open source. Most of these have the advantage of letting you sync your passwords across a wide variety of devices, operating systems, and browsers. Here are four key things to look out for. First, when synchronizing between devices, your passwords should be encrypted end-to-end using a password that only you know (a “master” or “primary” password). Second, support for autofill can reduce the chance that you’ll get phished. Third, security audits performed by third parties can increase confidence that the software really does what it is designed to do. And finally, of course, random generation of unique passwords is a must.
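That last requirement is simpler than it sounds: a generator just draws characters from a large alphabet using a cryptographically secure source of randomness. As a rough sketch of the idea (not any particular product's implementation), using Python's standard secrets module:

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a random password drawn from letters, digits, and
    punctuation, using the cryptographically secure secrets module
    (never the predictable random module)."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Each call yields an independent password -- one unique password per site,
# so a breach at one site reveals nothing about your other accounts.
password_for_bank = generate_password()
password_for_email = generate_password()
```

A 20-character password over this roughly 94-character alphabet has far more entropy than anything a person could memorize, which is exactly why the manager, not you, should remember it.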

Don’t let uncertainty or price increases dissuade you from using a password manager. There’s a good choice for everyone, and using one can make your online life a lot safer. Want more help choosing? Check out our Surveillance Self-Defense guide.

Jacob Hoffman-Andrews

Tech Companies Shouldn’t Be Bullied Into Doing Surveillance


The Secretary of Defense has given an ultimatum to the artificial intelligence company Anthropic in an attempt to bully them into making their technology available to the U.S. military without any restrictions on its use. Anthropic should stick by their principles and refuse to allow their technology to be used in the two ways they have publicly stated they would not support: autonomous weapons systems and surveillance. The Department of Defense has reportedly threatened to label Anthropic a “supply chain risk” in retribution for not lifting restrictions on how their technology is used. According to WIRED, that label would be “a scarlet letter usually reserved for companies that do business with countries scrutinized by federal agencies, like China, which means the Pentagon would not do business with firms using Anthropic’s AI in their defense work.”


In 2025, Anthropic reportedly became the first AI company cleared for use in relation to classified operations and to handle classified information. This current controversy, however, began in January 2026 when, through a partnership with defense contractor Palantir, Anthropic came to suspect their AI had been used during the January 3 attack on Venezuela. That month, Anthropic CEO Dario Amodei wrote to reiterate that surveillance of US persons and autonomous weapons systems were two “bright red lines” not to be crossed, or at least topics that needed to be handled with “extreme care and scrutiny combined with guardrails to prevent abuses.” You can also read Anthropic’s self-proclaimed core views on AI safety here, as well as the constitution of their LLM, Claude, here.

Now, the U.S. government is threatening to terminate the government’s contract with the company if it doesn’t switch gears and voluntarily jump right across those lines.  

Companies, especially technology companies, often fail to live up to their public statements and internal policies related to human rights and civil liberties for all sorts of reasons, including profit. Government pressure shouldn’t be one of those reasons. 

Whatever the U.S. government does to threaten Anthropic, the AI company should know that their corporate customers, the public, and the engineers who make their products are expecting them not to cave. They, and all other technology companies, would do best to refuse to become yet another tool of surveillance.

Matthew Guariglia

EFF’s Policy on LLM-Assisted Contributions to Our Open-Source Projects


We recently introduced a policy governing large language model (LLM) assisted contributions to EFF's open-source projects. At EFF, we strive to produce high quality software tools, rather than simply generating more lines of code in less time. We now explicitly require that contributors understand the code they submit to us and that comments and documentation be authored by a human.

LLMs excel at producing code that looks mostly human-generated, but that code can often have underlying bugs replicated at scale. This makes LLM-generated code exhausting to review, especially for smaller, less-resourced teams. LLMs make it easy for well-intentioned people to submit code that may suffer from hallucination, omission, exaggeration, or misrepresentation.

It is with this in mind that we introduce a new policy on submitting LLM-assisted contributions to our open-source projects. We want to ensure that our maintainers spend their time reviewing well-thought-out submissions. We do not outright ban LLMs, as their use has become so pervasive that a blanket ban is impractical to enforce.

Banning a tool is against our general ethos, but this class of tools comes with an ecosystem of problems. These include code reviews that turn into code refactors for our maintainers when a contributor doesn’t understand the code they submitted, and a potential flood of AI-generated contributions that are only marginally useful or outright unreviewable. By disclosing when you use LLM tools, you help us spend our time wisely.

EFF has described how extending copyright is an impractical solution to the problem of AI-generated content, but it is worth mentioning that these tools raise privacy, censorship, ethical, and climate concerns for many. These issues are largely a continuation of the harmful tech-industry practices that led us to this point. LLM-generated code isn’t written on a clean slate; it is born out of a climate of companies speedrunning their profits over people. We are once again in the “just trust us” territory of Big Tech being obtuse about the power it wields. We are strong advocates of using tools to innovate and come up with new ideas. However, we ask you to come to our projects knowing how to use them safely.

Samantha Baldwin

EFF to Wisconsin Legislature: VPN Bans Are Still a Terrible Idea

2 weeks ago

Update, February 25, 2026: In response to widespread pushback, Wisconsin lawmakers have removed the provision banning VPN services from S.B. 130 / A.B. 105. The bill now awaits Governor Tony Evers’ signature. While the removal of the VPN provision is a positive step, EFF continues to oppose the bill. Advocates and residents across Wisconsin are urged to maintain pressure and encourage Governor Evers to veto the bill.

Wisconsin’s S.B. 130 / A.B. 105 is a spectacularly bad idea.

It’s an age-verification bill that effectively bans VPN access to certain websites for Wisconsinites and censors lawful speech. We wrote about it last November in our blog “Lawmakers Want to Ban VPNs—And They Have No Idea What They're Doing,” but since then, the bill has passed the State Assembly and is scheduled for a vote in the State Senate tomorrow.

In light of this, EFF sent a letter to the entire Wisconsin Legislature urging lawmakers to reject this dangerous bill.

You can read the full letter here.

The short version? This bill both requires invasive age verification for websites that host content lawmakers might deem “sexual” and requires that those sites block any user that connects via a Virtual Private Network (VPN). VPNs are a basic cybersecurity tool used by businesses, universities, journalists, veterans, abuse survivors, and ordinary people who simply don’t want to broadcast their location to every website they visit.

As we lay out in the letter, Wisconsin’s mandate is technically unworkable. Websites cannot reliably determine whether a VPN user is in Wisconsin, a different state, or a different country. So websites are faced with an unfortunate choice: over-block IP addresses commonly associated with commercial VPNs, block all Wisconsin users’ access, or impose nationwide restrictions just to avoid liability.

The bill also creates a privacy nightmare. It pushes websites to collect sensitive personal data (e.g., government IDs, financial information, biometric identifiers) just to access lawful speech. At the same time, it broadens the definition of material deemed “harmful to minors” far beyond the narrow categories courts have historically allowed states to regulate (namely, explicit adult sexual materials), sweeping in material that merely describes sex or depicts human anatomy. This approach invites over-censorship, chills lawful speech, and exposes websites to vague and unpredictable enforcement. That combination—mass data collection plus expansive speech restrictions—is a recipe for data breaches and constitutional overreach.

If you live in Wisconsin, now is the time for you to contact your State Senator and urge them to vote NO on S.B. 130 / A.B. 105. Tell them protecting young people online should not mean undermining cybersecurity, chilling lawful speech, and forcing residents to hand over their IDs just to browse the internet.

As we said last time: Our privacy matters. VPNs matter. And politicians who can't tell the difference between a security tool and a "loophole" shouldn't be writing laws about the internet.

Rindala Alajaji

San Jose Can Protect Immigrants by Ending Flock Surveillance System

2 weeks ago

(This appeared as an op-ed published February 12, 2026 in the San Jose Spotlight, written by Huy Tran (SIREN), Jeffrey Wang (CAIR-SFBA), and Jennifer Pinsof.)

As ICE and other federal agencies continue their assault on civil liberties, local leaders are stepping up to protect their communities. This includes pushing back against automated license plate readers, or ALPRs, which are tools of mass surveillance that can be weaponized against immigrants, political dissidents and other targets.

In recent weeks, Mountain View, Los Altos Hills, Santa Cruz, East Palo Alto and Santa Clara County have begun reconsidering their ALPR programs. San Jose should join them. This dangerous technology poses an unacceptable risk to the safety of immigrants and other vulnerable populations.

ALPRs are marketed to promote public safety. But their utility is debatable and they come with significant drawbacks. They don’t just track “criminals.” They track everyone, all the time. Your vehicle’s movements can reveal where you work, worship and obtain medical care. ALPR vendors like Flock Safety put the location information of millions of drivers into databases, allowing anyone with access to instantly reconstruct the public’s movements.

But “anyone with access” is far broader than just local police. Some California law enforcement agencies have used ALPR networks to run searches related to immigration enforcement. In other situations, purported issues with the system’s software have enabled federal agencies to directly access California ALPR data. This is despite the promises of ALPR vendors and clear legal prohibitions.

Communities are saying enough is enough. Just last week, police in Mountain View decided to turn off all of the city’s Flock cameras, following revelations that federal and other unauthorized agencies had accessed their network. The cameras will remain inactive until the City Council provides further direction.

Other localities have shut off the cameras for good. In January, Los Altos Hills terminated its contract with Flock following concerns about ICE. Santa Cruz severed relations with Flock, citing rising tensions with ICE. Most recently, East Palo Alto and Santa Clara County are reconsidering whether to continue their relationships with Flock, given heightened concern for the safety of immigrant communities.

California law prohibits local police from disclosing ALPR data to out-of-state or federal agencies. But at least 75 California police agencies were sharing these records out-of-state as recently as 2023. Just last year, San Francisco police allowed access to out-of-state agencies and 19 searches were related to ICE.

Even without direct access, ICE can exploit local ALPR systems. One investigation found more than 4,000 cases where police had made searches on behalf of federal law enforcement, including for immigration investigations.

Compounding the risk, law enforcement routinely searches these networks without first obtaining a warrant. In San Jose, police aren’t required to have any suspicion of wrongdoing before searching ALPR databases, which contain a year’s worth of data representing hundreds of millions of records. In a little over a year, San Jose police logged more than 261,000 ALPR searches, or nearly 700 searches a day, all without a warrant.

Two nonprofit organizations, SIREN and CAIR California, represented by Electronic Frontier Foundation and the ACLU of Northern California, are currently suing to stop San Jose’s warrantless searches of ALPR data. But this is only the first step. A better solution is to simply turn these cameras off.

San Jose cannot afford delay. Each day these cameras remain active, they collect sensitive location data that can be misused to target immigrant families and violate fundamental freedoms. It is a risk materializing across California. City leaders must act now to shut down ALPR systems and make clear that public safety will not come at the expense of privacy, human dignity or community trust.

Related Cases: SIREN and CAIR-CA v. San Jose
Jennifer Pinsof

New Report Helps Journalists Dig Deeper Into Police Surveillance Technology

2 weeks ago
Report from EFF, Center for Just Journalism, and IPVM Helps Cut Through Sales Hype

SAN FRANCISCO — A new report released today offers journalists tips on cutting through the sales hype around police surveillance technology and reporting accurately on costs, benefits, privacy, and accountability as these invasive and often ineffective tools come to communities across the nation.

The “Selling Safety” report is a joint project of the Electronic Frontier Foundation (EFF), the Center for Just Journalism (CJJ), and IPVM.

Police technology is often sold as a silver bullet: a way to modernize departments, make communities safer, and eliminate human bias from policing with algorithmic objectivity. Behind the slick marketing is a sprawling, under-scrutinized industry that relies on manufacturing the appearance of effectiveness, not measuring it. The cost of blindly deferring to advertising can be high in tax dollars, privacy, and civil liberties. 

“Selling Safety” helps journalists see through the spin. It breaks down how policing technology companies market their tools, and how those sales claims — which are often misleading — get recycled into media coverage. It offers tools for asking better questions, understanding incentives, and finding local accountability stories. 

“The industry that provides technology to law enforcement is one of the most unregulated, unexamined, and consequential in the United States,” said EFF Senior Policy Analyst Matthew Guariglia. “Most Americans would rightfully be horrified to know how many decisions about policing are made: not by public employees, but by multi-billion-dollar surveillance tech companies who have an insatiable profit motive to market their technology as the silver bullet that will stop crime. Lawmakers often are too eager to seem ‘tough on crime’ and journalists too often see an easy story in publishing law enforcement press releases about new technology. This report offers a glimpse into how the police-tech sausage gets made so reporters and lawmakers can recognize the tactics of glossy marketing pitches, manufactured effectiveness numbers, and chumminess between companies and police.” 

“Surveillance and other police technologies are spreading faster than public understanding or oversight, leaving journalists to do critical accountability work in real time. We hope this report helps make that work easier,” said Hannah Riley Fernandez, CJJ’s Director of Programming. 

"The surveillance technology industry has a documented pattern of making unsubstantiated claims about technology,” said Conor Healy, IPVM's Director of Government Research. “Marketing is not a substitute for evidence. Journalists who go beyond press releases to critically examine vendor claims will often find solutions are not as magical as they may seem. In doing so, they perform essential accountability work that protects both taxpayer dollars and civil liberties." 

EFF also maintains resources for understanding various police technologies and mapping those technologies in communities across the United States. 

For the “Selling Safety” report:  https://www.eff.org/document/selling-safety-journalists-guide-covering-police-technology

For EFF’s Street-Level Surveillance hub: https://sls.eff.org/ 

For EFF’s Atlas of Surveillance: https://www.atlasofsurveillance.org/ 

Contact: Beryl Lipton, Senior Investigative Researcher, beryl@eff.org
Josh Richman

Seven Billion Reasons for Facebook to Abandon its Face Recognition Plans

2 weeks 4 days ago

The New York Times reported that Meta is considering adding face recognition technology to its smart glasses. According to an internal Meta document, the company may launch the product “during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns.” 

This is a bad idea that Meta should abandon. If adopted and released to the public, it would violate the privacy rights of millions of people and cost the company billions of dollars in legal battles.   

Your biometric data, such as your faceprint, are some of the most sensitive pieces of data that a company can collect. Associated risks include mass surveillance, data breach, and discrimination. Adding this technology to glasses on the street also raises safety concerns.  

 This kind of face recognition feature would require the company to collect a faceprint from every person who steps into view of the camera-equipped glasses to find a match. Meta cannot possibly obtain consent from everyone—especially bystanders who are not Meta users.  

Dozens of state laws consider biometric information to be sensitive and require companies to implement strict protections to collect and process it, including affirmative consent.  

Meta Should Know the Privacy and Legal Risks  

Meta should already know the privacy risks of face recognition technology, after abandoning related technology and paying nearly $7 billion in settlements a few years ago.  

In November 2021, Meta announced that it would shut down its tool that scanned the face of every person in photos posted on the platform. At the time, Meta also announced that it would delete more than a billion face templates. 

Two years before that in July 2019, Facebook settled a sweeping privacy investigation with the Federal Trade Commission for $5 billion. This included allegations that Facebook’s face recognition settings were confusing and deceptive. At the time, the company agreed to obtain consent before running face recognition on users in the future.   

In March 2021, the company agreed to a $650 million class action settlement brought by Illinois consumers under the state's strong biometric privacy law. 

And most recently, in July 2024, Meta agreed to pay $1.4 billion to settle claims that its defunct face recognition system violated Texas law.  

Privacy Advocates Will Continue to Focus Our Resources on Meta

 Meta’s conclusion that it can avoid scrutiny by releasing a privacy invasive product during a time of political crisis is craven and morally bankrupt. It is also dead wrong.  

Now more than ever, people have seen the real-world risk of invasive technology. The public has recoiled at masked immigration agents roving cities with phones equipped with a face recognition app called Mobile Fortify. And Amazon Ring just experienced a huge backlash when people realized that a feature marketed for finding lost dogs could one day be repurposed for mass biometric surveillance.  

The public will continue to resist these privacy invasive features. And EFF, other civil liberties groups, and plaintiffs’ attorneys will be here to help. We urge privacy regulators and attorneys general to step up to investigate as well.  

Mario Trujillo

Discord Voluntarily Pushes Mandatory Age Verification Despite Recent Data Breach

2 weeks 5 days ago

Update February 25, 2026: Discord announced yesterday that it will delay the global rollout of its age verification system to the "second half of 2026" instead of March. The company also announced stricter requirements for partners offering facial age estimation, including that the process must run entirely on-device—Discord said one of its initial partners, Persona, "did not meet that bar."

Discord has begun rolling out mandatory age verification and the internet is, understandably, freaking out.

At EFF, we’ve been raising the alarm about age verification mandates for years. In December, we launched our Age Verification Resource Hub to push back against laws and platform policies that require users to hand over sensitive personal information just to access basic online services. At the time, age gates were largely confined to jurisdictions where they were mandated by law. Now they’re appearing on platforms and in jurisdictions where they’re not required.

Beginning in early March, users who are either (a) estimated by Discord to be under 18, or (b) about whom Discord doesn’t have enough information, may find themselves locked into a “teen-appropriate experience.” That means content filters, age gates, restrictions on direct messages and friend requests, and the inability to speak in “Stage channels,” the large-audience audio spaces that power many community events. Discord says most adults should be sorted automatically through a new “age inference” system that relies on account tenure, device and activity data, and broader platform patterns. Those whose age can’t be inferred for lack of information, or who are inferred not to be adults, will be asked to scan their face or upload a government ID through a third-party vendor if they want to avoid the default teen-account restrictions.

We’ve written extensively about why age verification mandates are a censorship and surveillance nightmare. Discord’s shift only reinforces those concerns. Here’s why:

The 2025 Breach and What's Changed Since

Discord literally won our 2025 “We Still Told You So” Breachies Award. Last year, attackers accessed roughly 70,000 users’ government IDs, selfies, and other sensitive information after compromising Discord’s third-party customer support system.

To be clear: Discord is no longer using that system, which involved routing ID uploads through its general ticketing system for age verification. It now uses dedicated age verification vendors (k-ID globally and Persona for some users in the United Kingdom).

That’s an improvement. But it doesn’t eliminate the underlying potential for data breaches and other harms. Discord says that it will delete records of any user-uploaded government IDs, and that any facial scans will never leave users’ devices. But platforms are closed-source, audits are limited, and history shows that data (especially this ultra-valuable identity data) will leak—whether through hacks, misconfigurations, or retention mistakes. Users are being asked to simply trust that this time will be different.

Age Verification and Anonymous Speech

For decades, we’ve taught young people a simple rule: don’t share personal information with strangers online.

Age verification complicates that advice. Now, some Discord users will be asked to submit a government ID or facial scan to access certain features if Discord’s age-inference technology gets their age wrong. Discord has said on its blog that it will not associate a user’s ID with their account (using that information only to confirm their age) and that identifying documents won’t be retained. We take those commitments seriously. However, users have little independent visibility into how those safeguards operate in practice or whether they are sufficient to prevent identification.

Even if Discord can technically separate IDs from accounts, many users are understandably skeptical, especially after the platform’s recent breach involving age-verification data. For people who rely on pseudonymity, being required to upload a face scan or government ID at all can feel like crossing a line.

Many people rely on anonymity to speak freely. LGBTQ+ youth, survivors of abuse, political dissidents, and countless others use aliases to explore identity, find support, and build community safely. When identity checks become a condition of participation, many users will simply opt out. The chilling effect isn’t only about whether an ID is permanently linked to an account; it’s about whether users trust the system enough to participate in the first place. When you’re worried that what you say can be traced back to your government ID, you speak differently—or not at all.

No one should have to choose between accessing online communities and protecting their privacy.

Age Verification Systems Are Not Ready for Prime Time

Discord says it is trying to address privacy concerns by using device-based facial age estimation and separating government IDs from user accounts, retaining only a user’s age rather than their identity documents. This is meant to reduce the risks associated with collecting and retaining this sensitive data. However, even when privacy safeguards are in place, we are faced with another problem: there is no current technology that is fully privacy-protective, universally accessible, and consistently accurate. Facial age estimation tools are notoriously unreliable, particularly for people of color, trans and nonbinary people, and people with disabilities. Stories of people bypassing these facial age estimation tools have proliferated across the internet. And when systems get it wrong, users may be forced into appeals processes or required to submit more documentation, such as government-issued IDs—which would exclude those whose appearance doesn’t match their documents, as well as the millions of people around the world who don’t have government-issued identity documents at all.

Even newer approaches (things like age inference, behavior tracking, financial database checks, digital ID systems) expand the web of data collection, and carry their own tradeoffs around access and error. As we mentioned earlier, no current approach is simultaneously privacy-protective, universally accessible, and consistently accurate across all demographics. 

That’s the challenge: the technology itself is not fit for the sweeping role platforms are asking it to play.


The Aftermath

Discord reports over 200 million monthly active users, and is one of the largest platforms used by gamers to chat. The video game industry is larger than movies, TV, and music combined, and Discord represents an almost-default option for gamers looking to host communities.

Many communities, including open-source projects, sports teams, fandoms, friend groups, and families, use Discord to stay connected. If communities or individuals are wrongly flagged as minors, or asked to complete the age verification process, they may face a difficult choice: submit to facial scans or ID checks, or accept a more restricted “teen” experience. For those who decline to go through the process, the result can mean reduced functionality, limited communication tools, and the chilling effects that follow. 

Most importantly, Discord did not have to “comply in advance” by requiring age verification for all users, whether or not they live in a jurisdiction that mandates it. Other social media platforms and their trade groups have fought back against more than a dozen age verification laws in the U.S., and Reddit has now taken the legal fight internationally. For a platform with as much market power as Discord, voluntarily imposing age verification is unacceptable. 

So You’ve Hit an Age Gate. Now What?

Discord should reconsider whether expanding identity checks is worth the harm to its communities. But in the meantime, many users are facing age checks today.

That’s why we created our guide, “So You’ve Hit an Age Gate. Now What?” It walks through practical steps to minimize risk, such as:

  • Submit the least amount of sensitive data possible.
  • Ask: What data is collected? Who can access it? How long is it retained?
  • Look for evidence of independent, security-focused audits.
  • Be cautious about background details in selfies or ID photos.

There is unfortunately no perfect option, only tradeoffs. And every user will have their own unique set of safety concerns to consider. Amidst this confusion, our goal is to help keep you informed, so you can make the best choices for you and your community.

In light of the harms imposed by age-verification systems, EFF encourages all services to stop adopting these systems when they are not mandated by law. And lawmakers across the world that are considering bills that would make Discord’s approach the norm for every platform should watch this backlash and similarly move away from the idea.

If you care about privacy, free expression, and the right to participate online without handing over your identity, now is the time to speak up.

Join us in the fight.

Rindala Alajaji

🗣 Homeland Security Wants Names | EFFector 38.3

2 weeks 6 days ago

Criticize the government online? The Department of Homeland Security (DHS) might ask Google to cough up your name. By abusing an investigative tool called "administrative subpoenas," DHS has been demanding that tech companies hand over users' names, locations, and more. We're explaining how companies can stand up for users—and covering the latest news in the fight for privacy and free speech online—with our EFFector newsletter.

For over 35 years, EFFector has been your guide to understanding the intersection of technology, civil liberties, and the law. This latest issue tracks our campaign to expand end-to-end encryption protections, a bill to stop government face scans from Immigration and Customs Enforcement (ICE) and others, and why Section 230 remains the best available system to protect everyone’s ability to speak online.


Prefer to listen in? In our audio companion, EFF Senior Staff Attorney F. Mario Trujillo explains how Homeland Security's lawless subpoenas differ from court orders. Find the conversation on YouTube or the Internet Archive.

LISTEN TO EFFECTOR

EFFECTOR 38.3 - 🗣 Homeland Security Wants Names

Want to stay in the fight for privacy and free speech online? Sign up for EFF's EFFector newsletter for updates, ways to take action, and new merch drops. You can also fuel the fight against unlawful government surveillance when you support EFF today!

Christian Romero

“Free” Surveillance Tech Still Comes at a High and Dangerous Cost

2 weeks 6 days ago

Surveillance technology vendors, federal agencies, and wealthy private donors have long helped provide local law enforcement “free” access to surveillance equipment that bypasses local oversight. The result is predictable: serious accountability gaps and data pipelines to other entities, including Immigration and Customs Enforcement (ICE), that expose millions of people to harm.

The cost of “free” surveillance tools — like automated license plate readers (ALPRs), networked cameras, face recognition, drones, and data aggregation and analysis platforms — is measured not in tax dollars, but in the erosion of civil liberties. 

The cost of “free” surveillance tools is measured not in tax dollars, but in the erosion of civil liberties.

The collection and sharing of our data quietly generates detailed records of people’s movements and associations that can be exposed, hacked, or repurposed without their knowledge or consent. Those records weaken sanctuary and First Amendment protections while facilitating the targeting of vulnerable people.   

Cities can and should use their power to reject federal grants, vendor trials, donations from wealthy individuals, or participation in partnerships that facilitate surveillance and experimentation with spy tech. 

If these projects are greenlit, oversight is imperative. Mechanisms like public hearings, competitive bidding, public records transparency, and city council supervision help ensure these acquisitions include basic safeguards — like use policies, audits, and consequences for misuse — to protect the public from abuse and from creeping contracts that grow into whole suites of products.

Clear policies and oversight mechanisms must be in place before using any surveillance tools, free or not, and communities and their elected officials must be at the center of every decision about whether to bring these tools in at all.

Here are some of the most common methods “free” surveillance tech makes its way into communities.

Trials and Pilots

Police departments are regularly offered free access to surveillance tools and software through trials and pilot programs that often aren’t accompanied by appropriate use policies. In many jurisdictions, trials do not trigger the same requirements to go before decision-makers outside the police department. This means the public may have no idea that a pilot program for surveillance technology is happening in their city. 

The public may have no idea that a pilot program for surveillance technology is happening in their city.  

In Denver, Colorado, the police department is running trials of possible unmanned aerial vehicles (UAVs) for a drone-as-first-responder (DFR) program from two competing drone vendors: Flock Safety Aerodome drones (through August 2026) and drones from the company Skydio, partnering with Axon, the multi-billion dollar police technology company behind tools like Tasers and AI-generated police reports. Drones create unique issues given their vantage for capturing private property and unsuspecting civilians, as well as their capacity to make other technologies, like ALPRs, airborne. 

Functional, Even Without Funding 

We’ve seen cities decide not to fund a tool, or run out of funding for it, only to have a company continue providing it in the hope that money will turn up. This happened in Fall River, Massachusetts, where the police department decided not to fund ShotSpotter’s $90,000 annual cost and its frequent false alarms, but continued using the system when the company provided free access. 

 Police technology companies are developing more features and subscription-based models, so what’s “free” today frequently results in taxpayers footing the bill later.

In May 2025, Denver's city council unanimously rejected a $666,000 contract extension for Flock Safety ALPR cameras after weeks of public outcry over mass surveillance data sharing with federal immigration enforcement. But Mayor Mike Johnston’s office allowed the cameras to keep running through a “task force” review, effectively extending the program even after the contract was voted down. In response, the Denver Taskforce to Reimagine Policing and Public Safety and Transforming Our Communities Alliance launched a grassroots campaign demanding the city “turn Flock cameras off now,” a reminder that when surveillance starts as a pilot or time‑limited contract, communities often have to fight not just to block renewals but to shut the systems off.


Gifts from Police Foundations and Wealthy Donors

Police foundations and the wealthy have pushed surveillance-driven agendas in their local communities by donating equipment and making large monetary gifts, another means of acquiring these tools without public oversight or buy-in.

In Atlanta, the Atlanta Police Foundation (APF) attempted to use its position as a private entity to circumvent transparency. Following a court challenge from the Atlanta Community Press Collective and Lucy Parsons Labs, a Georgia court determined that the APF must comply with public records laws related to some of its actions and purchases on behalf of law enforcement.

In San Francisco, billionaire Chris Larsen has financially supported a supercharging of the city’s surveillance infrastructure, donating $9.4 million to fund the San Francisco Police Department’s (SFPD) Real-Time Investigation Center, where a menu of surveillance technologies and data come together to surveil the city’s residents. This move comes after the billionaire backed a ballot measure, which passed in March 2025, eroding the city’s surveillance technology law and allowing the SFPD free rein to use new surveillance technologies for a full year without oversight.

Free Tech for Federal Data Pipelines

Federal grants and Department of Homeland Security funding are another way surveillance technology appears free to municipalities, only to lock them into long‑term data‑sharing arrangements and recurring costs.

Through the Homeland Security Grant Program, which includes the State Homeland Security Program (SHSP) and the Urban Area Security Initiative (UASI), and Department of Justice programs like Byrne JAG, the federal government reimburses states and cities for "homeland security" equipment and software, including law‑enforcement surveillance tools, analytics platforms, and real‑time crime centers. Grant guidance and vendor marketing materials make clear that these funds can be used for automated license plate readers, integrated video surveillance and analytics systems, and centralized command‑center software—in other words, purchases framed as counterterrorism investments but deployed in everyday policing.

Vendors have learned to design products around this federal money, pitching ALPR networks, camera systems, and analytic platforms as "grant-ready" solutions that can be acquired with little or no upfront local cost. Motorola Solutions, for example, advertises how SHSP and UASI dollars can be used for "law enforcement surveillance equipment" and "video surveillance, warning, and access control" systems. Flock Safety, partnering with Lexipol, a company that writes use policies for law enforcement, offers a "License Plate Readers Grant Assistance Program" that helps police departments identify federal and state grants and tailor their applications to fund ALPR projects. 

Grant assistance programs let police chiefs fast‑track new surveillance: the paperwork is outsourced, the grant eats the upfront cost, and even when there is a formal paper trail, the practical checks from residents, councils, and procurement rules often get watered down or bypassed.

On paper, these systems arrive “for free” through a federal grant; in practice, they lock cities into recurring software, subscription, and data‑hosting fees that quietly turn into permanent budget lines—and a lasting surveillance infrastructure—as soon as police and prosecutors start to rely on them. In Santa Cruz, California, the police department explicitly sought to use a DHS-funded SHSP grant to pay for a new citywide network of Flock ALPR cameras at the city's entrances and exits, with local funds covering additional cameras. In Sumner, Washington, a $50,000 grant was used to cover the entire first year of a Flock system — including installation and maintenance — after which the city is on the hook for roughly $39,000 every year in ongoing fees. The free grant money opens the door, but local governments are left with years of financial, political, and permanent surveillance entanglements they never fully vetted.

The most dangerous cost of this "free" funding is not just budgetary; it is the way it ties local systems into federal data pipelines. Since 9/11, DHS has used these grant streams to build a nationwide network of roughly 80 state and regional fusion centers that integrate and share data from federal, state, local, tribal, and private partners. Research shows that state fusion centers rely heavily on the DHS Homeland Security Grant Program (especially SHSP and UASI) to "mature their capabilities," with some centers reporting that 100 percent of their annual expenditures are covered by these grants.

Civil rights investigations have documented how this funding architecture creates a backdoor channel for ICE and other federal agencies to access local surveillance data for their own purposes. A recent report by the Surveillance Technology Oversight Project (S.T.O.P.) describes ICE agents using a Philadelphia‑area fusion center to query the city’s ALPR network to track undocumented drivers in a self‑described sanctuary city.

Ultimately, federal grants follow the same script as trials and foundation gifts: what looks “free” ends up costing communities their data, their sanctuary protections, and their power over how local surveillance is used.

Protecting Yourself Against “Free” Technology

The most important protection against "free" surveillance technology is to reject it outright. Cities do not have to accept federal grants, vendor trials, or philanthropic donations. Saying no to "free" tech is not just a policy choice; it is a political power that local governments possess and can exercise. Communities and their elected officials can and should refuse surveillance systems that arrive through federal grants, vendor pilots, or private donations, regardless of how attractive the initial price tag appears. 

For those cities that have already accepted surveillance technology, the imperative is equally clear: shut it down. When a community has rejected use of a spying tool, the capabilities, equipment, and data collected from that tool should be shut off immediately. Full stop.

And for any surveillance technology that remains in operation, even temporarily, there must be clear rules: when and how equipment is used, how that data is retained and shared, who owns data and how companies can access and use it, transparency requirements, and consequences for any misuse and abuse. 

“Free” surveillance technology is never free. Someone profits or gains power from it. Police technology vendors, federal agencies, and wealthy donors do not offer these systems out of generosity; they offer them because surveillance serves their interests, not ours. That is the real cost of “free” surveillance.

Beryl Lipton

Open Letter to Tech Companies: Protect Your Users From Lawless DHS Subpoenas

3 weeks ago

We are calling on technology companies like Meta and Google to stand up for their users by resisting the Department of Homeland Security's (DHS) lawless administrative subpoenas for user data. 

In the past year, DHS has consistently targeted people engaged in First Amendment activity. Among other things, the agency has issued subpoenas to technology companies to unmask or locate people who have documented ICE's activities in their community, criticized the government, or attended protests.   

These subpoenas are unlawful, and the government knows it. When a handful of users challenged a few of them in court with the help of ACLU affiliates in Northern California and Pennsylvania, DHS withdrew them rather than waiting for a decision. 

These subpoenas are unlawful, and the government knows it.

But it is difficult for the average user to fight back on their own. Quashing a subpoena is a fast-moving process that requires lawyers and resources. Not everyone can afford a lawyer on a moment’s notice, and non-profits and pro-bono attorneys have already been stretched to near capacity during the Trump administration.  

 That is why we, joined by the ACLU of Northern California, have asked several large tech platforms to do more to protect their users, including: 

  1.  Insist on court intervention and an order before complying with a DHS subpoena, because the agency has already proved that its legal process is often unlawful and unconstitutional;  
  2. Give users as much notice as possible when they are the target of a subpoena, so the user can seek help. While many companies have already made this promise, there are high-profile examples of it not happening—ultimately stripping users of their day in court;  
  3. Resist gag orders that would prevent companies from notifying their users that they are a target of a subpoena. 

We sent the letter to Amazon, Apple, Discord, Google, Meta, Microsoft, Reddit, Snap, TikTok, and X.

Recipients are not legally compelled to comply with administrative subpoenas absent a court order 

An administrative subpoena is an investigative tool available to federal agencies like DHS. These subpoenas are often sent to technology companies to obtain user data. They cannot be used to obtain the content of communications, but they have been used to try to obtain basic subscriber information like name, address, IP address, length of service, and session times.

Unlike a search warrant, an administrative subpoena is not approved by a judge. If a technology company refuses to comply, an agency’s only recourse is to drop it or go to court and try to convince a judge that the request is lawful. That is what we are asking companies to do—simply require court intervention and not obey in advance. 

It is unclear how many administrative subpoenas DHS has issued in the past year. Subpoenas can come from many places—including civil courts, grand juries, criminal trials, and administrative agencies like DHS. Altogether, Google received 28,622 and Meta received 14,520 subpoenas in the first half of 2025, according to their transparency reports. The numbers are not broken out by type.   

DHS is abusing its authority to issue subpoenas 

In the past year, DHS has used these subpoenas to target protected speech. The following are just a few of the known examples. 

On April 1, 2025, DHS sent a subpoena to Google in an attempt to locate a Cornell PhD student in the United States on a student visa. The student was likely targeted because of his brief attendance at a protest the year before. Google complied with the subpoena without giving the student an opportunity to challenge it. While Google promises to give users prior notice, it sometimes breaks that promise to avoid delay. This must stop.   

In September 2025, DHS sent a subpoena and summons to Meta to try to unmask anonymous users behind Instagram accounts that tracked ICE activity in communities in California and Pennsylvania. The users, with the help of the ACLU and its state affiliates, challenged the subpoenas in court, and DHS withdrew them before a court could rule. In the Pennsylvania case, DHS tried to use legal authority that its own inspector general had already criticized in a lengthy report.

In October 2025, DHS sent Google a subpoena demanding information about a retiree who criticized the agency’s policies. The retiree had sent an email asking the agency to use common sense and decency in a high-profile asylum case. In a shocking turn, federal agents later appeared on that person’s doorstep. The ACLU is currently challenging the subpoena.  

Read the full letter here

Mario Trujillo

No One, Including Our Furry Friends, Will Be Safer in Ring's Surveillance Nightmare

3 weeks ago

Amazon Ring’s Super Bowl ad offered a vision of our streets that should leave everyone unsettled about the company’s plans to dismantle our privacy in public.

In the ad, disguised as a heartfelt effort to reunite the lost dogs of the country with their innocent owners, the company previewed future surveillance of our streets: a world where biometric identification could be unleashed from consumer devices to identify, track, and locate anything — human, pet, and otherwise.

The ad for Ring’s “Search Party” feature highlighted the doorbell camera’s ability to scan footage from Ring devices across a neighborhood, using AI analysis to identify potential canine matches among the many personal devices in the network.

Amazon Ring already integrates biometric identification, like face recognition, into its products via features like "Familiar Faces,” which scans the faces of those in sight of the camera and matches them against a list of pre-saved, pre-approved faces. It doesn’t take much to imagine Ring eventually combining these two features: face recognition and neighborhood searches.

Ring’s “Familiar Faces” feature could already run afoul of biometric privacy laws in some states, which require explicit, informed consent from individuals before a company can just run face recognition on someone. Unfortunately, not all states have similar privacy protections for their residents. 

Ring has a history of privacy violations, surveillance of innocent people and protesters, and close collaboration with law enforcement; EFF has spent years reporting on its many privacy problems.

The cameras, which many people buy and install to identify potential porch pirates or get a look at anyone that might be on their doorstep, feature microphones that have been found to capture audio from the street. In 2023, Ring settled with the Federal Trade Commission over the extensive access it gave employees to personal customer footage. At that time, just three years ago, the FTC wrote: “As a result of this dangerously overbroad access and lax attitude toward privacy and security, employees and third-party contractors were able to view, download, and transfer customers’ sensitive video data for their own purposes.”

The company has made law enforcement access a regular part of its business. As early as 2016, it was courting police departments through free giveaways. It gave law enforcement warrantless access to people’s footage, a practice it claimed to cut off in 2024. Not long after, though, the company established partnerships with major police technology companies Axon and Flock Safety to integrate Ring cameras into police intelligence networks. These partnerships allow law enforcement to once again request Ring footage directly from users, supplementing the already wide-ranging apparatus of data and surveillance feeds available to law enforcement.

The Search Party feature is turned on by default, meaning that Ring owners need to go into the controls to change it. According to Amazon Ring’s instructions, this is how to disable it: 

  1. Open the Ring app to the main dashboard.
  2. Tap the menu (☰).
  3. Tap Control Center.
  4. Select Search Party.
  5. Tap Disable Search for Lost Pets, or tap the blue Pet icon next to "Search for Lost Pets" to turn the feature off for each camera. (The same screen also offers a "Disable Natural Hazards (Fire Watch)" option, with a blue Flame icon to toggle that feature on or off for each camera.)

The addition of AI-driven biometric identification is the latest entry in the company’s history of profiting off public safety fears while disregarding individual privacy, and it turbocharges the dangers of letting this trajectory continue. People need to reject this kind of disingenuous framing and recognize where it leads: a sweeping expansion of the surveillance state designed to catch us all in its net.

Beryl Lipton

Coalition Urges California to Revoke Permits for Federal License Plate Reader Surveillance

3 weeks ago
Group led by EFF and Imperial Valley Equity & Justice Asks Gov. Newsom and Caltrans Director to Act Immediately

SAN FRANCISCO – California must revoke permits allowing federal agencies such as Customs and Border Protection (CBP) and the Drug Enforcement Administration (DEA) to place automated license plate readers along border highways, a coalition led by the Electronic Frontier Foundation (EFF) and Imperial Valley Equity & Justice (IVEJ) demanded today. 

In a letter to Gov. Gavin Newsom and California Department of Transportation (Caltrans) Director Dina El-Tawansy, the coalition notes that this invasive mass surveillance – automated license plate readers (ALPRs) often disguised as traffic barrels – puts both residents and migrants at risk of harassment, abuse, detention, and deportation.  

“With USBP (U.S. Border Patrol) Chief Greg Bovino reported to be returning to El Centro sector, after leading a brutal campaign against immigrants and U.S. citizens alike in Los Angeles, Chicago, and Minneapolis, it is urgent that your administration take action,” the letter says. “Caltrans must revoke any permits issued to USBP, CBP, and DEA for these surveillance devices and effectuate their removal.” 

Coalition members signing the letter include the California Nurses Association; American Federation of Teachers Guild, Local 1931; ACLU California Action; Fight for the Future; Electronic Privacy Information Center; Just Futures Law; Jobs to Move America; Project on Government Oversight; American Friends Service Committee U.S./Mexico Border Program; Survivors of Torture, International; Partnership for the Advancement of New Americans; Border Angels; Southern California Immigration Project; Trust SD Coalition; Alliance San Diego; San Diego Immigrant Rights Consortium; Showing Up for Racial Justice San Diego; San Diego Privacy; Oakland Privacy; Japanese American Citizens League and its Florin-Sacramento Valley, San Francisco, South Bay, Berkeley, Torrance, and Greater Pasadena chapters; Democratic Socialists of America- San Diego; Center for Human Rights and Privacy; The Becoming Project Inc.; Imperial Valley for Palestine; Imperial Liberation Collaborative; Comité de Acción del Valle Inc.; CBFD Indivisible; South Bay People Power; and queercasa.

California law prevents state and local agencies from sharing ALPR data with out-of-state agencies, including federal agencies involved in immigration enforcement. However, USBP, CBP, and DEA are bypassing these regulations by installing their own ALPRs. 

EFF researchers have released a map of more than 40 of these covert ALPRs along highways in San Diego and Imperial counties that are believed to belong to federal agencies engaged in immigration enforcement.  In response to a June 2025 public records request, Caltrans has released several documents showing CBP and DEA have applied for permits for ALPRs, with more expected as Caltrans continues to locate records responsive to the request. 

“California must not allow Border Patrol and other federal agencies to use surveillance on our roadways to unleash violence and intimidation on San Diego and Imperial Valley residents,” the letter says. “We ask that your administration investigate and release the relevant permits, revoke them, and initiate the removal of these devices. No further permits for ALPRs or tactical checkpoints should be approved for USBP, CBP, or DEA.” 

"The State of California must not allow Border Patrol to exploit our public roads and bypass state law," said Sergio Ojeda, IVEJ’s Lead Community Organizer for Racial and Economic Justice Programs.  "It's time to stop federal agencies from installing hidden cameras that they use to track, target and harass our communities for travelling between Imperial Valley, San Diego and Yuma." 

For the letter: https://www.eff.org/document/coalition-letter-re-covert-alprs

For the map of the covert ALPRs: https://www.eff.org/covertALPRmap

For high-res images of two of the covert ALPRs: https://www.eff.org/node/111725

For more about ALPRs: https://sls.eff.org/technologies/automated-license-plate-readers-alprs 

 

Contact: Dave Maass, Director of Investigations, dm@eff.org
Josh Richman