The Year States Chose Surveillance Over Safety: 2025 in Review

8 hours 41 minutes ago

2025 was the year age verification went from a fringe policy experiment to a sweeping reality across the United States. Half of the U.S. now mandates age verification for accessing adult content or social media platforms. Nine states saw their laws take effect this year alone, with more coming in 2026.

The good news is that courts have blocked many of the laws seeking to impose age-verification gates on social media, largely for the same reasons that EFF opposes these efforts.  Age-verification measures censor the internet and burden access to online speech. Though age-verification mandates are often touted as "online safety" measures for young people, the laws actually do more harm than good. They undermine the fundamental speech rights of adults and young people alike, create new barriers to internet access, and put at risk all internet users' privacy, anonymity, and security.

If you're feeling overwhelmed by this onslaught of laws and the invasive technologies behind them, you're not alone. That's why we've launched EFF's Age Verification Resource Hub at EFF.org/Age—a one-stop shop to understand what these laws actually do, what's at stake, why EFF opposes all forms of age verification, how to protect yourself, and how to join the fight for a free, open, private, and safe internet. Moreover, there is hope. Although the Supreme Court ruled that imposing age-verification gates to access adult content does not violate the First Amendment on its face, the legal fight continues regarding whether those laws are constitutional. 

As we built the hub throughout 2025, we also fought state mandates in legislatures, courts, and regulatory hearings. Here's a summary of what happened this year.

The Laws That Took Effect (And Immediately Backfired)

Nine states’ age verification laws for accessing adult content went into effect in 2025.

Predictably, users didn’t stop accessing adult content after the laws went into effect; they just changed how they got to it. As we’ve said elsewhere: the internet always routes around censorship.

In fact, research from New York University’s Center for Social Media and Politics and the public policy nonprofit the Phoenix Center confirms what we’ve warned about from the beginning: age verification laws don’t work. Their research found:

  • Searches for platforms that have blocked access to residents in states with these laws dropped significantly, while searches for offshore sites surged.
  • Researchers saw a predictable surge in VPN usage following the enactment of age verification laws; Florida, for example, saw a 1,150% increase in VPN demand after its law took effect.

As foretold, when platforms block access or require invasive verification, it drives people to sites that operate outside the law—platforms that often pose greater safety risks. Instead of protecting young people, these laws push them toward less secure, less regulated spaces.

Legislation Watch: Expanding Beyond “Adult Content” as Lawmakers Take Aim at Social Media Platforms

Earlier this year, we raised the alarm that state legislatures wouldn’t stop at adult content. Sure enough, throughout 2025, lawmakers set their sights on young people’s social media usage, passing laws that require platforms to verify users’ ages and obtain parental consent for accounts belonging to anyone under 18. Four states had already passed similar laws in previous years. These laws were swiftly blocked in courts because they violate the First Amendment and subject every user to surveillance as a condition of participation in online speech.

Warning Labels and Time Limits

And it doesn’t stop with age verification. California and Minnesota passed new laws this year requiring social media platforms to display warning labels to users. Virginia’s SB 854, which also passed this year, took a different approach. It requires social media platforms to use “commercially reasonable efforts” to determine a user's age and, if that user is under 16, limits them to one hour per day per application by default unless a parent changes the time allowance.

EFF opposes these laws because they raise serious First Amendment concerns. And courts have agreed: in November 2025, the U.S. District Court for the District of Colorado temporarily halted Colorado's warning label law, which would have required platforms to display warnings to users under 18 about the negative impacts of social media. We expect courts to similarly halt California and Minnesota’s laws.

App Store and Device-Level Age Verification

2025 also saw the rise of device-level and app-store age verification laws, which shift the obligation to verify users onto app stores and operating system providers. These laws seriously impede users, adults and young people alike, from accessing information, because they gate a much broader swath of content: not only adult or sexual content, but every bit of content provided by every application. In October, California Governor Gavin Newsom signed the Digital Age Assurance Act (AB 1043), which takes a slightly different approach to age verification in that it requires “operating system providers”—not just app stores—to offer an interface at device/account setup that prompts the account holder to indicate the user’s birth date or age. Developers must request an age signal when applications are downloaded and launched. These laws expand beyond earlier legislation in other states that required individual websites to implement age checks, and instead place the responsibility on app stores, operating systems, or device makers at a more fundamental level.
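
To make the mechanics concrete, here is a minimal sketch, in Python, of the kind of age-signal flow AB 1043 describes: the operating system collects a birth date once at account setup, and apps receive only a derived age bracket when they ask. Every name in this sketch (AgeBracket, DeviceAccount, request_age_signal) is hypothetical and invented for illustration; it is not a real OS API, and the exact bracket definitions come from the bill text, not this code.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class AgeBracket(Enum):
    # Illustrative brackets only; consult the bill text for the real categories.
    UNDER_13 = "under 13"
    FROM_13_TO_15 = "13-15"
    FROM_16_TO_17 = "16-17"
    ADULT = "18+"

@dataclass
class DeviceAccount:
    """Stands in for the OS-level account created at device setup."""
    birth_date: date  # collected once by the OS at the setup prompt

    def age_bracket(self, today: date) -> AgeBracket:
        # Approximate age in whole years; close enough for a sketch.
        years = (today - self.birth_date).days // 365
        if years < 13:
            return AgeBracket.UNDER_13
        if years < 16:
            return AgeBracket.FROM_13_TO_15
        if years < 18:
            return AgeBracket.FROM_16_TO_17
        return AgeBracket.ADULT

def request_age_signal(account: DeviceAccount) -> AgeBracket:
    """Hypothetical developer-facing call made at app download or launch:
    the app never sees the birth date, only the bracket derived from it."""
    return account.age_bracket(date.today())

if __name__ == "__main__":
    account = DeviceAccount(birth_date=date(2011, 5, 4))
    print(request_age_signal(account).value)  # e.g. "13-15"
```

Even in this stripped-down form, the design tradeoff is visible: the sensitive datum (a birth date) is centralized with the OS vendor, and every app download or launch becomes an occasion to query it.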

Again, these laws have drawn legal challenges. In October, the Computer & Communications Industry Association (CCIA) filed a lawsuit arguing that Texas’s SB 2420 is unconstitutional. A separate suit, Students Engaged in Advancing Texas (SEAT) v. Paxton, challenges the same law on First Amendment grounds, arguing it violates the free speech rights of young people and adults alike. Both lawsuits argue that the burdens placed on platforms, developers, and users outweigh any proposed benefits.

From Legislation to Regulation: Rulemaking Processes Begin

States with existing laws have also begun the process of rulemaking—translating broad statutory language into specific regulatory requirements. These rulemaking processes matter because the specific technical requirements, data-handling procedures, and enforcement mechanisms will determine just how invasive these laws become in practice.

California’s Attorney General held a hearing in November to solicit public comment on methods and standards for age assurance under SB 976, the “Protecting Our Kids from Social Media Addiction Act,” which will require age verification by the end of 2026. EFF has supported the legal challenge to SB 976 since its passage, and federal courts have blocked portions of the law from taking effect. Now, in the rulemaking process, EFF submitted comments raising concerns about the discriminatory impacts of any proposed regulations.

New York's Attorney General also released proposed rules for the state’s Stop Addictive Feeds Exploitation (SAFE) for Kids Act, describing which companies must comply and the standards for determining users’ age and obtaining parental consent. EFF submitted comments opposing the age verification requirements in September of 2024, and again in December 2025.

Our comments in both states warn that these rules risk entrenching invasive age verification systems and normalizing surveillance as a prerequisite for online participation.

The Boundaries Keep Shifting

As we’ve said, age verification will not stop at adult content and social media. Lawmakers are already proposing bills to require ID checks for everything from skincare products in California to diet supplements in Washington. Lawmakers in Wisconsin and Michigan have set their sights on virtual private networks, or VPNs—proposing various legislation that would ban the use of VPNs to circumvent age verification laws. AI chatbots are next on the list, with several states considering legislation that would require age verification for all users. Behind the reasonable-sounding talking points lies a sprawling surveillance regime that would reshape how people of all ages use the internet. EFF remains ready to push back against these efforts in legislatures, regulatory hearings, and courtrooms.

2025 showed us that age verification mandates are spreading rapidly, despite clear evidence that they don't work and actively harm the people they claim to protect. 2026 will be the year we push back harder—like the future of a free, open, private, and safe internet depends on it.

This is why we must fight back to protect the internet that we know and love. If you want to learn more about these bills, visit EFF.org/Age.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

Rindala Alajaji

Surveillance Self-Defense: 2025 Year in Review

8 hours 41 minutes ago

Our Surveillance Self-Defense (SSD) guides, which provide practical advice and explainers for how to deal with government and corporate surveillance, had a big year. We published several large updates to existing guides and released three all-new guides. And with frequent massive protests across the U.S., our guide to attending a protest remained one of the most popular guides of the year, so we made sure our translations were up to date.

(Re)learn All You Need to Know About Encryption

We started this year by taking a deep look at our various encryption guides, which start with the basics before moving up to deeper concepts. We slimmed each guide down and tried to focus on making them as clear and concise as deep explainers on complicated topics can be. We reviewed and edited four guides in total.

And if you’re not sure where to start, we’ve got you covered with the new Interested in Encryption? playlist.

New Guides

We launched three new guides this year, including iPhone and Android privacy guides, which walk you through all the various privacy options of your phone. Both of these guides received a handful of updates throughout their first year as new features were released or, in the case of the iPhone, a new design language was introduced. These also got a fun little boost from a segment on "Last Week Tonight with John Oliver" telling people how to disable their phone’s advertising identifier.

We also launched our How to: Manage Your Digital Footprint guide. This guide is designed to help you claw back some of the data you may find about yourself online, walking through different privacy options across different platforms, digging up old accounts, removing yourself from people search sites, and much more.

Always Be Updating

As is the case with most software, there is always incremental work to do. This year, that meant small updates to our WhatsApp and Signal guides to acknowledge new features (both are already on deck for similar updates early next year as well). 

We overhauled our device encryption guides for Windows, Mac, and Linux, rolling what was once three guides into one, and including more detailed guidance on how to handle recovery keys. Some slight changes to how this works on both Windows and Mac means this one will get another look early next year as well.

Speaking of rolling multiple guides into one, we did the same with our guidance for the Tor browser: what once lived across three guides now lives as one that covers all the major desktop platforms (the mobile guide remains separate).

The password manager guide saw some small changes to note some new features with Apple and Chrome’s managers, as well as some new independent security audits. Likewise, the VPN guide got a light touch to address the TunnelVision security issue.

Finally, the secure deletion guide got a much-needed update after years of dormancy. With the proliferation of solid state drives (SSDs, not to be confused with SSD), not much has changed in the secure deletion space, but we did move our guidance for those SSDs to the top of the guide to make it easier to find, while still acknowledging that many people around the world only have access to a computer with spinning disk drives.

Translations

As always, we worked on translations for these updates. We’re very close to a point where every current SSD guide is updated and translated into Arabic, French, Mandarin, Portuguese, Russian, Spanish, and Turkish.

And with the help of Localization Lab, we also now have translations for a handful of the most important guides in Changana, Mozambican Portuguese, Ndau, Luganda, and Bengali.

Blogs Blogs Blogs

Sometimes we take our SSD-like advice and blog it so we can respond to news events or talk about more niche topics. This year, we blogged about new features, like WhatsApp’s “Advanced Chat Privacy” and Google’s "Advanced Protection.” We also broke down the differences between how different secure chat clients handle backups and pushed for expanding encryption on Android and iPhone.

We fight for more privacy and security every day of every year, but until we win that fight, stronger control of our data and a better understanding of how technology works are our best defense.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

Thorin Klosowski

[Focus] Rare Earth Recovery and Missile Launches: National Projects Get Underway on Minamitorishima as Part of Japan's China Policy, by Masahiro Hashizume

23 hours 30 minutes ago
Minamitorishima, the southernmost island of the Ogasawara chain, 1,900 km from Tokyo, will draw attention in 2026, because two national projects will unfold on the small island. One is a facility, installed as part of the Cabinet Office's Strategic Innovation Promotion Program (SIP), to process mud containing rare earth elements. Under SIP, a survey vessel will conduct test mining starting in January, collecting mud from the seabed off Minamitorishima. Carrier ships loaded with the rare earth mud will bring it to the island, where the facility will separate and refine the rare earths from the mud. The Cabinet Office plans to begin such demonstration tests in earnest in February 2027…
JCJ

Congress's Crusade to Age Gate the Internet: 2025 in Review

1 day 21 hours ago

In the name of 'protecting kids online,' Congress pushed forward legislation this year that could have severely undermined our privacy and stifled free speech. These bills would have mandated invasive age-verification checks for everyone online—adults and kids alike—handing unprecedented control to tech companies and government authorities.

Lawmakers from both sides of the aisle introduced bill after bill, each one somehow more problematic than the last, and each one a gateway for massive surveillance, internet censorship, and government overreach. In all, Congress considered nearly twenty federal proposals.

For us, this meant a year of playing legislative whack-a-mole, fighting off one bad bill after another. But more importantly, it meant building sustained opposition, strengthening coalitions, and empowering our supporters—that's you!—with the tools you need to understand what's at stake and take action.

Luckily, thanks to this strong opposition, these federal efforts all stalled… for now.

So, before we hang our hats and prepare for the new year, let’s review some of our major wins against federal age-verification legislation in 2025.

The Kids Online Safety Act (KOSA)

Of the dozens of federal proposals relating to kids online, the Kids Online Safety Act remains the biggest threat. We, along with a coalition of civil liberties groups, LGBTQ+ advocates, youth organizations, human rights advocates, and privacy experts, have been sounding the alarm on KOSA for years now.

First introduced in 2022, KOSA would allow the Federal Trade Commission to sue apps and websites that don’t take measures to restrict young people’s access to certain content. There have been numerous versions introduced, though all of them share a common core: KOSA is an unconstitutional censorship bill that threatens the speech and privacy rights of all internet users. It would impose a requirement that platforms “exercise reasonable care” to prevent and mitigate a sweeping list of harms to minors, including depression, anxiety, eating disorders, substance use, bullying, and “compulsive usage.” Those prohibitions are so broad that they will sweep up online speech about these topics, including efforts to provide resources to adults and minors experiencing them. The bill claims to prohibit censorship based on “the viewpoint of users,” but that’s simply a smokescreen. Its core function is to let the federal government sue platforms, big or small, that don’t block or restrict content that someone later claims contributed to one of these harms.

In addition to stifling online speech, KOSA would strongly incentivize age-verification systems—forcing all users, adults and minors, to prove who they are before they can speak or read online. Because KOSA requires online services to separate and censor aspects of their services accessed by children, services are highly likely to demand to know every user’s age to avoid showing minors any of the content KOSA deems harmful. There are a variety of age determination options, but all have serious privacy, accuracy, or security problems. Even worse, age-verification schemes lead everyone to provide even more personal data to the very online services that have invaded our privacy before. And all age verification systems, at their core, burden the rights of adults to read, get information, and speak and browse online anonymously.

Despite what lawmakers claim, KOSA won’t bother big tech—in fact, they endorse it! The bill is written so that big tech companies, like Apple and X, will be able to handle the regulatory burden that KOSA imposes, while smaller platforms will struggle to comply. Under KOSA, a small platform hosting mental health discussion boards will be just as vulnerable as Meta or TikTok—but much less able to defend itself.

The good news is that KOSA’s momentum waned this Congress. There was a lot of talk about the bill from lawmakers, but little action. The Senate version of the bill, which passed overwhelmingly last summer, did not even make it out of committee this Congress.

In the House, lawmakers could not get on the same page about the bill—so much so that one of the original sponsors of KOSA actually voted against the bill in committee in December.

The bad news is that lawmakers are determined to keep raising this issue, as soon as the beginning of next year. So let’s keep the momentum going by showing them that users do not want age verification mandates—we want privacy.

TAKE ACTION

Don't let Congress censor the internet

Threats Beyond KOSA

KOSA wasn’t the only federal bill in 2025 that used “kids’ safety” as a cover for sweeping surveillance and censorship mandates. Concern about possible harms of AI chatbots dominated policy discussion this year in Congress.

One of the most alarming proposals on the issue was the GUARD Act, which would require AI chatbots to verify all users’ ages, prohibit minors from using AI tools, and implement steep criminal penalties for chatbots that promote or solicit certain harms. As we wrote in November, though the GUARD Act may look like a child-safety bill, in practice it’s an age-gating mandate that could be imposed on nearly every public-facing AI chatbot—from customer-service bots to search-engine assistants. The GUARD Act could force countless AI companies to collect sensitive identity data, chill online speech, and block teens from using some of the digital tools that they rely on every day.

Like KOSA, the GUARD Act would make the internet less free, less private, and less safe for everyone. It would further consolidate power and resources in the hands of the bigger AI companies, crush smaller developers, and chill innovation under the threat of massive fines. And it would cut off vulnerable groups’ ability to use helpful everyday AI tools, further fracturing the internet we know and love.

With your help, we urged lawmakers to reject the GUARD Act and focus instead on policies that provide more transparency, options, and comprehensive privacy for all users.

Beating Age Verification for Good

Together, these bills reveal a troubling pattern in Congress this year. Rather than actually protecting young people’s privacy and safety online, Congress continues to push a legislative framework that’s based on some deeply flawed assumptions:

  1. That the internet must be age-gated, with young people either heavily monitored or kicked off entirely, in order to be safe;
  2. That the value of our expressive content to each individual should be determined by the state, not individuals or even families; and
  3. That these censorship and surveillance regimes are worth the loss of all users’ privacy, anonymity, and free expression online.

We’ve written over and over about the many communities who are immeasurably harmed by online age verification mandates. It is also worth remembering who these bills serve—big tech companies, private age verification vendors, AI companies, and legislators vying for the credit of “solving” online safety while undermining users at every turn.

We fought these bills all through 2025, and we’ll continue to do so until we beat age verification for good. So rest up, read up (starting with our all-new resource hub, EFF.org/Age!), and get ready to join us in this fight in 2026. Thank you for your support this year.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

Molly Buckley

States Tried to Censor Kids Online. Courts, and EFF, Mostly Stopped Them: 2025 in Review

1 day 22 hours ago

Lawmakers in at least a dozen states believe that they can pass laws blocking young people from social media or requiring them to get their parents’ permission before logging on. Fortunately, nearly every trial court to review these laws has ruled that they are unconstitutional.

It’s not just courts telling these lawmakers they are wrong. EFF has spent the past year filing friend-of-the-court briefs in courts across the country explaining how these laws violate young people’s First Amendment rights to speak and get information online. In the process, these laws also burden adults’ rights, and jeopardize everyone’s privacy and data security.

Minors have long had the same First Amendment rights as adults: to talk about politics, create art, comment on the news, discuss or practice religion, and more. The internet simply amplified their ability to speak, organize, and find community.

Although these state laws vary in scope, most have two core features. First, they require social media services to estimate or verify the ages of all users. Second, they either ban minor access to social media, or require parental permission. 

In 2025, EFF filed briefs challenging age-gating laws in California (twice), Florida, Georgia, Mississippi, Ohio, Utah, Texas, and Tennessee. Across these cases we argued the same point: these laws burden the First Amendment rights of both young people and adults. In many of these briefs, the ACLU, Center for Democracy & Technology, Freedom to Read Foundation, LGBT Technology Institute, TechFreedom, and Woodhull Freedom Foundation joined.

There is no “kid exception” to the First Amendment. The Supreme Court has repeatedly struck down laws that restrict minors’ speech or impose parental-permission requirements. Banning young people entirely from social media is an extreme measure that doesn’t match the actual risks. As EFF has urged, lawmakers should pursue strong privacy laws, not censorship, to address online harms.

These laws also burden everyone’s speech by requiring users to prove their age. ID-based systems of access can lock people out if they don’t have the right form of ID, and biometric systems are often discriminatory or inaccurate. Requiring users to identify themselves before speaking also chills anonymous speech—protected by the First Amendment, and essential for those who risk retaliation.

Finally, requiring users to provide sensitive personal information increases their risk of future privacy and security invasions. Most of these laws perversely require social media companies to collect even more personal information from everyone, especially children, who can be more vulnerable to identity theft.

EFF will continue to fight for the rights of minors and adults to access the internet, speak freely, and organize online.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

Aaron Mackey

Site Blocking Laws Will Always Be a Bad Idea: 2025 in Review

2 days 17 hours ago

This year, we fought back against the return of a terrible idea that hasn’t improved with age: site blocking laws. 

More than a decade ago, Congress tried to pass SOPA and PIPA—two sweeping bills that would have allowed the government and copyright holders to quickly shut down entire websites based on allegations of piracy. The backlash was massive. Internet users, free speech advocates, and tech companies flooded lawmakers with protests, culminating in an “Internet Blackout” on January 18, 2012. Turns out, Americans don’t like government-run internet blacklists. The bills were ultimately shelved.  

But we’ve never believed they were gone for good. The major media and entertainment companies that backed site blocking in the US in 2012 turned to pushing for site-blocking laws in other countries. Rightsholders continued to ask US courts for site-blocking orders, often winning them without a new law. And sure enough, the Motion Picture Association (MPA) and its allies have asked Congress to try again. 

There were no fewer than three congressional drafts of site-blocking legislation. Representative Zoe Lofgren kicked off the year with the Foreign Anti-Digital Piracy Act (FADPA). Fellow House member Darrell Issa also claimed to be working on a bill that would make it offensively easy for a studio to block your access to a website based solely on the belief that there is infringement happening. Not to be left out, the Senate Judiciary Committee produced the terribly named Block BEARD Act.

None of these three attempts to fundamentally alter the way you experience the internet moved far beyond their press releases. But their number tells us that there is, once again, an appetite among major media conglomerates and politicians to resurrect SOPA/PIPA from the dead.

None of these proposals fixes the flaws of SOPA/PIPA, and none ever could. Site blocking is a flawed idea and a disaster for free expression that no amount of rewriting will fix. There is no way to create a fast lane for removing your access to a website that is not a major threat to the open web. Just as we opposed SOPA/PIPA over ten years ago, we oppose these efforts.  

Site blocking bills seek to build a new infrastructure of censorship into the heart of the internet. They would enable court orders directed to the organizations that make the internet work, like internet service providers, domain name resolvers, and reverse proxy services, compelling them to help block US internet users from visiting websites accused of copyright infringement. The technical means haven’t changed much since 2012: they involve blocking the Internet Protocol (IP) addresses or domain names of websites. These methods are blunt—sledgehammers rather than scalpels. Today, many websites are hosted on cloud infrastructure or use shared IP addresses. Blocking one target can mean blocking thousands of unrelated sites. That kind of digital collateral damage has already happened in Austria, Italy, South Korea, France, and in the US, to name just a few.
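
The collateral-damage problem is easy to demonstrate for yourself. This short Python sketch resolves a list of domains and groups them by IP address; wherever unrelated domains share an address (common with shared hosting, CDNs, and reverse proxies), an IP-level block aimed at one of them would take down the rest. The domain list is a placeholder for illustration.

```python
import socket
from collections import defaultdict

# Placeholder list; swap in any domains you want to check.
domains = ["example.com", "example.net", "example.org"]

by_ip = defaultdict(list)
for domain in domains:
    try:
        # Take the first A record for simplicity.
        by_ip[socket.gethostbyname(domain)].append(domain)
    except socket.gaierror:
        pass  # skip domains that don't resolve

for ip, hosted in by_ip.items():
    if len(hosted) > 1:
        # Every domain here would go dark if this one IP were blocked.
        print(f"{ip} is shared by: {', '.join(hosted)}")
```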

Given this downside, one would think the benefits of copyright enforcement from these bills ought to be significant. But site blocking is trivially easy to evade. Determined site owners can create the same content on a new domain within hours. Users who want to see blocked content can fire up a VPN or change a single DNS setting to get back online.  
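
That evasion really is as simple as it sounds. If a block lives only in an ISP's default DNS resolver, pointing lookups at any other public resolver routes around it, which is what "change a single DNS setting" means in practice. Below is a minimal sketch using the third-party dnspython package (pip install dnspython); the address shown is Quad9's public resolver 9.9.9.9, but any public resolver illustrates the point.

```python
import dns.resolver  # third-party package: pip install dnspython

# Ignore the system (ISP-supplied) resolver configuration entirely.
resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["9.9.9.9"]  # any public resolver will do

# A domain blocked only at the ISP's resolver still resolves normally here.
for record in resolver.resolve("example.com", "A"):
    print(record.address)
```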

The limits that lawmakers have proposed to put on these laws are an illusion. While ostensibly aimed at “foreign” websites, they sweep in any website that doesn’t conspicuously display a US origin, putting anonymity at risk. And despite the rhetoric of MPA and others that new laws would be used only by responsible companies against the largest criminal syndicates, laws don’t work that way. Massive new censorship powers invite abuse by opportunists large and small, and the costs to the economy, security, and free expression are widely borne. 

It’s time for Big Media and its friends in Congress to drop this flawed idea. But as long as they keep bringing it up, we’ll keep on rallying internet users of all stripes to fight it. 

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

Mitch Stoltz

EFF's Investigations Expose Flock Safety's Surveillance Abuses: 2025 in Review

2 days 20 hours ago

Throughout 2025, EFF conducted groundbreaking investigations into Flock Safety's automated license plate reader (ALPR) network, revealing a system designed to enable mass surveillance and susceptible to grave abuses. Our research sparked state and federal investigations, drove landmark litigation, and exposed dangerous expansion into always-listening voice detection technology. We documented how Flock's surveillance infrastructure allowed law enforcement to track protesters exercising their First Amendment rights, target Romani people with discriminatory searches, and surveil women seeking reproductive healthcare.

Flock Enables Surveillance of Protesters

When we obtained datasets representing more than 12 million searches logged by more than 3,900 agencies between December 2024 and October 2025, the patterns were unmistakable. Agencies logged hundreds of searches related to political demonstrations—the 50501 protests in February, Hands Off protests in April, and No Kings protests in June and October. Nineteen agencies conducted dozens of searches specifically tied to No Kings protests alone. Sometimes searches explicitly referenced protest activity; other times, agencies used vague terminology to obscure surveillance of constitutionally protected speech.
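
For a sense of how such patterns can be spotted: Flock audit logs include a free-text "reason" field that officers fill in for each network search, and filtering millions of rows for protest-related terms takes only a few lines of code. The sketch below is a simplified illustration of that kind of analysis, not EFF's actual analysis code; the file name and column names are hypothetical.

```python
import csv

# Terms drawn from the protests named above. A vaguely worded "reason"
# evades this filter, so keyword counts are a floor, not a ceiling.
PROTEST_TERMS = ("protest", "no kings", "hands off", "50501", "demonstration")

def protest_related_searches(path: str) -> list[dict]:
    hits = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            reason = row.get("reason", "").lower()
            if any(term in reason for term in PROTEST_TERMS):
                hits.append(row)
    return hits

# Hypothetical export file and columns, for illustration only.
for row in protest_related_searches("flock_audit_export.csv"):
    print(row["agency"], row["timestamp"], row["reason"])
```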

The surveillance extended beyond mass demonstrations. Three agencies used Flock's system to target activists from Direct Action Everywhere, an animal-rights organization using civil disobedience to expose factory farm conditions. Delaware State Police queried the Flock network nine times in March 2025 related to Direct Action Everywhere actions—showing how ALPR surveillance targets groups engaged in activism challenging powerful industries.

Biased Policing and Discriminatory Searches

Our November analysis revealed deeply troubling patterns: more than 80 law enforcement agencies used language perpetuating harmful stereotypes against Romani people when searching the nationwide Flock Safety ALPR network. Between June 2024 and October 2025, police performed hundreds of searches using terms such as "roma" and racial slurs—often without mentioning any suspected crime.

Audit logs revealed searches including "roma traveler," "possible g*psy," and "g*psy ruse." Grand Prairie Police Department in Texas searched for the slur six times while using Flock's "Convoy" feature, which identifies vehicles traveling together—essentially targeting an entire traveling community without specifying any crime. According to a 2020 Harvard University survey, four out of 10 Romani Americans reported being subjected to racial profiling by police. Flock's system makes such discrimination faster and easier to execute at scale.
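
Part of what makes a feature like "Convoy" so troubling is how little machinery it takes once a nationwide pool of plate reads exists. The toy sketch below shows the general co-travel idea (count how often two plates pass the same camera within a short window, and flag frequent pairs); it is our illustration of the concept, not Flock's actual algorithm, and the data is invented.

```python
from collections import Counter
from itertools import combinations

# Toy data: (timestamp in seconds, camera id, plate). Real deployments
# ingest millions of reads per day from thousands of cameras.
reads = [
    (100, "cam1", "AAA111"), (104, "cam1", "BBB222"), (105, "cam1", "CCC333"),
    (900, "cam7", "AAA111"), (903, "cam7", "BBB222"),
    (1700, "cam3", "AAA111"), (1701, "cam3", "BBB222"),
]

WINDOW = 30  # seconds: reads this close at one camera count as co-travel

by_camera: dict[str, list[tuple[int, str]]] = {}
for ts, cam, plate in reads:
    by_camera.setdefault(cam, []).append((ts, plate))

pair_counts: Counter = Counter()
for events in by_camera.values():
    for (t1, p1), (t2, p2) in combinations(sorted(events), 2):
        if p1 != p2 and abs(t1 - t2) <= WINDOW:
            pair_counts[tuple(sorted((p1, p2)))] += 1

# Plates seen together at several cameras get flagged as a "convoy",
# no suspected crime required, just co-travel.
for pair, count in pair_counts.items():
    if count >= 3:
        print(f"convoy candidate: {pair}, co-sighted {count} times")
```

The point is the one above: grouping whole traveling communities becomes trivially cheap once the reads are centralized in a single searchable pool.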

Weaponizing Surveillance Against Reproductive Rights

In October, we obtained documents showing that Texas deputies queried Flock Safety's surveillance data in what police characterized as a missing person investigation, but was actually an abortion case. Deputies initiated a "death investigation" of a "non-viable fetus," logged evidence of a woman's self-managed abortion, and consulted prosecutors about possible charges.

A Johnson County official ran two searches with the note "had an abortion, search for female." The second search probed 6,809 networks, accessing 83,345 cameras across nearly the entire country. This case revealed Flock's fundamental danger: a single query can sweep in tens of thousands of cameras nationwide, with minimal oversight and maximum potential for abuse—particularly when weaponized against people seeking reproductive healthcare.

Feature Updates Miss the Point

In June, EFF explained why Flock Safety's announced feature updates cannot make ALPRs safe. The company promised privacy-enhancing features like geofencing and retention limits in response to public pressure. But these tweaks don't address the core problem: Flock's business model depends on building a nationwide, interconnected surveillance network that creates risks no software update can eliminate. Our 2025 investigations proved that abuses stem from the architecture itself, not just how individual agencies use the technology.

Accountability and Community Action

EFF's work sparked significant accountability measures. U.S. Rep. Raja Krishnamoorthi and Rep. Robert Garcia launched a formal investigation into Flock's role in "enabling invasive surveillance practices that threaten the privacy, safety, and civil liberties of women, immigrants, and other vulnerable Americans."

Illinois Secretary of State Alexi Giannoulias launched an audit after EFF research showed Flock allowed U.S. Customs and Border Protection to access Illinois data in violation of state privacy laws. In November, EFF partnered with the ACLU of Northern California to file a lawsuit against San Jose and its police department, challenging warrantless searches of millions of ALPR records. Between June 5, 2024 and June 17, 2025, SJPD and other California law enforcement agencies searched San Jose's database 3,965,519 times—a staggering figure illustrating the vast scope of warrantless surveillance enabled by Flock's infrastructure.

Our investigations also fueled municipal resistance to Flock Safety. Communities from Austin to Evanston to Eugene successfully canceled or refused to renew their Flock contracts after organizing campaigns centered on our research documenting discriminatory policing, immigration enforcement, threats to reproductive rights, and chilling effects on protest. These victories demonstrate that communities—armed with evidence of Flock's harms—can challenge and reject surveillance infrastructure that threatens civil liberties.

Dangerous New Capabilities: Always-Listening Microphones

In October 2025, Flock announced plans to expand its gunshot detection microphones to listen for "human distress" including screaming. This dangerous expansion transforms audio sensors into powerful surveillance tools monitoring human voices on city streets. High-powered microphones above densely populated areas raise serious questions about wiretapping laws, false alerts, and potential for dangerous police responses to non-emergencies. After EFF exposed this feature, Flock quietly amended its marketing materials to remove explicit references to "screaming"—replacing them with vaguer language about "distress" detection—while continuing to develop and deploy the technology.

Looking Forward

Flock Safety's surveillance infrastructure is not a neutral public safety tool. It's a system that enables and amplifies racist policing, threatens reproductive rights, and chills constitutionally protected speech. Our 2025 investigations proved it beyond doubt. As we head into 2026, EFF will continue exposing these abuses, supporting communities fighting back, and litigating for the constitutional protections that surveillance technology has stripped away.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

Sarah Hamid

Fighting Renewed Attempts to Make ISPs Copyright Cops: 2025 in Review

2 days 21 hours ago

You might not know it, given the many headlines focused on new questions about copyright and Generative AI, but the year’s biggest copyright case concerned an old-for-the-internet question: do ISPs have to be copyright cops? After years of litigation, that question is now squarely before the Supreme Court. And if the Supreme Court doesn’t reverse a lower court’s ruling, ISPs could be forced to terminate people’s internet access based on nothing more than mere accusations of copyright infringement. This would threaten innocent users who rely on broadband for essential aspects of daily life.

The Stakes: Turning ISPs into Copyright Police

This issue turns on what courts call “secondary liability,” which is the legal idea that someone can be held responsible not for what they did directly, but for what someone else did using their product or service. The case began when music companies sued Cox Communications, arguing that the ISP should be held liable for copyright infringement committed by some of its subscribers. The Court of Appeals for the Fourth Circuit agreed, adopting a “material contribution” standard for contributory copyright liability (a rule for when service providers can be held liable for the actions of users). Under that standard, providing a service that could be used for infringement is enough to create liability when a customer infringes.

The Fourth Circuit’s rule would have devastating consequences for the public. Given copyright law’s draconian penalties, ISPs would be under enormous pressure to terminate accounts whenever they get an infringement notice, whether or not the actual accountholder has infringed anything, cutting off entire households, schools, libraries, or businesses that share an internet connection. For example:

  • Public libraries, which provide internet access to millions of Americans who lack it at home, could lose essential service.
  • Universities, hospitals, and local governments could see internet access for whole communities disrupted.
  • Households—especially in low-income and communities of color, which disproportionately share broadband connections with other people—would face collective punishment for the alleged actions of a single user.

And with more than a third of Americans having only one or no broadband provider, many users would have no way to reconnect.

EFF—along with the American Library Association, the Association of Research Libraries, and Re:Create—filed an amicus brief urging the Court to reverse the Fourth Circuit’s decision, taking guidance from patent law. In the Patent Act, where Congress has explicitly defined secondary liability, there’s a different test: contributory infringement exists only where a product is incapable of substantial non-infringing use. Internet access, of course, is overwhelmingly used for lawful purposes, making it the very definition of a “staple article of commerce” that can’t give rise to liability under the patent framework.

The Supreme Court held a hearing in the case on December 1, and a majority of the justices seemed troubled by the implications of the Fourth Circuit’s ruling. One exchange was particularly telling: asked what should happen when the notices of infringement target a university account upon which thousands of people rely, Sony’s counsel suggested the university could resolve the issue by essentially slowing internet speeds so infringement might be less appealing. It’s hard to imagine the university community would agree that research, teaching, artmaking, library services, and the myriad other activities that rely on internet access should be throttled because of the actions of a few students. Hopefully the Supreme Court won’t either.

We expect a ruling in the case in the next few months. Fingers crossed that the Court rejects the Fourth Circuit’s draconian rule.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

Corynne McSherry

[Anti-War] Blocking the Road to War! Cries and Appeals Nationwide: We Want to Be Neither Victims Nor Perpetrators

2 days 22 hours ago
After the Resolute Dragon exercise ended on September 25, the largest-ever Self-Defense Forces Joint Exercise (a live field exercise) was held from October 20 to 31. It was a joint operational exercise of Japan's ground, maritime, and air Self-Defense Forces together with two partner nations, the United States and Australia. It involved roughly 52,300 SDF personnel, 4,180 vehicles, about 60 vessels, and about 310 aircraft, plus about 5,900 U.S. and about 230 Australian personnel. Beyond land, sea, and air, the drills also covered space, cyber, and electromagnetic warfare, and some included night exercises, weekends included. Use of civilian airports and fishing ports was also expanded, with eight airports and 31 harbors used from Hokkaido to Okinawa. Such training…
JCJ

Operations Security (OPSEC) Trainings: 2025 in Review

3 days 22 hours ago

It's no secret that digital surveillance and other tech-enabled oppressions are acute dangers for liberation movement workers. The rising tides of tech-fueled authoritarianism and hyper-surveillance are universal themes across the various threat models we consider. EFF's Surveillance Self-Defense project is a vital antidote to these threats, but it's not all we do to help others address these concerns. Our team often fields questions, requests for security trainings and presentations on our research, and asks for general OPSEC advising (operations security: the process of applying digital privacy and information security strategies to a current workflow or process). This year stood out for the sheer number and urgency of requests we fielded.

Combining efforts across our Public Interest Technology and Activism teams, we consulted with an estimated 66 groups and organizations, with at least 2000 participants attending those sessions. These engagements typically look like OPSEC advising and training, usually merging aspects of threat modeling, cybersecurity 101, secure communications practices, doxxing self-defense, and more. The groups we work with are often focused on issue-spaces that are particularly embattled at the current moment, such as abortion access, advocacy for transgender rights, and climate justice. 

Our ability to offer realistic and community-focused OPSEC advice for these liberation movement workers is something we take great pride in. These groups are often under-resourced and unable to afford typical infosec consulting. Even if they could, traditional information security firms are designed to protect corporate infrastructure, not grassroots activism. Offering this assistance also allows us to stress-test the advice given in the aforementioned Surveillance Self-Defense project with real-world experience and update it when necessary. What we learn from these sessions also informs our blog posts, such as this piece on strategies for overcoming tech-enabled violence for transgender people, and this one surveying the landscape of digital threats in the abortion access movement post-Roe.

There is still much to be done. Maintaining effective privacy and security within one's work is an ongoing process. We are grateful to be included in the OPSEC process planning for so many other human-rights defenders and activists, and we look forward to continuing this work in the coming years. 

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

Daly Barnett

EFF in the Press: 2025 in Review

3 days 22 hours ago

EFF’s attorneys, activists, and technologists don’t just do the hard, endless work of defending our digital civil liberties — they also spend a lot of time and effort explaining that work to the public via media interviews. 

EFF had thousands of media mentions in 2025, from the smallest hyperlocal outlets to international news behemoths. Our work on street-level surveillance — the technology that police use to spy on our communities — generated a great deal of press attention, particularly regarding automated license plate readers (ALPRs). But we also got a lot of ink and airtime for our three lawsuits against the federal government: one challenging the U.S. Office of Personnel Management's illegal data sharing, a second challenging the State Department's unconstitutional "catch and revoke" program, and the third demanding that the departments of State and Justice reveal what pressure they put on app stores to remove ICE-tracking apps.

Other hot media topics included how travelers can protect themselves against searches of their devices, how protestors can protect themselves from surveillance, and the misguided age-verification laws that are proliferating across the nation and around the world, which are an attack on privacy and free expression.

On national television, Matthew Guariglia spoke with NBC Nightly News to discuss how more and more police agencies are using private doorbell cameras to surveil neighborhoods. Tori Noble spoke with ABC’s Good Morning America about the dangers of digital price tags, as well as with ABC News Live Prime about privacy concerns over OpenAI’s new web browser.

[Embedded video: https://www.youtube.com/watch?v=UrFD-JVHmp4]
[Embedded video: https://www.youtube.com/watch?v=1hEgPLRmgxo]

In a sampling of mainstream national media, EFF was cited 33 times by the Washington Post, 16 times by CNN, 13 times by USA Today, 12 times by the Associated Press, 11 times by NBC News, 11 times by the New York Times, 10 times by Reuters, and eight times by National Public Radio. Among tech and legal media, EFF was cited 74 times by Privacy Daily, 35 times by The Verge, 32 times by 404 Media, 32 times by The Register, 26 times by Ars Technica, 25 times by WIRED, 21 times by Law360, 21 times by TechCrunch, 20 times by Gizmodo, and 14 times by Bloomberg Law.

Abroad, EFF was cited in coverage by media outlets in nations including Australia, Bangladesh, Belgium, Canada, Colombia, El Salvador, France, Germany, India, Ireland, New Zealand, Palestine, the Philippines, Slovakia, South Africa, Spain, Trinidad and Tobago, the United Arab Emirates, and the United Kingdom. 

EFF staffers spoke to the masses in their own words via op-eds.

And we ruled the airwaves on podcasts.

We're grateful to all the intrepid journalists who keep doing the hard work of reporting accurately on tech and privacy policy, and we encourage them to keep reaching out to us at press@eff.org.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

Josh Richman