Wave of Phony News Quotes Affects Everyone—Including EFF

1 week 3 days ago

Whether due to generative AI hallucinations or human sloppiness, the internet is increasingly rife with bogus news content—and you can count EFF among the victims. 

WinBuzzer published a story June 26 with the headline, “Microsoft Is Getting Sued over Using Nearly 200,000 Pirated Books for AI Training,” containing this passage:

Screenshot of the June 26 WinBuzzer article containing the quotation attributed to EFF’s Corynne McSherry

That quotation from EFF’s Corynne McSherry was cited again in two subsequent, related stories by the same journalist—one published July 27, the other August 27.

But the link in that original June 26 post was fake. Corynne McSherry never wrote such an article, and the quote was bogus. 

Interestingly, we noted a similar issue with a June 13 post by the same journalist, in which he cited work by EFF Director of Cybersecurity Eva Galperin; this quote included the phrase “get-out-of-jail-free card” too. 

Screenshot of the June 13 WinBuzzer article citing Eva Galperin

Again, the link he inserted leads nowhere, because Eva Galperin never wrote any such blog post or white paper. 

When EFF reached out, the journalist—WinBuzzer founder and editor-in-chief Markus Kasanmascheff—acknowledged via email that the quotes were bogus. 

“This indeed must be a case of AI slop. We are using AI tools for research/source analysis/citations. I sincerely apologize for that and this is not the content quality we are aiming for,” he wrote. “I myself have noticed that in the particular case of the EFF for whatever reason non-existing quotes are manufactured. This usually does not happen and I have taken the necessary measures to avoid this in the future. Every single citation and source mention must always be double checked. I have been doing this already but obviously not to the required level. 

“I am actually manually editing each article and using AI for some helping tasks. I must have relied too much on it,” he added. 

AI slop abounds 

It’s not an isolated incident. Media companies large and small are using AI to generate news content because it’s cheaper than paying journalists’ salaries, but those savings can come at the cost of the outlets’ reputations. 

The U.K.’s Press Gazette reported last month that Wired and Business Insider had to remove news features written by one freelance journalist over concerns that the articles were likely AI-generated works of fiction: “Most of the published stories contained case studies of named people whose details Press Gazette was unable to verify online, casting doubt on whether any of the quotes or facts contained in the articles are real.” 

And back in May, the Chicago Sun-Times had to apologize after publishing an AI-generated list of books that would make good summer reads—with 10 of the 15 recommended book descriptions and titles found to be “false, or invented out of whole cloth.” 

As journalist Peter Sterne wrote for Nieman Lab in 2022: 

Another potential risk of relying on large language models to write news articles is the potential for the AI to insert fake quotes. Since the AI is not bound by the same ethical standards as a human journalist, it may include quotes from sources that do not actually exist, or even attribute fake quotes to real people. This could lead to false or misleading reporting, which could damage the credibility of the news organization. It will be important for journalists and newsrooms to carefully fact check any articles written with the help of AI, to ensure the accuracy and integrity of their reporting. 

(Or did he write that? Sterne disclosed in that article that he used OpenAI’s ChatGPT-3 to generate that paragraph, ironically enough.) 

The Radio Television Digital News Association issued guidelines a few years ago for the use of AI in journalism, and the Associated Press is among many outlets that have developed guidelines of their own. The Poynter Institute offers a template for developing such policies.  

Nonetheless, some journalists or media outlets have been caught using AI to generate stories including fake quotes; for example, the Associated Press reported last year that a Wyoming newspaper reporter had filed at least seven stories that included AI-generated quotations from six people.  

WinBuzzer wasn’t the only outlet to falsely quote EFF this year. An April 19 article in Wander contained another bogus quotation from Eva Galperin: 

April 19 Wander clipping with fake quote from Eva Galperin

An email to the outlet demanding the article’s retraction went unanswered. 

In another case, WebProNews published a July 24 article quoting Eva Galperin under the headline “Risika Data Breach Exposes 100M Swedish Records to Fraud Risks,” but Eva confirmed she’d never spoken with them or given that quotation to anyone. The article no longer seems to exist on the outlet’s own website, but it was captured by the Internet Archive’s Wayback Machine.

Screenshot of the July 24, 2025 WebProNews article, preserved by the Wayback Machine

A request for comment made through WebProNews’ “Contact Us” page went unanswered, and the outlet did it again on September 2, this time misattributing a statement to Corynne McSherry: 

Screenshot of the September 2, 2025 WebProNews article misattributing a statement to Corynne McSherry

No such article in The Verge seems to exist, and the statement is not at all in line with EFF’s stance. 

Our most egregious example 

The top prize for audacious falsity goes to a June 18 article in the Arabian Post, since removed from the site after we flagged it to an editor. The Arabian Post is part of the Hyphen Digital Network, which describes itself as being “at the forefront of AI innovation” and offering “software solutions that streamline workflows to focus on what matters most: insightful storytelling.” The article in question included this passage: 

Privacy advocate Linh Nguyen from the Electronic Frontier Foundation remarked that community monitoring tools are playing a civic role, though she warned of the potential for misinformation. “Crowdsourced neighbourhood policing walks a thin line—useful in forcing transparency, but also vulnerable to misidentification and fear-mongering,” she noted in a discussion on digital civil rights. 

Screenshot of the Arabian Post article, captured via Muck Rack

Nobody at EFF recalls anyone named Linh Nguyen ever having worked here, nor have we been able to find anyone by that name who works in the digital privacy sector. So not only was the quotation fake, but apparently the purported source was, too.  

Now, EFF is all about having our words spread far and wide. Per our copyright policy, any and all original material on the EFF website may be freely distributed at will under the Creative Commons Attribution 4.0 International License (CC-BY), unless otherwise noted. 

But we don't want AI and/or disreputable media outlets making up words for us. False quotations that misstate our positions damage the trust that the public and more reputable media outlets have in us. 

If you're worried about this (and rightfully so), the best thing a news consumer can do is invest a little time and energy to learn how to discern the real from the fake. It’s unfortunate that it's the public’s burden to put in this much effort, but while we're adjusting to new tools and a new normal, a little effort now can go a long way.  

As we’ve noted before in the context of election misinformation, the nonprofit journalism organization ProPublica has published a handy guide about how to tell if what you’re reading is accurate or “fake news.” And the International Federation of Library Associations and Institutions infographic on How to Spot Fake News is a quick and easy-to-read reference you can share with friends: 

The International Federation of Library Associations and Institutions’ “How to Spot Fake News” infographic

Josh Richman

Decoding Meta's Advertising Policies for Abortion Content

1 week 4 days ago

This is the seventh installment in a blog series documenting EFF's findings from the Stop Censoring Abortion campaign. You can read additional posts here. 

For users hoping to promote or boost an abortion-related post on Meta platforms, the Community Standards are just step one. While the Community Standards apply to all posts, paid posts and advertisements must also comply with Meta's Advertising Standards. It’s easy to understand why Meta places extra requirements on paid content. In fact, their “advertising policy principles” outline several important and laudable goals, including promoting transparency and protecting users from scams, fraud, and unsafe and discriminatory practices. 

But additional standards bring additional content moderation, and with that comes increased potential for user confusion and moderation errors. Meta’s ad policies, like its enforcement policies, are vague on a number of important questions. Because of this, it’s no surprise that Meta's ad policies repeatedly came up as we reviewed our Stop Censoring Abortion submissions. 

There are two important things to understand about these ad policies. First, the ad policies do indeed impose stricter rules on content about abortion—and specifically medication abortion—than Meta’s Community Standards do. To help users better understand what is and isn’t allowed, we took a closer look at the policies and what Meta has said about them. 

Second, despite these requirements, the ad policies do not categorically block abortion-related posts from being promoted as ads. In other words, while Meta’s ad policies introduce extra hurdles, they should not, in theory, be a complete barrier to promoting abortion-related posts as boosted content. Still, our analysis revealed that Meta is falling short in several areas. 

What’s Allowed Under the Drugs and Pharmaceuticals Policy? 

When EFF asked Meta about potential ad policy violations, the company first pointed to its Drugs and Pharmaceuticals policy. In the abortion care context, this policy applies to paid content specifically about medication abortion and use of abortion pills. Ads promoting these and other prescription drugs are permitted, but there are additional requirements: 

  • To reduce risks to consumers, Meta requires advertisers to prove they’re appropriately licensed and get prior authorization from Meta.  
  • Authorization is limited to online pharmacies, telehealth providers, and pharmaceutical manufacturers.  
  • The ads also must only target people 18 and older, and only in the countries in which the user is licensed.  

Understanding what counts as “promoting prescription drugs” is where things get murky. Crucially, the written policy states that advertisers do not need authorization to run ads that “educate, advocate or give public service announcements related to prescription drugs” or that “promote telehealth services generally.” This should, in theory, leave a critical opening for abortion advocates focused on education and advocacy rather than direct prescription drug sales. 

But Meta told EFF that advertisers “must obtain authorization to post ads discussing medical efficacy, legality, accessibility, affordability, and scientific merits and restrict these ads to adults aged 18 or older.” Yet many of these topics—medical efficacy, legality, accessibility—are precisely what educational content and advocacy often address. Where’s the line? This vagueness makes it difficult for abortion pill advocates to understand what’s actually permitted. 

What’s Allowed Under the Social Issues Policy?  

Meta also told EFF that its Ads about Social Issues, Elections or Politics policy may apply to a range of abortion-related content. Under this policy, advertisers within certain countries—including the U.S.—must meet several requirements before running ads about certain “social issues.” Requirements include: 

  • Completing Meta’s social issues authorization process; 
  • Including a verified "Paid for by" disclaimer on the ad; and 
  • Complying with all applicable laws and regulations. 

While certain news publishers are exempt from the policy, it otherwise applies to a wide range of accounts, including activists, brands, non-profit groups and political organizations. 

Meta defines “social issues” as “sensitive topics that are heavily debated, may influence the outcome of an election or result in/relate to existing or proposed legislation.” What falls under this definition differs by country, and Meta provides country-specific topics lists and examples. In the U.S. and several other countries, ads that include “discussion, debate, or advocacy for or against...abortion services and pro-choice/pro-life advocacy” qualify as social issues ads under the “Civil and Social Rights” category.

Confusingly, Meta differentiates this from ads that primarily sell a product or promote a service, which do not require authorization or disclaimers, even if the ad secondarily includes advocacy for an issue. For instance, according to Meta's examples, an ad that says, “How can we address systemic racism?” counts as a social issues ad and requires authorization and disclaimers. On the other hand, an ad that says, “We have over 100 newly-published books about systemic racism and Black History now on sale” primarily promotes a product, and would not require authorization and disclaimers. But even with Meta's examples, the line is still blurry. This vagueness invites confusion and content moderation errors.

What About the Health and Wellness Policy? 

Oddly, Meta never specifically identified its Health and Wellness ad policy to EFF, though the policy is directly relevant to abortion-related paid content. This policy addresses ads about reproductive health and family planning services, and requires ads regarding “abortion medical consultation and related services” to be targeted at users 18 and older. It also expressly states that for paid content involving “[r]eproductive health and wellness drugs or treatments that require prescription,” accounts must comply with both this policy and the Drugs and Pharmaceuticals policy. 

This means abortion advocates must navigate the Drugs and Pharmaceuticals policy, the Social Issues policy, and the Health and Wellness policy—each with its own requirements and authorization processes. That Meta didn’t mention this highly relevant policy when asked about abortion advertising underscores how confusingly dispersed these rules are. 

Like the Drugs policy, the Health and Wellness policy contains an important education exception for abortion advocates: The age-targeting requirements do not apply to “[e]ducational material or information about family planning services without any direct promotion or facilitation of the services.”  

When Content Moderation Makes Mistakes 

Meta's complex policies create fertile ground for automated moderation errors. Our Stop Censoring Abortion survey submissions revealed that Meta's systems repeatedly misidentified educational abortion content as Community Standards violations. The same over-moderation problems are also a risk in the advertising context.  

On top of that, content moderation errors even on unpaid posts can trigger advertising restrictions and penalties. Meta's advertising restrictions policy states that Community Standards violations can result in restricted advertising features or complete advertising bans. This creates a compounding problem when educational content about abortion is wrongly flagged. Abortion advocates could face a double penalty: first their content is removed, then their ability to advertise is restricted. 

This may be, in part, what happened to Red River Women's Clinic, a Minnesota abortion clinic we wrote about earlier in this series. When its account was incorrectly suspended for violating the “Community Standards on drugs,” the clinic appealed and eventually reached out to a contact at Meta. When Meta finally removed the incorrect flag and restored the account, Red River received a message informing them they were no longer out of compliance with the advertising restrictions policy. 

Screenshot submitted by Red River Women's Clinic to EFF

How Meta Can Improve 

Our review of the ad policies and survey submissions showed that there is room for improvement in how Meta handles abortion-related advertising. 

First, Meta should clarify what is permitted without prior authorization under the Drugs and Pharmaceuticals policy. As noted above, the policies say advertisers do not need authorization to “educate, advocate or give public service announcements,” but Meta told EFF authorization is needed to promote posts discussing “medical efficacy, legality, accessibility, affordability, and scientific merits.” Users should be able to more easily determine what content falls on each side of that line.  

Second, Meta should clarify when its Social Issues policy applies. Does discussing abortion at all trigger its application? Meta says the policy excludes posts primarily advertising a service, yet this is not what survey respondent Lynsey Bourke experienced. She runs the Instagram account Rouge Doulas, a global abortion support collective and doula training school. Rouge Doulas had a paid post removed under this very policy for advertising something that is clearly a service: its doula training program called “Rouge Abortion Doula School.” The policy’s current ambiguity makes it difficult for advocates to create compliant content with confidence.

Third, and as EFF has previously argued, Meta should ensure its automated system is not over-moderating. Meta must also provide a meaningful appeals process for when errors inevitably occur. Automated systems are blunt tools and are bound to make mistakes on complex topics like abortion. But simply using an image of a pill on an educational post shouldn’t automatically trigger takedowns. Improving automated moderation will help correct the cascading effect of incorrect Community Standards flags triggering advertising restrictions. 

With clearer policies, better moderation, and a commitment to transparency, Meta can make it easier for accounts to share and boost vital reproductive health information. 

This is the seventh post in our blog series documenting the findings from our Stop Censoring Abortion campaign. Read more at https://www.eff.org/pages/stop-censoring-abortion   

Affected by unjust censorship? Share your story using the hashtag #StopCensoringAbortion. Amplify censored posts and accounts, share screenshots of removals and platform messages—together, we can demonstrate how these policies harm real people. 

Lisa Femia

Protecting Access to the Law—and Beneficial Uses of AI

1 week 4 days ago

As the first copyright cases concerning AI reach appeals courts, EFF wants to protect important, beneficial uses of this technology—including AI for legal research. That’s why we weighed in on the long-running case of Thomson Reuters v. ROSS Intelligence. This case raises at least two important issues: the use of (possibly) copyrighted material to train a machine learning AI system, and public access to legal texts.  

ROSS Intelligence was a legal research startup that built an AI-based tool for locating judges’ written opinions based on natural language queries—a competitor to ubiquitous legal research platforms like Lexis and Thomson Reuters’ Westlaw. To build its tool, ROSS hired another firm to read through thousands of the “West headnotes” that Thomson Reuters adds to the legal decisions it publishes, paraphrasing the individual legal conclusions (what lawyers call “holdings”) that the headnotes identified. ROSS used those paraphrases to train its tool. Importantly, the ROSS tool didn’t output any West headnotes, or even the paraphrases of those headnotes—it simply directed the user to the original judges’ decisions. Still, Thomson sued ROSS for copyright infringement, arguing that using the headnotes without permission was illegal.  

Early decisions in the suit were encouraging. EFF wrote about how the court allowed ROSS to bring an antitrust counterclaim against Thomson Reuters, letting them try to prove that Thomson was abusing monopoly power. And the trial judge initially ruled that ROSS’s use of the West headnotes was fair use under copyright law. 

The case then took turns for the worse. ROSS was unable to prove its antitrust claim. The trial judge issued a new opinion reversing his earlier decision and finding that ROSS’s use was not fair but rather infringed Thomson’s copyrights. And in the meantime, ROSS had gone out of business (though it continues to defend itself in court).  

The court’s new decision on copyright was particularly worrisome. It ruled that West headnotes—a few lines of text copying or summarizing a single legal conclusion from a judge’s written opinion—could be copyrighted, and that using them to train the ROSS tool was not fair use, in part because ROSS was a competitor to Thomson Reuters. And the court rejected ROSS’s attempt to avoid any illegal copying by using a “clean room” procedure often used in software development. The decision also threatens to limit the public’s access to legal texts. 

EFF weighed in with an amicus brief joined by the American Library Association, the Association of Research Libraries, the Internet Archive, Public Knowledge, and Public.Resource.Org. We argued that West headnotes are not copyrightable in the first place, since they simply restate individual points from judges’ opinions with no meaningful creative contributions. And even if copyright does attach to the headnotes, we argued, the source material is entirely factual statements about what the law is, and West’s contribution was minimal, so fair use should have tipped in ROSS’s favor. The trial judge had found that the factual nature of the headnotes favored ROSS, but dismissed this factor as unimportant, effectively writing it out of the law. 

This case is one of the first to touch on copyright and AI, and is likely to influence many of the other cases that are already pending (with more being filed all the time). That’s why we’re trying to help the appeals court get this one right. The law should encourage the creation of AI tools to digest and identify facts for use by researchers, including facts about the law. 

Mitch Stoltz

Towards the 10th Summit of the Americas: Concerns and Recommendations from Civil Society

1 week 4 days ago

This post is an adapted version of the article originally published at Silla Vacía 

Heads of state and governments of the Americas will gather this December at the Tenth Summit of the Americas in the Dominican Republic to discuss challenges and opportunities facing the region’s nations. As part of the Summit of the Americas’ Process, which had its first meeting in 1994, the theme of this year’s summit is "Building a Secure and Sustainable Hemisphere with Shared Prosperity.”  

More than twenty civil society organizations, including EFF, released a joint contribution ahead of the summit addressing the intersection between technology and human rights. Although the meeting's concept paper is silent about the role of digital technologies in the scope of this year's summit, the joint contribution stresses that the development and use of technologies is a cross-cutting issue and will likely be integrated into policies and actions agreed upon at the meeting.  
 
Human Security, Its Core Dimensions, and Digital Technologies 
 
The concept paper indicates that people in the Americas, like the rest of the world, are living in times of uncertainty and geopolitical, socioeconomic, and environmental challenges that require urgent actions to ensure human security in multiple dimensions. It identifies four key areas: citizen security, food security, energy security, and water security. 
 
The potential of digital technologies cuts across these areas of concern and will very likely be considered in the measures, plans, and policies that states take up in the context of the summit, both at the national level and through regional cooperation. Yet, when harnessing the potential of emerging technologies, their challenges also surface. For example, AI algorithms can help predict demand peaks and manage energy flows in real time on power grids, but the infrastructure required for the growing and massive operation of AI systems itself poses challenges to energy security. 
 
In Latin America, the imperative of safeguarding rights in the face of already documented risks and harmful impacts stands out particularly in citizen security. The abuse of surveillance powers, enhanced by digital technologies, is a recurring and widespread problem in the region.  

It is intertwined with deep historical roots of a culture of secrecy and permissiveness that obstructs implementing robust privacy safeguards, effective independent oversight, and adequate remedies for violations. The proposal in the concept paper for creating a Hemispheric Platform of Action for Citizen and Community Security cannot ignore—and above all, must not reinforce—these problems. 
 
It is crucial that the notion of security embedded in the Tenth Summit's focus on human security be based on human development, the protection of rights, and the promotion of social well-being, especially for groups that have historically faced discrimination. It is also essential that it moves away from securitization and militarization, which have been used for social control, silencing dissent, harassing human rights defenders and community leaders, and restricting the rights and guarantees of migrants and people in situations of mobility. 
 
Toward Regional Commitments Anchored in Human Rights 
 
In light of these concerns, the joint contribution signed by EFF, Derechos Digitales, Wikimedia Foundation, CELE, ARTICLE 19 – Office for Mexico and Central America, among other civil society organizations, addresses the following: 

-- The importance of strengthening the digital civic space, which requires robust digital infrastructure and policies for connectivity and digital inclusion, as well as civic participation and transparency in the formulation of public policies. 

-- Challenges posed by the growing surveillance capabilities of states in the region through the increasing adoption of ever more intrusive technologies and practices without necessary safeguards.  

-- State obligations established under the Inter-American Human Rights System and key standards affirmed by the Inter-American Court in the case of Members of the Jose Alvear Restrepo Lawyers Collective (CAJAR) v. Colombia.  

-- A perspective on state digitalization and innovation centered on human rights, based on thorough analysis of current problems and gaps and their detrimental impacts on people. The insufficiency or absence of meaningful mechanisms for public participation, transparency, and evaluation are striking features of various experiences across countries in the Americas.  

Finally, the contribution makes recommendations for regional cooperation, promoting shared solutions and joint efforts at the regional level anchored in human rights, justice, and inclusion. 

We hope the joint contribution reinforces a human rights-based perspective across the debates and agreements at the summit. When security-related abuses facilitated by digital technologies abound, regional cooperation towards shared prosperity must take these risks into account and put justice and people's well-being at the center of any unfolding initiatives. 

Veridiana Alimonti

EFF Urges Virginia Court of Appeals to Require Search Warrants to Access ALPR Databases

1 week 5 days ago

This post was co-authored by EFF legal intern Olivia Miller.

For most Americans, driving is a part of everyday life. Practically speaking, many of us drive to work, school, play, and anywhere in between. Not only do we visit places that give insights into our personal lives, but we sometimes use vehicles as a mode of displaying information about our political beliefs, socioeconomic status, and other intimate details.

All of this personal activity can be tracked and identified through Automatic License Plate Reader (ALPR) data—a popular surveillance tool used by law enforcement agencies across the country. That’s why, in an amicus brief filed with the Virginia Court of Appeals, EFF, the ACLU of Virginia, and NACDL urged the court to require police to seek a warrant before searching ALPR data.

In Commonwealth v. Church, a police officer in Norfolk, Virginia searched license plate data without a warrant—not to prove that defendant Ronnie Church was at the scene of the crime, but merely to try to show he had a “guilty mind.” The lower court, in a one-page ruling relying on Commonwealth v. Bell, held this warrantless search violated the Fourth Amendment and suppressed the ALPR evidence. We argued the appellate court should uphold this decision.

Like the cellphone location data the Supreme Court protected in Carpenter v. United States, ALPR data threatens people’s privacy because it is collected indiscriminately over time and can provide police with a detailed picture of a person’s movements. ALPR data includes photos of license plates, vehicle make and model, any distinctive features of the vehicle, and precise time and location information. Once an ALPR logs a car’s data, the information is uploaded to the cloud and made accessible to law enforcement agencies at the local, state, and federal level—creating a near real-time tracking tool that can follow individuals across vast distances.

Think police only use ALPRs to track suspected criminals? Think again. ALPRs are ubiquitous; every car traveling into the camera’s view generates a detailed dataset, regardless of any suspected criminal activity. In fact, a survey of 173 law enforcement agencies employing ALPRs nationwide revealed that 99.5% of scans belonged to people who had no association to crime.

The city of Norfolk, Virginia, is home to over 170 ALPR cameras operated by Flock, a surveillance company that maintains over 83,000 ALPRs nationwide. The resulting surveillance network is so large that Norfolk’s police chief suggested “it would be difficult to drive any distance and not be recorded by one.”

Recent and near-horizon advancements in Flock’s products will continue to threaten our privacy and further the surveillance state. For example, Flock’s ALPR data has been used for immigration raids, to track individuals seeking abortion-related care, to conduct fishing expeditions, and to identify relationships between people who may be traveling together but in different cars. With the help of artificial intelligence, ALPR databases could be aggregated with other information from data breaches and data brokers, to create “people lookup tools.” Even public safety advocates and law enforcement, like the International Association of Chiefs of Police, have warned that ALPR tech creates a risk “that individuals will become more cautious in their exercise of their protected rights of expression, protest, association, political participation because they consider themselves under constant surveillance.”  

This is why a warrant requirement for ALPR data is so important. As the Virginia trial court previously found in Bell, prolonged tracking of public movements with surveillance invades people’s reasonable expectation of privacy in the entirety of their movements. Recent Fourth Amendment jurisprudence, including Carpenter and Leaders of a Beautiful Struggle from the federal Fourth Circuit Court of Appeals, favors a warrant requirement as well. Like the technologies at issue in those cases, ALPRs give police the ability to chronicle movements in a “detailed, encyclopedic” record, akin to “attaching an ankle monitor to every person in the city.”  

The Virginia Court of Appeals has a chance to draw a clear line on warrantless ALPR surveillance, and to tell Norfolk PD what the Fourth Amendment already says: come back with a warrant.

Jennifer Lynch

Chat Control Is Back on the Menu in the EU. It Still Must Be Stopped

1 week 5 days ago

The European Union Council is once again debating its controversial message scanning proposal, aka “Chat Control,” that would lead to the scanning of private conversations of billions of people.

Chat Control, which EFF has strongly opposed since it was first introduced in 2022, keeps being mildly tweaked and pushed by one Council presidency after another.

Chat Control is a dangerous legislative proposal that would make it mandatory for service providers, including end-to-end encrypted communication and storage services, to scan all communications and files to detect “abusive material.” This would happen through a method called client-side scanning, which scans for specific content on a device before it’s sent. In practice, Chat Control is chat surveillance: it works by indiscriminately monitoring everything on a device. In a memo, the Danish Presidency claimed this does not break end-to-end encryption.

This is absurd.

We have written extensively that client-side scanning fundamentally undermines end-to-end encryption, and obliterates our right to private spaces. If the government has access to one of the “ends” of an end-to-end encrypted communication, that communication is no longer safe and secure. Pursuing this approach is dangerous for everyone, but is especially perilous for journalists, whistleblowers, activists, lawyers, and human rights workers.

If passed, Chat Control would undermine the privacy promises of end-to-end encrypted communication tools, like Signal and WhatsApp. The proposal is so dangerous that Signal has stated it would pull its app out of the EU if Chat Control is passed. Proponents even seem to realize how dangerous this is, because state communications are exempt from this scanning in the latest compromise proposal.

This doesn’t just affect people in the EU; it affects everyone around the world, including in the United States. If platforms decide to stay in the EU, they would be forced to scan the conversations of everyone in the EU. If you’re not in the EU, but you chat with someone who is, then your privacy is compromised too. Passing this proposal would pave the way for authoritarian and tyrannical governments around the world to follow suit with their own demands for access to encrypted communication apps.

Even if you take it in good faith that the government would never do anything wrong with this power, events like Salt Typhoon show there’s no such thing as a system that’s only for the “good guys.”

Despite strong opposition, Denmark is pushing forward and taking its current proposal to the Justice and Home Affairs Council meeting on October 14th.

We urge the Danish Presidency to drop its push for scanning our private communication and consider fundamental rights concerns. Any draft that compromises end-to-end encryption and permits scanning of our private communication should be blocked or voted down.

Phones and laptops must work for the users who own them, not act as “bugs in our pockets” in the service of governments, foreign or domestic. The mass scanning of everything on our devices is invasive, untenable, and must be rejected.


Thorin Klosowski

After Years Behind Bars, Alaa Is Free at Last

1 week 5 days ago

Alaa Abd El Fattah is finally free and at home with his family. On September 22, it was announced that Egyptian President Abdel Fattah al-Sisi had issued a pardon for Alaa’s release after six years in prison. One day later, the BBC shared video of Alaa dancing with his family in their Cairo home and hugging his mother Laila and sister Sanaa, as well as other visitors. 

Alaa's sister, Mona Seif, posted on X: "An exceptionally kind day. Alaa is free."

Alaa has spent most of the last decade behind bars, punished for little more than his words. In June 2014, Egypt accused him of violating its protest law and attacking a police officer. He was convicted in absentia and sentenced to fifteen years in prison, after being prohibited from entering the courthouse. Following an appeal, Alaa was granted a retrial, and sentenced in February 2015 to five years in prison. In 2019, he was finally released, first into police custody then to his family. As part of his parole, he was told he would have to spend every night of the next five years at a police station, but six months later—on September 29, 2019—Alaa was re-arrested in a massive sweep of activists and charged with spreading false news and belonging to a terrorist organisation after sharing a Facebook post about torture in Egypt.

He was eventually sentenced to another five years in prison. Despite that sentence effectively ending on September 29, 2024, one year ago today, Egyptian authorities continued his detention, stating that he would be released in January 2027—violating both international legal norms and Egypt’s own domestic law. As Amnesty International reported, Alaa faced inhumane conditions during his imprisonment, “including denial of access to lawyers, consular visits, fresh air, and sunlight,” and his family repeatedly spoke of concerns about his health, particularly during periods in which he engaged in hunger strike.

When Egyptian authorities failed to release Alaa last year, his mother, Laila Soueif, launched a hunger strike. Her action stretched to an astonishing 287 days, during which she was hospitalized twice in London and nearly lost her life. She continued until July of this year, when she finally ended the strike following direct commitments from UK officials that Alaa would be freed.

Throughout this time, a broad coalition, including EFF, rallied around Alaa: international human rights organizations, senior UK parliamentarians, former British Ambassador John Casson, and fellow former political prisoner Nazanin Zaghari-Ratcliffe all lent their voices. Celebrities joined the call, while the UN Working Group on Arbitrary Detention declared his imprisonment unlawful and demanded his release. This groundswell of solidarity was decisive in securing his release.

Alaa’s release is an extraordinary relief for his family and all who have campaigned on his behalf. EFF wholeheartedly celebrates Alaa’s freedom and reunification with his family.

But we must remain vigilant. Alaa must be allowed to travel to the UK to be reunited with his son Khaled, who currently lives with his mother and attends school there. Furthermore, we continue to press for the release of those who remain imprisoned for nothing more than exercising their right to speak.

Electronic Frontier Foundation

Fair Use Protects Everyone—Even the Disney Corporation

2 weeks 1 day ago

Jimmy Kimmel has been in the news a lot recently, which means the ongoing lawsuit against him by perennial late-night punching bag/convicted fraudster/former congressman George Santos flew under the radar. But what happened in that case is an essential illustration of the limits of both copyright law and the “fine print” terms of service on websites and apps. 

What happened was this: Kimmel and his staff saw that Santos was on Cameo, which allows people to purchase short videos from various public figures with requested language. Usually it’s something like “happy birthday” or “happy retirement.” In the case of Kimmel and his writers, they set out to see if there was anything they couldn’t get Santos to say on Cameo. For this to work, they obviously didn’t disclose that it was Jimmy Kimmel Live! asking for the videos.  

Santos did not like the segment, called “Will Santos Say It?”, which aired clips of these videos. He sued Kimmel, ABC, and ABC’s parent company, Disney. He alleged both copyright infringement and breach of contract—the contract in this case being Cameo’s terms of service. He lost on all counts, twice: his case was dismissed at the district court level, and then that dismissal was upheld by an appeals court. 

On the copyright claim, Kimmel and Disney argued and won on the grounds of fair use. The court cited precedent that fair use excuses what might be strictly seen as infringement if such a finding would “stifle the very creativity” that copyright is meant to promote. In this case, the use of the videos was part of the ongoing commentary by Jimmy Kimmel Live! around whether there was anything Santos wouldn’t say for money. Santos tried to argue that since this was their purpose from the outset, the use wasn’t transformative. Which... isn’t how it works. Santos’ purpose was, presumably, to fulfill a request sent through the app. The show’s purpose was to collect enough examples of a behavior to show a pattern and comment on it.  

Santos tried to say that the show’s failure to disclose its reason for the requests invalidated the fair use argument because it was “deceptive.” But the court found that the record didn’t show that the deception was designed to replace the market for Santos’s Cameos. It bears repeating: commenting on the quality of a product or the person making it is not legally actionable interference with a business. If someone tells you that a movie, book, or, yes, Cameo isn’t worth anything because of its ubiquity or quality and shows you examples, that’s not a deceptive business practice. In fact, undercover quality checks and reviews are fairly standard practices! Is this a funnier and more entertaining example than a restaurant review? Yes. That doesn’t make it unprotected by fair use.  

It’s nice to have this case as a reminder that, despite everything the major studios often argue, fair use protects everyone, including them. Don’t hold your breath on them remembering this the next time someone tries to make a YouTube review of a Hollywood movie using clips.  

Another claim from this case that is less obvious but just as important involves the Cameo terms of service. We often see contracts being used to restrict people’s fair use rights. Cameo offers different kinds of videos for purchase. The most well-known kind, the “happy birthdays” and so on, comes with a personal use license. They also offer a “commercial” use license, presumably for those who want to use the videos to generate revenue, as with an ad or paid endorsement. However, in this case, the court found that the terms of service are a contract between a customer and Cameo, not between the customer and the video maker. Cameo’s terms of service explicitly lay out when their terms apply to the person selling a video, and they don’t create a situation where Santos can use those terms to sue Jimmy Kimmel Live! According to the court, the terms don’t even imply a shared understanding and contract between the two parties.  

It's so rare to find a situation where the wall of text that most terms of service consist of actually helps protect free expression; it’s a pleasant surprise to see it here.  

In general, we at EFF hate it when these kinds of contracts—you know the ones, where you hit accept after scrolling for ages just so you can use the app—are used to constrain users’ rights. Fair use is supposed to protect us all from overly strict interpretations of copyright law, but abusive terms of service can erode those rights. We’ll keep fighting for those rights and the people who use them, even if the one exercising fair use is Disney.  

Katharine Trendacosta

The Abortion Hotline Meta Wants to Go Dark

2 weeks 1 day ago

This is the sixth installment in a blog series documenting EFF's findings from the Stop Censoring Abortion campaign. You can read additional posts here. 

When we started our Stop Censoring Abortion campaign, we heard from activists, advocacy organizations, researchers, and even healthcare providers who had all experienced having abortion-related content removed or suppressed on social media. One of the submissions we received was from an organization called the Miscarriage and Abortion Hotline.

The Miscarriage and Abortion Hotline (M+A Hotline), formed in 2019, is staffed by a team of healthcare providers who wanted to provide free and confidential “expert advice on various aspects of miscarriage and abortion, ensuring individuals receive accurate information and compassionate support throughout their journey.” By 2022, the hotline was receiving between 25 and 45 calls and texts a day. 

Like many reproductive health, rights, and justice groups, the M+A Hotline is active on social media, sharing posts that affirm the voices and experiences of abortion seekers, assert the safety of medication abortion, and spread the word about the expert support that the hotline offers. However, in late March of this year, the M+A Hotline’s Instagram suddenly had numerous posts taken down and was hit with restrictions that prevented the account from starting or joining livestreams or creating ads until June 25, 2025.

Screenshots provided to EFF from M+A Hotline

The reason behind the restrictions and takedowns, according to Meta, was that the M+A Hotline’s Instagram account failed to follow Meta’s guidelines on the sale of illegal or regulated goods. The “guidelines” refer to Meta’s Community Standards which dictate the types of content that are allowed on Facebook, Instagram, Messenger, and Threads. But according to Meta, it is not against these Community Standards to provide guidance on how to legally access pharmaceutical drugs, and this is treated differently than an offer to buy, sell, or trade pharmaceuticals (though there are additional compliance requirements for paid ads). 

Under these rules, the M+A Hotline’s content should have been fine: The Hotline does not sell medication abortion and simply educates on the efficacy and safety of medication abortion while providing guidance on how abortion seekers could legally access the pills. Despite this, around 10 posts from the account were removed by Instagram, none of which were ads.


In a letter to Amnesty International in February 2024, Meta publicly clarified that organic content on its platforms that educates users about medication abortion is not in violation of the Community Standards. The company claims that the policies are “based on feedback from people and the advice of experts in fields like technology, public safety and human rights.” The Community Standards are thorough and there are sections covering everything from bullying and harassment to account integrity to restricted goods and services. Notably, within the several webpages that make up the Community Standards, there are very few mentions of the words “abortion” and “reproductive health.” For how little the topic is mentioned in these Standards, content about abortion seems to face extremely high scrutiny from Meta.

Screenshots provided to EFF from M+A Hotline

Not only were posts removed, but even after further review, many were not restored. The M+A Hotline was once again told that their content violates the Community Standards on drugs. While it’s understandable that moderation systems may make mistakes, it’s unacceptable for those mistakes to be repeated consistently with little transparency or direct communication with the users whose speech is being restricted and erased. This problem is only made worse by lack of helpful recourse. As seen here, even when users request review and identify these moderation errors, Meta may still refuse to restore posts that are permitted under the Community Standards.

The removal of the M+A Hotline’s educational content demonstrates that Meta must be more accurate, consistent, and transparent in the enforcement of their Community Standards, especially in regard to reproductive health information. Informing users that medical professionals are available to support those navigating a miscarriage or abortion is plainly not an attempt to buy or sell pharmaceutical drugs. Meta must clearly define, and then fairly enforce, what is and isn’t permitted under its Standards. This includes ensuring there is a meaningful way to quickly rectify any moderation errors through the review process. 

At a time when attacks on online access to information—and particularly abortion information—are intensifying, Meta must not exacerbate the problem by silencing healthcare providers and suppressing vital health information. We must all continue to fight back against online censorship.

 This is the sixth post in our blog series documenting the findings from our Stop Censoring Abortion campaign. Read more in the series: https://www.eff.org/pages/stop-censoring-abortion

Kenyatta Thomas

California: Tweet at Governor Newsom to Get A.B. 566 Signed Into Law

2 weeks 1 day ago

We need your help to make a common-sense bill into California law. Despite the fact that California has one of the nation’s most comprehensive data privacy laws, it’s not always easy for people to exercise those privacy rights. A.B. 566 intends to make it easy by directing browsers to give all their users the option to tell companies they don’t want personal information that’s collected about them on the internet to be sold or shared. Now, we just need Governor Gavin Newsom to sign it into law by October 13, 2025, and this toolkit will help us put on the pressure. Tweet at Gov. Gavin Newsom and help us get A.B. 566 signed into law!

First, pick your platform of choice. Reach Gov. Newsom at any of his social media handles:

Then, pick a message that resonates with you. Or, feel free to remix!

Sample Posts

  • It should be easy for Californians to exercise our rights under the California Consumer Privacy Act, but major internet browser companies are making it difficult for us to do that. @CAgovernor, sign AB 566 and give power to the consumers to protect their privacy!
  • We are living in a time of mass surveillance and tracking. Californian consumers should be able to easily control their privacy and AB 566 would make that possible. @CAgovernor, sign AB 566 and ensure that millions of Californians can opt out of the sale and sharing of their private information!
  • People seeking abortion care, immigrants, and LGBTQ+ people are at risk of bad actors using their online activity against them. @CAgovernor could sign AB 566 and protect the privacy of vulnerable communities and all Californians.
  • AB 566 gives Californians a practical way to use their right to opt-out of websites selling or sharing their private info. @CAgovernor can sign it and give consumers power over their privacy choices under the California Consumer Privacy Act.
  • Hey @CAgovernor! AB 566 makes it easy for Californians to tell companies what they want to happen with their own private information. Sign it and make the California Consumer Privacy Act more user-friendly!
  • Companies haven’t made it easy for Californians to tell companies not to sell or share their personal information. We need AB 566 so that browsers MUST give users the option to easily opt out of this data sharing. @CAgovernor, sign AB 566!
  • Major browsers have made it hard for Californians to opt out of the share and sale of their private info. Right now, consumers must individually opt out at every website they visit. AB 566 can change that by requiring browsers to create one single opt-out preference, but @CAgovernor MUST sign it!
  • It should be easy for Californians to opt out of the share and sale of their private info, such as health info, immigration status, and political affiliation, but browsers have made it difficult. @CAgovernor can sign AB 566 and give power to consumers to more easily opt out of this data sharing.
  • Right now, if a Californian wants to tell companies not to sell or share their info, they must go through the processes set up by each company, ONE BY ONE, to opt out of data sharing. AB 566 can remove that burden. @CAgovernor, sign AB 566 to empower consumers!
  • Industry groups who want to keep the scales tipped in favor of corporations who want to profit off the sale of our private info have lobbied heavily against AB 566, a bill that will make it easy for Californians to tell companies what they want to happen with their own info. @CAgovernor—sign it!
Kenyatta Thomas

Yes to California’s “No Robo Bosses Act”

2 weeks 2 days ago

California’s Governor should sign S.B. 7, a common-sense bill to end some of the harshest consequences of automated abuse at work. EFF is proud to join dozens of labor, digital rights, and other advocates in support of the “No Robo Bosses Act.”

Algorithmic decision-making is a growing threat to workers. Bosses are using AI to assess the body language and voice tone of job candidates. They’re using algorithms to predict when employees are organizing a union or planning to quit. They’re automating choices about who gets fired. And these employment algorithms often discriminate based on gender, race, and other protected statuses. Fortunately, many advocates are resisting.

What the Bill Does

S.B. 7 is a strong step in the right direction. It addresses “automated decision systems” (ADS) across the full landscape of employment. It applies to bosses in the private and government sectors, and it protects workers who are employees and contractors. It addresses all manner of employment decisions that involve automated decisionmaking, including hiring, wages, hours, duties, promotion, discipline, and termination. It covers bosses using ADS to assist or replace a person making a decision about another person.


The bill requires employers to be transparent when they rely on ADS. Before using it to make a decision about a job applicant or current worker, a boss must notify them about the use of ADS. The notice must be in a stand-alone, plain language communication. The notice to a current worker must disclose the types of decisions subject to ADS, and a boss cannot use an ADS for an undisclosed purpose. Further, the notice to a current worker must disclose information about how the ADS works, including what information goes in and how it arrives at its decision (such as whether some factors are weighed more heavily than others).

The bill provides some due process to current workers who face discipline or termination based on the ADS. A boss cannot fire or punish a worker based solely on ADS. Before a boss does so based primarily on ADS, they must ensure a person reviews both the ADS output and other relevant information. A boss must also notify the affected worker of such use of ADS. A boss cannot use customer ratings as the only or primary input for such decisions. And every worker can obtain a copy of the most recent year of their own data that their boss might use as ADS input to punish or fire them.

Other provisions of the bill will further protect workers. A boss must maintain an updated list of all ADS it currently uses. A boss cannot use ADS to violate the law, to infer whether a worker is a member of a protected class, or to target a worker for exercising their labor and other rights. Further, a boss cannot retaliate against a worker who exercises their rights under this new law. Local laws are not preempted, so our cities and counties are free to enact additional protections.

Next Steps

The “No Robo Bosses Act” is a great start. And much more is needed, because many kinds of powerful institutions are using automated decision-making against us. Landlords use it to decide who gets a home. Insurance companies use it to decide who gets health care. ICE uses it to decide who must submit to location tracking by electronic monitoring.

EFF has long been fighting such practices. We believe technology should improve everyone’s lives, not subject them to abuse and discrimination. We hope you will join us.

Adam Schwartz

Meta is Removing Abortion Advocates' Accounts Without Warning

2 weeks 3 days ago

This is the fifth installment in a blog series documenting EFF's findings from the Stop Censoring Abortion campaign. You can read additional posts here. 

When the team at Women Help Women signed into Instagram last winter, they were met with a distressing surprise: without warning, Meta had disabled their account. The abortion advocacy non-profit organization found itself suddenly cut off from its tens of thousands of followers and with limited recourse. Meta claimed Women Help Women had violated its Community Standards on “guns, drugs, and other restricted goods,” but the organization told EFF it uses Instagram only to communicate about safe abortion practices, including sharing educational content and messages aimed at reducing stigma. Eventually, Women Help Women was able to restore its account—but only after launching a public campaign and receiving national news coverage.

Unfortunately, Women Help Women’s experience is not unique. Around a quarter of our Stop Censoring Abortion campaign submissions reported that their entire account or page had been disabled or taken down after sharing abortion information—primarily on Meta platforms. This troubling pattern indicates that the censorship crisis goes beyond content removal. Accounts providing crucial reproductive health information are disappearing, often without warning, cutting users off from their communities and followers entirely.

whw_screenshot.jpeg

What's worse, Meta appears to be imposing these negative account actions without clearly adhering to its own enforcement policies. Meta’s own Transparency Center stipulates that an account should receive multiple Community Standards violations or warnings before it is restricted or disabled. Yet many affected users told EFF they experienced negative account actions without any warning at all, or after only one alleged violation (many of which were incorrectly flagged, as we’ve explained elsewhere in this series). 

While Meta clearly has the right to remove accounts from its platforms, disabling or banning an account is an extreme measure. It completely silences a user, cutting off communication with their followers and preventing them from sharing any information, let alone abortion information. Because of this severity, Meta should be extremely careful to ensure fairness and accuracy when disabling or removing accounts. Rules governing account removal should be transparent and easy to understand, and Meta must enforce these policies consistently across different users and categories of content. But as our Stop Censoring Abortion results demonstrate, this isn't happening for many accounts sharing abortion information.  

Meta's Maze of Enforcement Policies 

If you navigate to Meta’s Transparency Center, you’ll find a page titled “How Meta enforces its policies.” This page contains a web of intersecting policies on when Meta will restrict accounts, disable accounts, and remove pages and groups. These policies overlap but don’t directly refer to each other, making it trickier for users to piece together how enforcement happens. 

At the heart of Meta's enforcement process is a strike system. Users receive strikes for posting content that violates Meta’s Community Standards. But not all Community Standards violations result in strikes, and whether Meta applies one depends on the “severity of the content” and the “context in which it was shared.” Meta provides little additional guidance on what violations are severe enough to amount to a strike or how context affects this assessment.  

According to Meta's Restricting Accounts policy, for most violations, 1 strike should only result in a warning—not any action against the account. How additional strikes affect an account differs between Facebook and Instagram (but Meta provides no specific guidance for Threads). Facebook relies on a progressive system, where additional strikes lead to increasing restrictions. Enforcement on Instagram is more opaque and leaves more to Meta’s discretion. Meta still counts strikes on Instagram, but it does not follow the same escalating structure of restrictions as it does on Facebook. 

Despite some vagueness in these policies, Meta is quite clear about one thing: On both Facebook and Instagram, an account should only be disabled or removed after “repeated” violations, warnings, or strikes. Meta states this multiple times throughout its enforcement policies. Its Disabling Accounts policy suggests that generally, an account needs to receive at least 5 strikes for Meta to disable or remove it from the platform. The only caveat is for severe violations, such as posting child sexual exploitation content or violating the dangerous individuals and organizations policy. In those extreme cases, Meta may disable an account after just one violation. 
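
Taken at face value, the enforcement ladder Meta describes is fairly mechanical. A simplified sketch of that logic, as we read the published policy, might look like the following; the function name, labels, and exact thresholds are our paraphrase for illustration, not Meta's actual code or precise rules.

```typescript
// Simplified model of the enforcement ladder described in Meta's
// Transparency Center, as we read it. All names and thresholds here
// are illustrative paraphrase, not Meta's actual implementation.
type Action = "warning" | "escalating-restriction" | "account-disabled";

function expectedAction(strikes: number, extremeViolation: boolean): Action {
  // Extreme violations (e.g., child exploitation or dangerous
  // organizations content) can disable an account after one violation.
  if (extremeViolation) return "account-disabled";
  // For most violations, a first strike should bring only a warning.
  if (strikes <= 1) return "warning";
  // Repeated strikes bring progressively tighter restrictions
  // (Facebook's ladder; Instagram is less clearly specified).
  if (strikes < 5) return "escalating-restriction";
  // Only repeated violations (roughly five or more strikes) should
  // lead to the account being disabled or removed.
  return "account-disabled";
}

// What many survey respondents described does not match this model:
// accounts disabled after a single flagged post and no prior warning.
console.log(expectedAction(1, false)); // "warning"
```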

Meta’s Practices Don’t Match Its Policies 

Our survey results detailed a different reality. Many survey respondents told EFF that Meta disabled or removed their account without warning and without indication that they had received repeated strikes. It’s important to note that Meta does not have a unique enforcement process for prescription drug or abortion-related content. When EFF asked Meta about this issue, Meta confirmed that “enforcement actions on prescription drugs are subject to Meta's standard enforcement policies.”

So here are a few other possible explanations for this disconnect, each troubling in its own way:

Meta is Ignoring Its Own Strike System 

If Meta is taking down accounts without warning or after only one alleged Community Standards violation, the company is failing to follow its own strike system. This makes enforcement arbitrary and denies users the opportunity for correction that Meta's system supposedly provides. It’s also especially problematic for abortion advocates, given that Meta has been incorrectly flagging educational abortion content as violating its Community Standards. This means that a single content moderation error could take down not only the post but the entire account.

This may be what happened to Emory University’s RISE Center for Reproductive Health Research (a story we described in more detail earlier in this series). After sharing an educational post about mifepristone, RISE’s Instagram account was suddenly disabled. RISE received no earlier warnings from Meta before its account went dark. When RISE was finally able to get back into its account, it discovered only that this single post had been flagged. Again, according to Meta's own policies, one strike should only result in a warning. But this isn’t what happened here. 

Similarly, the Tamtang Foundation, an abortion advocacy organization based in Thailand, had its Facebook account suddenly disabled earlier this year. Tamtang told EFF it had received a warning on only one flagged post, which it had posted 10 months prior to its account being taken down. It received none of the other progressive strike restrictions Meta claims to apply to Facebook accounts.

tamtang_screenshot.jpg

Meta is Misclassifying Educational Content as "Extreme Violations" 

If Meta is accurately following its strike policy but still disabling accounts after only one violation, this points to an even more concerning possibility. Meta’s content moderation system may be categorizing educational abortion information as severe enough to warrant immediate disabling, treating university research posts and clinic educational materials as equivalent to child exploitation or terrorist content.  

This would be a fundamental and dangerous mischaracterization of legitimate medical information, and it is, we hope, unlikely. But it’s unfortunately not outside the realm of possibility. We already wrote about a similar disturbing mischaracterization earlier in this series. 

Users Are Unknowingly Receiving Multiple Strikes 

Finally, Meta may be giving users multiple strikes without notifying them. This raises several serious concerns.

First is the lack of transparency. Meta explicitly states in its "Restricting Accounts" policy that it will notify users when it “remove[s] your content or add[s] restrictions to your account, Page or group.” This policy is failing if users are not receiving these notifications and are not made aware there’s an issue with their account. 

It may also mean that Meta’s policies themselves are too vague to provide meaningful guidance to users. This lack of clarity is harmful. If users don’t know what's happening to their accounts, they can’t appeal Meta’s content moderation decisions, adjust their content, or understand Meta's enforcement boundaries moving forward. 

Finally—and most troubling—if Meta is indeed disabling accounts that share abortion information for receiving multiple violations, this points to an even broader censorship crisis. Users may not be aware just how many informational abortion-related posts are being incorrectly flagged and counted as strikes. This is especially concerning given that Meta places a one-year time limit on strikes, meaning these multiple alleged violations would all have had to accumulate within a single year.

The Broader Censorship Crisis 

These account suspensions represent just one facet of Meta's censorship of reproductive health information documented by our Stop Censoring Abortion campaign. When combined with post removals, shadowbanning, and content restrictions, the message is clear: Meta platforms are increasingly unfriendly environments for abortion advocacy and education. 

If Meta wants to practice what it preaches, then it must reform its enforcement policies to provide clear, transparent guidelines on when and how strikes apply, and then consistently and accurately apply those policies. Accounts should not be taken down for only one alleged violation when the policies state otherwise.  

The stakes couldn't be higher. In a post-Roe landscape where access to accurate reproductive health information is more crucial than ever, Meta's enforcement system is silencing the very voices communities need most. 

This is the fifth post in our blog series documenting the findings from our Stop Censoring Abortion campaign. Read more at https://www.eff.org/pages/stop-censoring-abortion  

Affected by unjust censorship? Share your story using the hashtag #StopCensoringAbortion. Amplify censored posts and accounts, share screenshots of removals and platform messages—together, we can demonstrate how these policies harm real people. 

Lisa Femia

Governor Newsom Should Make it Easier to Exercise Our Privacy Rights

2 weeks 3 days ago

California has one of the nation’s most comprehensive consumer data privacy laws. But it’s not always easy for people to exercise those privacy rights. That’s why we supported Assemblymember Josh Lowenthal’s A.B. 566 throughout the legislative session and are now asking California Governor Gavin Newsom to sign it into law. 

A.B. 566 does a very simple thing. It directs browsers—such as Google’s Chrome, Apple’s Safari, Microsoft’s Edge or Mozilla’s Firefox—to give all their users the option to tell companies not to sell or share personal information that’s collected about them on the internet. In other words: it makes it easy for Californians to tell companies what they want to happen with their own information.

By making it easy to use tools that allow you to send these sorts of signals to companies’ websites, A.B. 566 makes the California Consumer Privacy Act more user-friendly. And the easier it is to exercise your rights, the more power you have.  

This is a necessary step, because even though the CCPA gives all people in California the right to tell companies not to sell or share their personal information, companies have not made it easy to exercise this right. Right now, someone who wants to make these requests has to go through the process set up by each individual company that may collect their information. Companies have also often made it pretty hard to make, or even find out how to make, these requests. Giving people the option for an easier way to communicate how they want companies to treat their personal information helps rebalance the often-lopsided relationship between the two.
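
A.B. 566 doesn't dictate the technical details, but opt-out preference signals of this kind already exist on the web: Global Privacy Control (GPC), for instance, is sent by the browser as a simple HTTP request header and is readable by page scripts. As a rough sketch of how a site might detect such a signal (the function names and the plain header map below are illustrative; only the "Sec-GPC" header and navigator.globalPrivacyControl come from the GPC specification):

```typescript
// Minimal sketch of detecting a GPC-style opt-out preference signal.
// Only the "Sec-GPC" request header and navigator.globalPrivacyControl
// come from the Global Privacy Control specification; the function
// names and the plain header map are illustrative.

// Server side: participating browsers attach "Sec-GPC: 1" to requests.
function requestIsOptedOut(headers: Record<string, string | undefined>): boolean {
  return headers["sec-gpc"] === "1";
}

// Client side: page scripts can read the same preference.
function pageIsOptedOut(): boolean {
  return (navigator as any).globalPrivacyControl === true;
}

// A site honoring the signal would treat either check as a request
// not to sell or share that visitor's personal information.
console.log(requestIsOptedOut({ "sec-gpc": "1" })); // true
```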

Industry groups who want to keep the scales tipped firmly in favor of corporations have lobbied heavily against A.B. 566. But we urge Gov. Newsom not to listen to those who want it to remain difficult for people to exercise their CCPA rights. EFF’s technologists, lawyers, and advocates think A.B. 566 empowers consumers without imposing regulations that would limit innovation. We think Californians should have easy tools to tell companies how to deal with their information, and urge Gov. Newsom to sign this bill.

Hayley Tsukayama

Safeguarding Human Rights Must Be Integral to the ICC Office of the Prosecutor’s Approach to Tech-Enabled Crimes

2 weeks 4 days ago

This is Part I of a two-part series on EFF’s comments to the International Criminal Court Office of the Prosecutor (OTP) about its draft policy on cyber-enabled crimes.

As human rights atrocities around the world unfold in the digital age, genocide, war crimes and crimes against humanity are as heinous and wrongful as they were before the advent of AI and social media.

But criminal methods and evidence increasingly involve technology. Think mass digital surveillance of an ethnic or religious community used to persecute them as part of a widespread or systematic attack against civilians, or cyberattacks that disable hospitals or other essential services, causing injury or death.

The International Criminal Court (ICC) Office of the Prosecutor (OTP) intends to use its mandate and powers to investigate and prosecute cyber-enabled crimes within the court's jurisdiction—those covered under the 1998 Rome Statute treaty. In March 2025, the office released for public comment a draft of its proposed policy for how it plans to go about it.

We welcome the OTP draft and urge the OTP to ensure its approach is consistent with internationally recognized human rights, including the rights to free expression, to privacy (with encryption as a vital safeguard), and to fair trial and due process.

We believe those who use digital tools to commit genocide, crimes against humanity, or war crimes should face justice. At the same time, EFF, along with our partner Derechos Digitales, emphasized in comments submitted to the OTP that safeguarding human rights must be integral to its investigations of cyber-enabled crimes.

That’s how we protect survivors, prevent overreach, gather evidence that can withstand judicial scrutiny, and hold perpetrators to account. In a similar context, we’ve opposed abusive domestic cybercrime laws and policing powers that invite censorship, arbitrary surveillance, and other human rights abuses.

In this two-part series, we’ll provide background on the ICC and OTP’s draft policy, including what we like about the policy and areas that raise questions.

OTP Defines Cyber-Enabled Crimes

The ICC, established by the Rome Statute, is the permanent international criminal court with jurisdiction over individuals for four core crimes—genocide, crimes against humanity, war crimes, and the crime of aggression. It also exercises jurisdiction over offences against the administration of justice at the court itself. Within the court, the OTP is an independent organization responsible for investigating these crimes and prosecuting them.

The OTP’s draft policy explains how it will apply the statute when crimes are committed or facilitated by digital means, while emphasizing that ordinary cybercrimes (e.g., hacking, fraud, data theft) are outside ICC jurisdiction and remain the responsibility of national courts to address.

The OTP defines “cyber-enabled crime” as crimes within the court’s jurisdiction that are committed or facilitated by technology. “Committed by” covers cases where the online act is the harmful act (or an essential digital contribution): for example, when malware disables a hospital and people are injured or die, the cyber operation is itself the attack.

A crime is “facilitated by” technology, according to the OTP draft, when digital activity helps someone commit a crime under modes of liability other than direct commission (e.g., ordering, inducing, aiding or abetting), and it doesn’t matter if the main crime was itself committed online. For example, authorities use mass digital surveillance to locate members of a protected group, enabling arrests and abuses as part of a widespread or systematic attack (i.e., persecution).

It further makes clear that the OTP will use its full investigative powers under the Rome Statute—relying on national authorities acting under domestic law and, where possible, on voluntary cooperation from private entities—to secure digital evidence across borders.

Such investigations can be highly intrusive and risk sweeping up data about people beyond the target. Yet many states’ current investigative practices fall short of international human rights standards. The draft should therefore make clear that cooperating states must meet those standards, including by assessing whether they can conduct surveillance in a manner consistent with the rule of law and the right to privacy.

Digital Conduct as Evidence of Rome Statute Crimes

Even when no ICC crime happens entirely online, the OTP says online activity can still be relevant evidence. Digital conduct can help show intent, context, or policies behind abuses (for example, to prove a persecution campaign), and it can also reveal efforts to hide or exploit crimes (like propaganda). In simple terms, online activity can corroborate patterns, link incidents, and support inferences about motive, policy, and scale relevant to these crimes.

The prosecution of such crimes or the use of related evidence must be consistent with internationally recognized human rights standards, including privacy and freedom of expression, the very freedoms that allow human rights defenders, journalists, and ordinary users to document and share evidence of abuses.

In Part II we’ll take a closer look at the substance of our comments about the policy’s strengths and our recommendations for improvements and more clarity.

Karen Gullo

EFF Statement on TikTok Ownership Deal

2 weeks 4 days ago

One of the reasons we opposed the TikTok "ban" is that the First Amendment is supposed to protect us from government using its power to manipulate speech. But as predicted, the TikTok "ban" has only resulted in turning over the platform to the allies of a president who seems to have no respect for the First Amendment.

TikTok was never proven to be a current national security problem, so it's hard to say the sale will alleviate those unproven concerns. And it remains to be seen if the deal places any limits on the new ownership sharing user data with foreign governments or anyone else—the security concern that purportedly justified the forced sale. As for the algorithm, if the concern had been that TikTok could be a conduit for Chinese government propaganda—a concern the Supreme Court declined to even consider—people can now be concerned that TikTok could be a conduit for U.S. government propaganda. An administration official reportedly has said the new TikTok algorithm will be "retrained" with U.S. data to make sure the system is "behaving properly."

David Greene

Going Viral vs. Going Dark: Why Extremism Trends and Abortion Content Gets Censored

2 weeks 4 days ago

This is the fourth installment in a blog series documenting EFF's findings from the Stop Censoring Abortion campaign. You can read additional posts here. 

One of the goals of our Stop Censoring Abortion campaign was to put names, stories, and numbers to the experiences we’d been hearing about: people and organizations having their abortion-related content – or entire accounts – removed or suppressed on social media. In reviewing survey submissions, we found that multiple users reported experiencing shadowbanning. Shadowbanning (or “deranking”) is widely experienced and reported by content creators across various social media platforms, and it’s a phenomenon that those who create content about abortion and sexual and reproductive health know all too well.

Shadowbanning is the often silent suppression of certain types of content or creators in your social media feeds. It’s not something that a U.S.-based creator is notified about, but rather something they simply find out when their posts stop getting the same level of engagement that they’re used to, or when people are unable to easily find their account using the platform’s search function. Essentially, it is when a platform or its algorithm decides that other users should see less of a creator or specific topic. Many platforms deny that shadowbanning exists; they will often blame reduced reach of posts on ‘bugs’ in the algorithm. At the same time, companies like Meta have admitted that content is ranked, but much about how this ranking system works remains unknown. Meta says that there are five content categories that, while allowed on its platforms, “may not be eligible for recommendation.” Content discussing abortion pills may fall under the umbrella of “Content that promotes the use of certain regulated products,” but posts that simply affirm abortion as a valid reproductive decision, or that feature storytellers sharing their experiences, don’t match any of the criteria that would make them ineligible for recommendation by Meta.

Whether a creator relies on a platform for income or uses it to educate the public, shadowbanning can be devastating for the growth of an account. And this practice often seems to disproportionately affect people who are talking about ‘taboo’ topics like sex, abortion, and LGBTQ+ identities. One of them is Kim Adamski, a sexual health educator who shared her story with our Stop Censoring Abortion project. As you can see in the images below, Kim’s Instagram account does not show up as a suggestion when being searched, and can only be found after typing in the full username.


Earlier this year, the Center for Intimacy Justice shared their report, "The Digital Gag: Suppression of Sexual and Reproductive Health on Meta, TikTok, Amazon, and Google", which found that of the 159 nonprofits, content creators, sex educators, and businesses surveyed, 63% had content removed on Meta platforms and 55% had content removed on TikTok. This suppression is happening at the same time as platforms continue to allow and elevate videos of violence and gore and extremist hateful content. This pattern is troubling and is only becoming more prevalent as people turn to social media to find the information they need to make decisions about their health.

Reproductive rights and sex education have been under attack across the U.S. for decades. Since the Dobbs v. Jackson decision in 2022, 20 states have banned or limited access to abortion. Meanwhile, 16 states don’t require sex education in public schools to be medically accurate, 19 states have laws that stigmatize LGBTQ+ identities in their sex education curricula, and 17 states specifically stigmatize abortion in their sex education curricula.

Online platforms are critical lifelines for people seeking possibly life-saving information about their sexual and reproductive health. We know that when people are unable to find or access the information they need within their communities, they will turn to the internet and social media. This is especially important for abortion-seekers and trans youth living in states where healthcare is being criminalized.

In a world that is constantly finding ways to legislate away bodily autonomy and hide queer identities, social media platforms have an opportunity to stand as safe havens for access to community and knowledge. Limiting access to this information by suppressing the people and organizations who are providing it is an attack on free expression and a profound threat to freedom of information—principles that these platforms claim to uphold. Now more than ever, we must continue to push back against censorship of sexual and reproductive health information so that the internet can still be a place where all voices are heard and where all can learn.

This is the fourth post in our blog series documenting the findings from our Stop Censoring Abortion campaign. Read more in the series: https://www.eff.org/pages/stop-censoring-abortion

Kenyatta Thomas

That Drone in the Sky Could Be Tracking Your Car

2 weeks 5 days ago

Police are using their drones as flying automated license plate readers (ALPRs), airborne police cameras that make it easier than ever for law enforcement to follow you. 

"The Flock Safety drone, specifically, are flying LPR cameras as well,” Rahul Sidhu, Vice President of Aviation at Flock Safety, recently told a group of potential law enforcement customers interested in drone-as-first-responder (DFR) programs

The integration of Flock Safety’s flagship ALPR technology with its Aerodome drone equipment is a police surveillance combo poised to elevate the privacy threats to civilians caused by both of these invasive technologies as drone adoption expands. 

flock_drone_flying_police_platform.png

A slide from a Flock Safety presentation to Rutherford County Sheriff's Office in North Carolina, obtained via public records, featuring Flock Safety products, including the Aerodome drone and the Wing product, which helps convert surveillance cameras into ALPR systems

The use of DFR programs has grown exponentially. The biggest police technology companies, like Axon, Flock Safety, and Motorola Solutions, are broadening their drone offerings, anticipating that drones could become an important piece of their revenue stream. 

Communities must demand restrictions on how local police use drones and ALPRs, to say nothing of a dangerous hybrid of the two. Otherwise, we can soon expect that a drone will fly to any call for service and capture sensitive location information about every car in its flight path, adding still more ALPR data to the already too-large databases of our movements.

ALPR systems typically rely on cameras that have been fixed along roadways or attached to police vehicles. These cameras capture the image of a vehicle, then use artificial intelligence technology to log the license plate, make, model, color, and other unique identifying information, like dents and bumper stickers. This information is usually stored on the manufacturer’s servers and often made available on nationwide sharing networks to police departments from other states and federal agencies, including Immigration and Customs Enforcement. ALPRs are already used by most of the largest police departments in the country, and Flock Safety also now offers the ability for an agency to turn almost any internet-enabled camera into an ALPR camera.

ALPRs present a host of problems. ALPR systems vacuum up data—like the make, model, color, and location of vehicles—on people who will never be involved in a crime, and are used to grid areas, systematically recording when and where vehicles have been. ALPRs routinely make mistakes, causing police to stop the wrong car and terrorize the driver. Officers have abused law enforcement databases in hundreds of cases. Police have used them to track, across state lines, people seeking legal health procedures. Even when there are laws against sharing data from these tools with other departments, some policing agencies still do.

Drones, meanwhile, give police a view of roofs, backyards, and other fenced areas where cops can’t casually patrol, and their adoption is becoming more common. Companies that sell drones have been helping law enforcement agencies to get certifications from the Federal Aviation Administration (FAA), and recently implemented changes to the restrictions on flying drones beyond the visual line of sight will make it even easier for police to add this equipment. According to the FAA, since a new DFR waiver process was implemented in May 2025, the FAA has granted more than 410 such waivers, already accounting for almost a third of the approximately 1,400 DFR waivers that have been granted since such programs began in 2018.

Local officials should, of course, be informed that the drones they’re buying are equipped to do such granular surveillance from the sky, but it is not clear that this is happening. While the ALPR feature is available as part of Flock drone acquisitions, some government customers may not realize that approving a drone from Flock Safety may also mean approving a flying ALPR. And though not every Flock Safety drone is currently running the ALPR feature, some departments, like the Redondo Beach Police Department, have plans to activate it in the near future.

ALPRs aren’t the only so-called payloads that can be added to a drone. In addition to the high resolution and thermal cameras with which drones can already be equipped, drone manufacturers and police departments have discussed adding cell-site simulators, weapons, microphones, and other equipment. Communities must mobilize now to keep this runaway surveillance technology under tight control.

When EFF posed questions to Flock Safety about the integration of ALPR and its drones, the company declined to comment.

Mapping, storing, and tracking as much personal information as possible—all without warrants—is where automated police surveillance is heading right now. Flock has previously described its desire to connect ALPR scans to additional information on the person who owns the car, meaning we are not far from a time when police may see your vehicle drive by and quickly learn that it’s your car, along with a host of other details about you.

EFF has compiled a list of known drone-using police departments. Find out about your town’s surveillance tools at the Atlas of Surveillance. Know something we don't? Reach out at aos@eff.org.

Beryl Lipton

Companies Must Provide Accurate and Transparent Information to Users When Posts are Removed

3 weeks 1 day ago

This is the third installment in a blog series documenting EFF's findings from the Stop Censoring Abortion campaign. You can read additional posts here. 

Imagine sharing information about reproductive health care on social media and receiving a message that your content has been removed for violating a policy intended to curb online extremism. That’s exactly what happened to one person using Instagram who shared her story with our Stop Censoring Abortion project.

Meta’s rules for “Dangerous Organizations and Individuals” (DOI) were supposed to be narrow: a way to prevent the platform from being used by terrorist groups, organized crime, and those engaged in violent or criminal activity. But over the years, we’ve seen these rules applied in far broader—and more troubling—ways, with little transparency and significant impact on marginalized voices.

EFF has long warned that the DOI policy is opaque, inconsistently enforced, and prone to overreach. The policy has been critiqued by others for its opacity and propensity to disproportionately censor marginalized groups.

Samantha Shoemaker's post about Plan C was flagged under Meta's policy on dangerous organizations and individuals

Meta has since added examples and clarifications in its Transparency Center to this and other policies, but their implementation still leaves users in the dark about what’s allowed and what isn’t.

The case we received illustrates just how harmful this lack of clarity can be. Samantha Shoemaker, an individual sharing information about abortion care, posted straightforward facts about accessing abortion pills. Her posts included:

  • A video linking to Plan C’s website, which lists organizations that provide abortion pills in different states.

  • A reshared image from Plan C’s own Instagram account encouraging people to learn about advance provision of abortion pills.

  • A short clip of women talking about their experiences taking abortion pills.

Information Provided to Users Must Be Accurate

Instead of allowing her to facilitate informed discussion, Instagram flagged some of her posts under its “Prescription Drugs” policy, while others were removed under the DOI policy—the same set of rules meant to stop violent extremism from being shared.

We recognize that moderation systems—both human and automated—will make mistakes. But when Meta equates medically accurate, harm-reducing information about abortion with “dangerous organizations,” it underscores a deeper problem: the blunt tools of content moderation disproportionately silence speech that is lawful, important, and often life-saving.

At a time when access to abortion information is already under political attack in the United States and around the world, platforms must be especially careful not to compound the harm. This incident shows how overly broad rules and opaque enforcement can erase valuable speech and disempower users who most need access to knowledge.

And when content does violate the rules, it’s important that users are provided with accurate information as to why. An individual sharing information about health care will undoubtedly be confused or upset by being told that they have violated a policy meant to curb violent extremism. Moderating content responsibly means offering users as much transparency and clarity as possible. As outlined in the Santa Clara Principles on Transparency and Accountability in Content Moderation, users should be able to readily understand:

  • What types of content are prohibited by the company and will be removed, with detailed guidance and examples of permissible and impermissible content;
  • What types of content the company will take action against other than removal, such as algorithmic downranking, with detailed guidance and examples on each type of content and action; and
  • The circumstances under which the company will suspend a user’s account, whether permanently or temporarily.

What You Can Do if Your Content is Removed

If you find your content removed under Meta’s policies, you do have options:

  • Appeal the decision: Every takedown notice should give you the option to appeal within the app. Appeals are sometimes reviewed by a human moderator rather than an automated system.
  • Request Oversight Board review: In certain cases, you can escalate to Meta’s independent Oversight Board, which has the power to overturn takedowns and set policy precedents.
  • Document your case: Save screenshots of takedown notices, appeals, and your original post. This documentation is essential if you want to report the issue to advocacy groups or in future proceedings.
  • Share your story: Projects like Stop Censoring Abortion collect cases of unjust takedowns to build pressure for change. Speaking out, whether to EFF and other advocacy groups or to the media, helps illustrate how policies harm real people.

Abortion is health care. Sharing information about it is not dangerous—it’s necessary. Meta should allow users to share vital information about reproductive care. The company must also ensure that users are provided with clear information about how its policies are being applied and how to appeal seemingly wrongful decisions.

This is the third post in our blog series documenting the findings from our Stop Censoring Abortion campaign. Read more in the series: https://www.eff.org/pages/stop-censoring-abortion   

Jillian C. York

Shining a Spotlight on Digital Rights Heroes: EFF Awards 2025

3 weeks 1 day ago

It's been a year full of challenges, but also important victories for digital freedoms. From EFF’s new lawsuit against OPM and DOGE, to launching Rayhunter (our new tool to detect cellular spying), to exposing the censorship of abortion-related content on social media, we’ve been busy! But we’re not the only ones leading the charge. 

On September 10 in San Francisco, we presented the annual EFF Awards to three courageous honorees who are pushing back against unlawful surveillance, championing data privacy, and advancing civil liberties online. This year’s awards went to Just Futures Law, Erie Meyer, and the Software Freedom Law Center, India.

If you missed the celebration in person, you can still watch it! The full event is posted on YouTube and the Internet Archive, and a transcript of the live captions is also available.

WATCH NOW

SEE THE EFF AWARDS CEREMONY ON YOUTUBE

Looking Back, Looking Ahead

EFF Executive Director Cindy Cohn opened the evening by reflecting on our victories this past year and reiterated how vital EFF’s mission to protect privacy and free speech is today. She also announced her upcoming departure as Executive Director after a decade in the role (and over 25 years of involvement with EFF!). No need to be too sentimental—Cindy isn’t going far. As we like to say: you can check out at any time, but you never really leave the fight. 

Cindy then welcomed one of EFF’s founders, Mitch Kapor, who joked that he had been “brought out of cold storage” for the occasion. Mitch recalled EFF’s early days, when no one knew exactly how constitutional rights would interact with emerging technologies—but everyone understood the stakes. “We understood that the matter of digital rights were very important,” he reflected. And history has proven them right. 

Honoring Defenders of Digital Freedom

The first award of the night, the EFF Award for Defending Digital Freedoms, went to the Software Freedom Law Center, India (SFLC.IN). Presenting the award, EFF Civil Liberties Director David Greene emphasized the importance of international partners like SFLC.IN, whose local perspectives enrich and strengthen EFF’s own work. 

SFLC.IN is at the forefront of digital rights in India—challenging internet shutdowns, tracking violations of free expression with their Free Speech Tracker, and training lawyers across the country. Accepting the award, SFLC.IN founder Mishi Choudhary reminded us: “These freedoms are not abstract. They are fought for every day by people, by organizations, and by movements.” 

SFLC.IN founder Mishi Choudhary accepts the EFF Award for Defending Digital Freedoms

Next, EFF Staff Attorney Mario Trujillo introduced the winner of the EFF Award for Protecting Americans’ Data, Erie Meyer. Erie has served as CTO of the Federal Trade Commission and Consumer Financial Protection Bureau, and was a founding member of the U.S. Digital Service. Today, she continues to fight for better government technology and safeguards for sensitive data. 

In her remarks, Erie underscored the urgency of protecting personal data at scale: “We need to protect people’s data the same way we protect this country from national security risks. What’s happening right now is like all the data breaches in history rolled into one. ‘Trust me, bro’ is not a way to handle 550 million Americans’ data.” 

Erie Meyer accepts the EFF Award for Protecting Americans’ Data

Finally, EFF General Counsel Jennifer Lynch introduced the EFF Award for Leading Immigration and Surveillance Litigation, presented to Just Futures Law. Co-founder and Executive Director Paromita Shah accepted on behalf of the organization, which works to challenge the ways surveillance disproportionately harms people of color in the U.S. 

“For years, corporations and law enforcement—including ICE—have been testing the legal limits of their tools on communities of color,” Paromita said in her speech. Just Futures Law has fought back, suing the Department of Homeland Security to reveal its use of AI, and defending activists against surveillance technologies like Clearview AI. 

Just Futures Law Executive Director Paromita Shah accepted the EFF Award for Leading Immigration and Surveillance Litigation

Carrying the Work Forward

We’re honored to shine a spotlight on these award winners, who are doing truly fearless and essential work to protect online privacy and free expression. Their courage reminds us that the fight for civil liberties will be won when we work together—across borders, communities, and movements. 

Join the fight and donate today


A heartfelt thank you to all of the EFF members worldwide who make this work possible. Public support is what allows us to push for a better internet. If you’d like to join the fight, consider becoming an EFF member—you’ll receive special gear as our thanks, and you’ll help power the digital freedom movement. 

And finally, special thanks to the sponsor of this year’s EFF Awards: Electric Capital.

  Catch Up From the Event

Reminder that if you missed the event, you can watch the live recording on our YouTube and the Internet Archive. Plus, a special thank you to our photographers, Alex Schoenfeldt and Carolina Kroon. You can see some of our favorite group photos that were taken during the event, and photos of the awardees with their trophies. 

Christian Romero

EFF, ACLU to SFPD: Stop Illegally Sharing Data With ICE and Anti-Abortion States

3 weeks 1 day ago

The San Francisco Police Department is the latest California law enforcement agency to get caught sharing automated license plate reader (ALPR) data with out-of-state and federal agencies. EFF and the ACLU of Northern California are calling them out for this direct violation of California law, which has put every driver in the city at risk and is especially dangerous for immigrants, abortion seekers, and other targets of the federal government.

This week, we sent the San Francisco Police Department a demand letter and request for records under the city’s Sunshine Ordinance following the SF Standard’s recent report that SFPD provided non-California agencies direct access to the city’s ALPR database. Reporters uncovered that at least 19 searches run by these agencies were marked as related to U.S. Immigration and Customs Enforcement (“ICE”). The city’s ALPR database was also searched by law enforcement agencies from Georgia and Texas, both states with severe restrictions on reproductive healthcare.

ALPRs are cameras that capture the movements of vehicles and upload the location of the vehicles to a searchable, shareable database. It is a mass surveillance technology that collects data indiscriminately on every vehicle on the road. As of September 2025, SFPD operates 415 ALPR cameras purchased from the company Flock Safety.

Since 2016, sharing ALPR data with out-of-state or federal agencies—for any reason—has violated California law (SB 34). If this data is shared for the purpose of assisting with immigration enforcement, agencies violate an additional California law (SB 54).

In total, the SF Standard found that SFPD had allowed out-of-state cops to run 1.6 million searches of their data. “This sharing violated state law, as well as exposed sensitive driver location information to misuse by the federal government and by states that lack California’s robust privacy protections,” the letter explained.

EFF and ACLU are urging SFPD to launch a thorough audit of its ALPR database, institute new protocols for compliance, and assess penalties and sanctions for any employee found to be sharing ALPR information out of state.

“Your office reportedly claims that agencies outside of California are no longer able to access the SFPD ALPR database,” the letter says. “However, your office has not explained how outside agencies obtained access in the first place or how you plan to prevent future violations of SB 34 and 54.”

As we’ve demonstrated over and over again, many California agencies continue to ignore these laws, exposing sensitive location information to misuse and putting entire communities at risk. As federal agencies continue to carry out violent ICE raids, and many states enforce harsh, draconian restrictions on abortion, ALPR technology is already being used to target and surveil immigrants and abortion seekers. California agencies, including SFPD, have an obligation to protect the rights of Californians, even when those rights are not recognized by other states or the federal government.

See the full letter here: https://www.eff.org/files/2025/09/17/aclu_and_eff_letter_to_sfpd_9.16.2025-1.pdf

Jennifer Pinsof