❌ How Meta Is Censoring Abortion | EFFector 37.13

2 weeks ago

It's spooky season—but while jump scares may get your heart racing, catching up on digital rights news shouldn't! Our EFFector newsletter has got you covered with easy, bite-sized updates to keep you up-to-date.

In this issue, we spotlight new ALPR-enhanced police drones and how local communities can push back; unpack the ongoing TikTok “ban,” which we’ve consistently said violates the First Amendment; and celebrate a privacy win—abandoning a phone doesn't mean you've also abandoned your privacy rights.

Prefer to listen in? Check out our audio companion, where we interview EFF Staff Attorney Lisa Femia, who explains the findings from our investigation into abortion censorship on social media. Catch the conversation on YouTube or the Internet Archive.

LISTEN TO EFFECTOR

EFFECTOR 37.13 - ❌ HOW META IS CENSORING ABORTION

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero

EFF Is Standing Up for Federal Employees—Here’s How You Can Stand With Us

2 weeks ago

Federal employees play a key role in safeguarding the civil liberties of millions of Americans. Our rights to privacy and free expression can only survive when we stand together to push back against overreach and ensure that technology serves all people—not just the powerful. 

That’s why EFF jumped to action earlier this year, when the U.S. Office of Personnel Management (OPM) handed over sensitive employee data—Social Security numbers, benefits data, work histories, and more—to Elon Musk’s Department of Government Efficiency (DOGE). This was a blatant violation of the Privacy Act of 1974, and it put federal workers directly at risk. 

We didn’t let it stand. Alongside federal employee unions, EFF sued OPM and DOGE in February. In June, we secured a victory when a judge ruled we were entitled to a preliminary injunction and ordered OPM to provide an accounting of DOGE's access to employee records. Your support makes this possible.

Now the fight continues—and your support matters more than ever. The Office of Personnel Management is planting the seeds to undermine and potentially remove the Combined Federal Campaign (CFC), the main program federal employees and retirees have long used to support charities—including EFF. For now, you can still give to EFF through the CFC this year (use our ID: 10437) and we’d appreciate your support! But with the program’s uncertain future, direct support is the best way to keep our work going strong for years to come. 

DONATE TODAY

SUPPORT EFF'S WORK DIRECTLY, BECOME A MEMBER!

When you donate directly, you join a movement of lawyers, activists, and technologists who defend privacy, call out censorship, and push back against abuses of power—everywhere from the courts to Congress and to the streets. As a member, you’ll also receive insider updates, invitations to exclusive events, and conversation-starting EFF gear.

Plus, you can sustain our mission long-term with a monthly or annual donation! 

Stand with EFF. Protect privacy. Defend free expression. Support our work today. 

Related Cases: American Federation of Government Employees v. U.S. Office of Personnel Management
Christian Romero

Platforms Have Failed Us on Abortion Content. Here's How They Can Fix It.

2 weeks ago

This is the eighth installment in a blog series documenting EFF's findings from the Stop Censoring Abortion campaign. You can read additional posts here. 

In our Stop Censoring Abortion series, we’ve documented the many ways that reproductive rights advocates have faced arbitrary censorship on Meta platforms. Since social media is the primary—and sometimes the only—way that providers, advocates, and communities can safely and effectively share timely and accurate information about abortion, it’s vitally important that platforms take steps to proactively protect this speech.

Yet, even though Meta says its moderation policies allow abortion-related speech, its enforcement of those policies tells a different story. Posts are being wrongfully flagged, accounts are disappearing without warning, and important information is being removed without clear justification.

So what explains the gap between Meta’s public commitments and its actions? And how can we push platforms to be better—to, dare we say, #StopCensoringAbortion?

After reviewing nearly one hundred submissions and speaking with Meta to clarify its moderation practices, here’s what we’ve learned.

Platforms’ Editorial Freedom to Moderate User Content

First, given the current landscape—with some states trying to criminalize speech about abortion—you may be wondering how much leeway platforms like Facebook and Instagram have to choose their own content moderation policies. In other words, can social media companies proactively commit to stop censoring abortion?

The answer is yes. Social media companies, including Meta, TikTok, and X, have the constitutionally protected First Amendment right to moderate user content however they see fit. They can take down posts, suspend accounts, or suppress content for virtually any reason.

The Supreme Court explicitly affirmed this right in 2024 in Moody v. NetChoice, holding that social media platforms, like newspapers, bookstores, and art galleries before them, have the First Amendment right to edit the user speech that they host and deliver to other users on their platforms. The Court also established that the government has a very limited role in dictating what social media platforms must (or must not) publish. This editorial discretion, whether granted to individuals, traditional press, or online platforms, is meant to protect these institutions from government interference and to safeguard the diversity of the public sphere—so that important conversations and movements like this one have the space to flourish.

Meta’s Broken Promises

Unfortunately, Meta is failing to meet even these basic standards. Again and again, its policies say one thing while its actual enforcement says another.

Meta has stated its intent to allow conversations about abortion to take place on its platforms. In fact, as we’ve written previously in this series, Meta has publicly insisted that posts with educational content about abortion access should not be censored, even admitting in several public statements to moderation mistakes and over-enforcement. One spokesperson told the New York Times: “We want our platforms to be a place where people can access reliable information about health services, advertisers can promote health services and everyone can discuss and debate public policies in this space. . . . That’s why we allow posts and ads about, discussing and debating abortion.”

Meta’s platform policies largely reflect this intent. But as our campaign reveals, Meta’s enforcement of those policies is wildly inconsistent. Time and again, users—including advocacy organizations, healthcare providers, and individuals sharing personal stories—have had their content taken down even though it did not actually violate any of Meta’s stated guidelines. Worse, they are often left in the dark about what happened and how to fix it.

Arbitrary enforcement like this harms abortion activists and providers by cutting them off from their audiences, wasting the effort they spend creating resources and building community on these platforms, and silencing their vital reproductive rights advocacy. And it goes without saying that it hurts users, who need access to timely, accurate, and sometimes life-saving information. At a time when abortion rights are under attack, platforms with enormous resources—like Meta—have no excuse for silencing this important speech.  

Our Call to Platforms

Our case studies have highlighted that when users can’t rely on platforms to apply their own rules fairly, the result is a widespread chilling effect on online speech. That’s why we are calling on Meta to adopt the following urgent changes.

1. Publish clear and understandable policies.

Too often, platforms’ vague rules force users to guess what content might be flagged in order to avoid shadowbanning or worse, leading to needless self-censorship. To prevent this chilling effect, platforms should strive to offer users the greatest possible transparency and clarity on their policies. The policies should be clear enough that users know exactly what is allowed and what isn’t so that, for example, no one is left wondering how exactly a clip of women sharing their abortion experiences could be mislabeled as violent extremism.

2. Enforce rules consistently and fairly.

If content doesn’t violate a platform’s stated policies, it should not be removed. And, per Meta’s own policies, an account should not be suspended for abortion-related content violations if it has not received any prior warnings or “strikes.” Yet as we’ve seen throughout this campaign, abortion advocates repeatedly face takedowns, and even account suspensions, over posts that fall entirely within Meta’s Community Standards. On such a massive scale, this selective enforcement erodes trust and chills entire communities from participating in critical conversations.

3. Provide meaningful transparency in enforcement actions.

When content is removed, Meta tends to give vague, boilerplate explanations—or none at all. Instead, users facing takedowns or suspensions deserve detailed and accurate explanations that state the policy violated, reflect the reasoning behind the actual enforcement decision, and explain how to appeal. Clear explanations are key to preventing wrongful censorship and ensuring that platforms remain accountable to their commitments and to their users.

4. Guarantee functional appeals.

Every user deserves a real chance to challenge improper enforcement decisions and have them reversed. But based on our survey responses, it seems Meta’s appeals process is broken. Many users reported that they do not receive responses to appeals, even when the content did not violate Meta’s policies, and thus have no meaningful way to challenge takedowns. Alarmingly, we found that a user’s best (and sometimes only) chance at success is to rely on a personal connection at Meta to right wrongs and restore content. This is unacceptable. Users should have a reliable and efficient appeal process that does not depend on insider access.   

5. Expand human review.

Finally, automated systems cannot always handle the nuance of sensitive issues like reproductive health and advocacy. They misinterpret words, miss important cultural or political context, and wrongly flag legitimate advocacy as “dangerous.” Therefore, we call upon platforms to expand the role that human moderators play in reviewing auto-flagged content violations—especially when posts involve sensitive healthcare information or political expression.

Users Deserve Better

Meta has already made the choice to allow speech about abortion on its platforms, and it has not hesitated to highlight that commitment whenever it has faced scrutiny. Now it’s time for Meta to put its money where its mouth is.

Users deserve better than a system where rules are applied at random, appeals go nowhere, and vital reproductive health information is needlessly (or negligently) silenced. If Meta truly values free speech, it must commit to moderating with fairness, transparency, and accountability.

This is the eighth post in our blog series documenting the findings from our Stop Censoring Abortion campaign. Read more at https://www.eff.org/pages/stop-censoring-abortion   

Affected by unjust censorship? Share your story using the hashtag #StopCensoringAbortion. Amplify censored posts and accounts, share screenshots of removals and platform messages—together, we can demonstrate how these policies harm real people. 

Molly Buckley

[Recommended Book] Kenta Yamada, "Like a Rolling Stone: Wavering Journalism and the Creaking Freedom of Expression" (『転がる石のように 揺れるジャーナリズムと軋む表現の自由』): Sharp fixed-point commentary from the field, unswayed by passing trends = Ken Fujimori (JCJ Representative Committee Member)

2 weeks ago
Eighty years after the war, how has the state of speech in Japan evolved? The author writes, almost in passing, that it can be divided into roughly 20-year phases: construction, dynamism, pincer attack (media under fire from both state power and citizens), and self-censorship. The birth of television, Vietnam War reporting, human rights violations by the press, Abe's one-party dominance... on reflection, one finds oneself nodding along. This book collects the author's commentary columns published in the Ryukyu Shimpo and Tokyo Shimbun from 2020 onward; earlier installments have already appeared in two previous volumes. The commentary, spun from this long and unflagging "fixed-point observation" of the field, is...
JCJ

[Recommended Book] Mihoko Kobayashi and Kenichi Komatsuda, "The Kiryu City Incident: In a Town Where Public Assistance Was Distorted" (『桐生市事件 生活保護が歪められた街で』): A record of the fight to defend the "fortress of life" = Yasuhiko Shirai (freelance journalist)

2 weeks ago
Exposing the "darkness of the powerful" and compiling it into a book as a historical record: this painstaking work pulls it off brilliantly. The public assistance system is unmistakably a "fortress of life." Yet the misconception that "many people are lazily living off welfare" is widespread. That misconception was further reinforced by the "welfare-bashing" coverage that commercial broadcasters and weekly magazines rolled out in 2012, which left even more municipal caseworkers convinced that they should deal harshly with would-be applicants and keep them from using the system wherever possible. Where that mindset ultimately led...
JCJ

[B] Middle School Begins! ~ Ciao! Italy Newsletter

2 weeks 1 day ago
In September, our twins started middle school. Their five years of elementary school felt both long and short. A classmate's mother with older children told me that middle school lasts only three years, so it will be over in the blink of an eye. (Noriko Sato, reporting from Italy)
Nikkan Berita

Gate Crashing: An Interview Series

2 weeks 1 day ago

There is a lot of bad on the internet and it seems to only be getting worse. But one of the things the internet did well, and is worth preserving, is nontraditional paths for creativity, journalism, and criticism. As governments and major corporations throw up more barriers to expression—and more and more gatekeepers try to control the internet—it’s important to learn how to crash through those gates. 

In EFF's interview series, Gate Crashing, we talk to people who have used the internet to take nontraditional paths to the very traditional worlds of journalism, creativity, and criticism. We hope it's both inspiring to see these people and enlightening for anyone trying to find voices they like online.  

Our mini-series will drop one episode each month, closing out 2025 in style.

  • Episode 1: Fanfiction Becomes Mainstream – Launching October 1*
  • Episode 2: From DIY to Publishing – Launching November 1
  • Episode 3: A New Path for Journalism – Launching December 1

Be sure to mark your calendar or check our socials on drop dates. If you have a friend or colleague who might be interested in watching our series, please forward this link: eff.org/gatecrashing

Check Out Episode 1

For over 35 years, EFF members have empowered attorneys, activists, and technologists to defend civil liberties and human rights online for everyone.

Tech should be a tool for the people, and we need you in this fight.

Donate Today


* This interview was originally published in December 2024. No changes have been made.

Katharine Trendacosta

Weekly Report: Alert Regarding Multiple Vulnerabilities in Cisco ASA and FTD (CVE-2025-20333, CVE-2025-20362)

2 weeks 1 day ago
Cisco Adaptive Security Appliance (ASA) and Firewall Threat Defense (FTD) contain multiple vulnerabilities, and the vendor has confirmed that they are being exploited. The issue is resolved by updating the affected products to a fixed version; for details, refer to the information provided by the developer.

Wave of Phony News Quotes Affects Everyone—Including EFF

2 weeks 1 day ago

Whether due to generative AI hallucinations or human sloppiness, the internet is increasingly rife with bogus news content—and you can count EFF among the victims. 

WinBuzzer published a story June 26 with the headline, “Microsoft Is Getting Sued over Using Nearly 200,000 Pirated Books for AI Training,” containing this passage:

winbuzzer_june_26.png

That quotation from EFF’s Corynne McSherry was cited again in two subsequent, related stories by the same journalist—one published July 27, the other August 27.

But the link in that original June 26 post was fake. Corynne McSherry never wrote such an article, and the quote was bogus. 

Interestingly, we noted a similar issue with a June 13 post by the same journalist, in which he cited work by EFF Director of Cybersecurity Eva Galperin; this quote included the phrase “get-out-of-jail-free card” too. 

winbuzzer_june_13.png

Again, the link he inserted leads nowhere because Eva Galperin never wrote such a blog or white paper.  

When EFF reached out, the journalist—WinBuzzer founder and editor-in-chief Markus Kasanmascheff—acknowledged via email that the quotes were bogus. 

“This indeed must be a case of AI slop. We are using AI tools for research/source analysis/citations. I sincerely apologize for that and this is not the content quality we are aiming for,” he wrote. “I myself have noticed that in the particular case of the EFF for whatever reason non-existing quotes are manufactured. This usually does not happen and I have taken the necessary measures to avoid this in the future. Every single citation and source mention must always be double checked. I have been doing this already but obviously not to the required level. 

“I am actually manually editing each article and using AI for some helping tasks. I must have relied too much on it,” he added. 

AI slop abounds 

It’s not an isolated incident. Media companies large and small are using AI to generate news content because it’s cheaper than paying journalists’ salaries, but those savings can come at the cost of the outlets’ reputations.

The U.K.’s Press Gazette reported last month that Wired and Business Insider had to remove news features written by one freelance journalist over concerns that the articles were likely AI-generated works of fiction: “Most of the published stories contained case studies of named people whose details Press Gazette was unable to verify online, casting doubt on whether any of the quotes or facts contained in the articles are real.”

And back in May, the Chicago Sun-Times had to apologize after publishing an AI-generated list of books that would make good summer reads—with 10 of the 15 recommended book descriptions and titles found to be “false, or invented out of whole cloth.” 

As journalist Peter Sterne wrote for Nieman Lab in 2022: 

Another potential risk of relying on large language models to write news articles is the potential for the AI to insert fake quotes. Since the AI is not bound by the same ethical standards as a human journalist, it may include quotes from sources that do not actually exist, or even attribute fake quotes to real people. This could lead to false or misleading reporting, which could damage the credibility of the news organization. It will be important for journalists and newsrooms to carefully fact check any articles written with the help of AI, to ensure the accuracy and integrity of their reporting. 

(Or did he write that? Sterne disclosed in that article that he used OpenAI’s ChatGPT-3 to generate that paragraph, ironically enough.) 

The Radio Television Digital News Association issued guidelines a few years ago for the use of AI in journalism, and the Associated Press is among many outlets that have developed guidelines of their own. The Poynter Institute offers a template for developing such policies.  

Nonetheless, some journalists or media outlets have been caught using AI to generate stories including fake quotes; for example, the Associated Press reported last year that a Wyoming newspaper reporter had filed at least seven stories that included AI-generated quotations from six people.  

WinBuzzer wasn’t the only outlet to falsely quote EFF this year. An April 19 article in Wander contained another bogus quotation from Eva Galperin: 

April 19 Wander clipping with fake quote from Eva Galperin

An email to the outlet demanding the article’s retraction went unanswered. 

In another case, WebProNews published a July 24 article quoting Eva Galperin under the headline “Risika Data Breach Exposes 100M Swedish Records to Fraud Risks,” but Eva confirmed she’d never spoken with them or given that quotation to anyone. The article no longer seems to exist on the outlet’s own website, but it was captured by the Internet Archive’s Wayback Machine.

07-24-2025_webpronews_screenshot.png  

A request for comment made through WebProNews’ “Contact Us” page went unanswered, and then they did it again on September 2, this time misattributing a statement to Corynne McSherry: 

09-02-2025_webpronews_corynne_mcsherry.png
No such article in The Verge seems to exist, and the statement is not at all in line with EFF’s stance. 

Our most egregious example 

The top prize for audacious falsity goes to a June 18 article in the Arabian Post, since removed from the site after we flagged it to an editor. The Arabian Post is part of the Hyphen Digital Network, which describes itself as “at the forefront of AI innovation,” offering “software solutions that streamline workflows to focus on what matters most: insightful storytelling.” The article in question included this passage:

Privacy advocate Linh Nguyen from the Electronic Frontier Foundation remarked that community monitoring tools are playing a civic role, though she warned of the potential for misinformation. “Crowdsourced neighbourhood policing walks a thin line—useful in forcing transparency, but also vulnerable to misidentification and fear-mongering,” she noted in a discussion on digital civil rights. 

muck_rack_june_19_-_arabian_post.png

Nobody at EFF recalls anyone named Linh Nguyen ever having worked here, nor have we been able to find anyone by that name who works in the digital privacy sector. So not only was the quotation fake, but apparently the purported source was, too.  

Now, EFF is all about having our words spread far and wide. Per our copyright policy, any and all original material on the EFF website may be freely distributed at will under the Creative Commons Attribution 4.0 International License (CC-BY), unless otherwise noted. 

But we don't want AI and/or disreputable media outlets making up words for us. False quotations that misstate our positions damage the trust that the public and more reputable media outlets have in us. 

If you're worried about this (and rightfully so), the best thing a news consumer can do is invest a little time and energy to learn how to discern the real from the fake. It’s unfortunate that it's the public’s burden to put in this much effort, but while we're adjusting to new tools and a new normal, a little effort now can go a long way.  

As we’ve noted before in the context of election misinformation, the nonprofit journalism organization ProPublica has published a handy guide about how to tell if what you’re reading is accurate or “fake news.” And the International Federation of Library Associations and Institutions infographic on How to Spot Fake News is a quick and easy-to-read reference you can share with friends: 

how_to_spot_fake_news.jpg

Josh Richman