APC celebrates activist Alaa Abd el-Fattah's release

3 days 19 hours ago
We are thrilled to learn of the long-awaited release of activist Alaa Abd el-Fattah from unjust detention. Our courageous friend and colleague, Egyptian writer, activist and technologist is among the…
APCNews

Meta is Removing Abortion Advocates' Accounts Without Warning

3 days 21 hours ago

This is the fifth installment in a blog series documenting EFF's findings from the Stop Censoring Abortion campaign. You can read additional posts here. 

When the team at Women Help Women signed into Instagram last winter, they were met with a distressing surprise: without warning, Meta had disabled their account. The abortion advocacy non-profit organization found itself suddenly cut off from its tens of thousands of followers and with limited recourse. Meta claimed Women Help Women had violated its Community Standards on “guns, drugs, and other restricted goods,” but the organization told EFF it uses Instagram only to communicate about safe abortion practices, including sharing educational content and messages aimed at reducing stigma. Eventually, Women Help Women was able to restore its account—but only after launching a public campaign and receiving national news coverage.

Unfortunately, Women Help Women’s experience is not unique. Around a quarter of our Stop Censoring Abortion campaign submissions reported that their entire account or page had been disabled or taken down after sharing abortion information—primarily on Meta platforms. This troubling pattern indicates that the censorship crisis goes beyond content removal. Accounts providing crucial reproductive health information are disappearing, often without warning, cutting users off from their communities and followers entirely.

[Screenshot: whw_screenshot.jpeg]

What's worse, Meta appears to be imposing these negative account actions without clearly adhering to its own enforcement policies. Meta’s own Transparency Center stipulates that an account should receive multiple Community Standards violations or warnings before it is restricted or disabled. Yet many affected users told EFF they experienced negative account actions without any warning at all, or after only one alleged violation (many of which were incorrectly flagged, as we’ve explained elsewhere in this series). 

While Meta clearly has the right to remove accounts from its platforms, disabling or banning an account is an extreme measure. It completely silences a user, cutting off communication with their followers and preventing them from sharing any information, let alone abortion information. Because of this severity, Meta should be extremely careful to ensure fairness and accuracy when disabling or removing accounts. Rules governing account removal should be transparent and easy to understand, and Meta must enforce these policies consistently across different users and categories of content. But as our Stop Censoring Abortion results demonstrate, this isn't happening for many accounts sharing abortion information.  

Meta's Maze of Enforcement Policies 

If you navigate to Meta’s Transparency Center, you’ll find a page titled “How Meta enforces its policies.” This page contains a web of intersecting policies on when Meta will restrict accounts, disable accounts, and remove pages and groups. These policies overlap but don’t directly refer to each other, making it trickier for users to piece together how enforcement happens. 

At the heart of Meta's enforcement process is a strike system. Users receive strikes for posting content that violates Meta’s Community Standards. But not all Community Standards violations result in strikes, and whether Meta applies one depends on the “severity of the content” and the “context in which it was shared.” Meta provides little additional guidance on what violations are severe enough to amount to a strike or how context affects this assessment.  

According to Meta's Restricting Accounts policy, for most violations, 1 strike should only result in a warning—not any action against the account. How additional strikes affect an account differs between Facebook and Instagram (but Meta provides no specific guidance for Threads). Facebook relies on a progressive system, where additional strikes lead to increasing restrictions. Enforcement on Instagram is more opaque and leaves more to Meta’s discretion. Meta still counts strikes on Instagram, but it does not follow the same escalating structure of restrictions as it does on Facebook. 

Despite some vagueness in these policies, Meta is quite clear about one thing: On both Facebook and Instagram, an account should only be disabled or removed after “repeated” violations, warnings, or strikes. Meta states this multiple times throughout its enforcement policies. Its Disabling Accounts policy suggests that generally, an account needs to receive at least 5 strikes for Meta to disable or remove it from the platform. The only caveat is for severe violations, such as posting child sexual exploitation content or violating the dangerous individuals and organizations policy. In those extreme cases, Meta may disable an account after just one violation. 
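
Taken together, Meta's published rules describe an escalation that can be summarized in a few lines of logic. The sketch below illustrates that escalation as the Transparency Center describes it; the type names, function, and restriction labels are our own illustrative assumptions, not Meta's actual code. Only the one-strike warning, the roughly five-strike disable threshold, and the one-year strike window (discussed later in this post) come from Meta's stated policies.

```typescript
// Illustrative sketch of the strike escalation Meta's public policies describe.
// Names and labels are hypothetical; thresholds reflect the policies discussed above.

type Violation = { severe: boolean; date: Date };

const ONE_YEAR_MS = 365 * 24 * 60 * 60 * 1000; // strikes expire after one year
const DISABLE_THRESHOLD = 5;                   // "at least 5 strikes" before disabling

function accountAction(violations: Violation[], now: Date): string {
  // Extreme violations (e.g., child sexual exploitation) can disable an account immediately.
  if (violations.some((v) => v.severe)) return "disable immediately";

  // Otherwise, only strikes from the past year should count.
  const activeStrikes = violations.filter(
    (v) => now.getTime() - v.date.getTime() < ONE_YEAR_MS
  ).length;

  if (activeStrikes === 0) return "no action";
  if (activeStrikes === 1) return "warning only";                           // one strike = a warning, not an account action
  if (activeStrikes < DISABLE_THRESHOLD) return "progressive restrictions"; // e.g., temporary feature limits on Facebook
  return "disable or remove";                                               // only after repeated strikes
}
```

Read against this logic, an account with a single non-severe flag should land in the warning branch, not be disabled—which is precisely the mismatch documented below.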

Meta’s Practices Don’t Match Its Policies 

Our survey results detailed a different reality. Many survey respondents told EFF that Meta disabled or removed their account without warning and without indication that they had received repeated strikes. It’s important to note that Meta does not have a unique enforcement process for prescription drug or abortion-related content. When EFF asked Meta about this issue, Meta confirmed that “enforcement actions on prescription drugs are subject to Meta's standard enforcement policies.”

So here are a few other possible explanations for this disconnect—each of them troubling in its own way:

Meta is Ignoring Its Own Strike System 

If Meta is taking down accounts without warning or after only one alleged Community Standards violation, the company is failing to follow its own strike system. This makes enforcement arbitrary and denies users the opportunity for correction that Meta's system supposedly provides. It’s also especially problematic for abortion advocates, given that Meta has been incorrectly flagging educational abortion content as violating its Community Standards. This means that a single content moderation error could take down not only the post, but the entire account too.

This may be what happened to Emory University’s RISE Center for Reproductive Health Research (a story we described in more detail earlier in this series). After RISE shared an educational post about mifepristone, its Instagram account was suddenly disabled. RISE received no earlier warnings from Meta before its account went dark. When RISE was finally able to get back into its account, it discovered that only this single post had been flagged. Again, according to Meta's own policies, one strike should only result in a warning. But this isn’t what happened here.

Similarly, the Tamtang Foundation, an abortion advocacy organization based in Thailand, had its Facebook account suddenly disabled earlier this year. Tamtang told EFF it had received a warning on only one flagged post, made 10 months before its account was taken down. It received none of the other progressive strike restrictions Meta claims to apply to Facebook accounts.

[Screenshot: tamtang_screenshot.jpg]

Meta is Misclassifying Educational Content as "Extreme Violations" 

If Meta is accurately following its strike policy but still disabling accounts after only one violation, this points to an even more concerning possibility. Meta’s content moderation system may be categorizing educational abortion information as severe enough to warrant immediate disabling, treating university research posts and clinic educational materials as equivalent to child exploitation or terrorist content.  

This would be a fundamental and dangerous mischaracterization of legitimate medical information, and it is, we hope, unlikely. But it’s unfortunately not outside the realm of possibility. We already wrote about a similar disturbing mischaracterization earlier in this series. 

Users Are Unknowingly Receiving Multiple Strikes 

Finally, Meta may be giving users multiple strikes without notifying them. This raises several serious concerns.

First is the lack of transparency. Meta explicitly states in its "Restricting Accounts" policy that it will notify users when it “remove[s] your content or add[s] restrictions to your account, Page or group.” This policy is failing if users are not receiving these notifications and are not made aware there’s an issue with their account. 

It may also mean that Meta’s policies themselves are too vague to provide meaningful guidance to users. This lack of clarity is harmful. If users don’t know what's happening to their accounts, they can’t appeal Meta’s content moderation decisions, adjust their content, or understand Meta's enforcement boundaries moving forward. 

Finally—and most troubling—if Meta is indeed disabling accounts that share abortion information for receiving multiple violations, this points to an even broader censorship crisis. Users may not be aware just how many informational abortion-related posts are being incorrectly flagged and counted as strikes. This is especially concerning given that Meta places a one-year time limit on strikes, meaning these multiple alleged violations must all have accumulated within a single year.

The Broader Censorship Crisis 

These account suspensions represent just one facet of Meta's censorship of reproductive health information documented by our Stop Censoring Abortion campaign. When combined with post removals, shadowbanning, and content restrictions, the message is clear: Meta platforms are increasingly unfriendly environments for abortion advocacy and education. 

If Meta wants to practice what it preaches, then it must reform its enforcement policies to provide clear, transparent guidelines on when and how strikes apply, and then consistently and accurately apply those policies. Accounts should not be taken down for only one alleged violation when the policies state otherwise.  

The stakes couldn't be higher. In a post-Roe landscape where access to accurate reproductive health information is more crucial than ever, Meta's enforcement system is silencing the very voices communities need most. 

This is the fifth post in our blog series documenting the findings from our Stop Censoring Abortion campaign. Read more at https://www.eff.org/pages/stop-censoring-abortion  

Affected by unjust censorship? Share your story using the hashtag #StopCensoringAbortion. Amplify censored posts and accounts, share screenshots of removals and platform messages—together, we can demonstrate how these policies harm real people. 

Lisa Femia

Governor Newsom Should Make it Easier to Exercise Our Privacy Rights

3 days 23 hours ago

California has one of the nation’s most comprehensive consumer data privacy laws. But it’s not always easy for people to exercise those privacy rights. That’s why we supported Assemblymember Josh Lowenthal’s A.B. 566 throughout the legislative session and are now asking California Governor Gavin Newsom to sign it into law. 

The easier it is to exercise your rights, the more power you have.  

A.B. 566 does a very simple thing. It directs browsers—such as Google’s Chrome, Apple’s Safari, Microsoft’s Edge, or Mozilla’s Firefox—to give all their users the option to tell companies they don't want them to sell or share personal information that’s collected about them on the internet. In other words: it makes it easy for Californians to tell companies what they want to happen with their own information.

By making it easy to use tools that allow you to send these sorts of signals to companies’ websites, A.B. 566 makes the California Consumer Privacy Act more user-friendly. And the easier it is to exercise your rights, the more power you have.  

This is a necessary step, because even though the CCPA gives all people in California the right to tell companies not to sell or share their personal information, companies have not made it easy to exercise this right. Right now, someone who wants to make these requests has to individually go through the processes set up by each company that may collect their information. Companies have also often made it pretty hard to make, or even find out how to make, these requests. Giving people the option for an easier way to communicate how they want companies to treat their personal information helps rebalance the often-lopsided relationship between the two.
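
To make the mechanism concrete: browsers can attach this kind of preference to every request as a simple signal. One existing example is the Global Privacy Control specification, which browsers express as a "Sec-GPC: 1" request header (and a navigator.globalPrivacyControl property in JavaScript). The sketch below shows how a site might read such a signal; the Express-style handler and variable names are hypothetical illustrations, not anything specified by A.B. 566 itself.

```typescript
// Hypothetical Express middleware showing how a site could honor a browser
// opt-out preference signal such as Global Privacy Control (GPC).
import express from "express";

const app = express();

app.use((req, res, next) => {
  // Browsers with the opt-out setting enabled send the header "Sec-GPC: 1".
  const optedOut = req.header("Sec-GPC") === "1";
  if (optedOut) {
    // Treat the signal as a request not to sell or share this visitor's data.
    res.locals.doNotSellOrShare = true;
  }
  next();
});

app.listen(3000);
```

The point of the bill is that users should only have to flip this setting once in their browser, rather than hunting down each company's individual opt-out process.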

Industry groups who want to keep the scales tipped firmly in favor of corporations have lobbied heavily against A.B. 566. But we urge Gov. Newsom not to listen to those who want it to remain difficult for people to exercise their CCPA rights. EFF’s technologists, lawyers, and advocates think A.B. 566 empowers consumers without imposing regulations that would limit innovation. We think Californians should have easy tools to tell companies how to deal with their information, and urge Gov. Newsom to sign this bill.

Hayley Tsukayama

[80 Years After the Atomic Bombing and the End of the War] The 8/6 Hiroshima Document, by the Hiroshima Branch 8/6 Reporting Team

4 days 3 hours ago
JCJ members making their appeal in front of the Atomic Bomb Dome. Hiroshima's Peace Memorial Ceremony was held on August 6 while police maintained a heavy security presence on the roads leading to each entrance of the Peace Memorial Park and the city, for the second year running, pushed through entry restrictions covering the entire park. The JCJ Hiroshima branch again organized an "8/6 reporting team" this year and covered the scene inside and outside the ceremony "venue" on the day. Members passed through bag checks to enter the Peace Memorial Park and, for the first time, also staged an appeal holding boards calling for the abolition of nuclear weapons. At 7:30 a.m., a total of ten branch members and citizens, facing crowds occupying the sidewalk and loud sound echoing in front of the Atomic Bomb…
JCJ

Safeguarding Human Rights Must Be Integral to the ICC Office of the Prosecutor’s Approach to Tech-Enabled Crimes

4 days 3 hours ago

This is Part I of a two-part series on EFF’s comments to the International Criminal Court Office of the Prosecutor (OTP) about its draft policy on cyber-enabled crimes.

As human rights atrocities around the world unfold in the digital age, genocide, war crimes and crimes against humanity are as heinous and wrongful as they were before the advent of AI and social media.

But criminal methods and evidence increasingly involve technology. Think mass digital surveillance of an ethnic or religious community used to persecute them as part of a widespread or systematic attack against civilians, or cyberattacks that disable hospitals or other essential services, causing injury or death.

The International Criminal Court (ICC) Office of the Prosecutor (OTP) intends to use its mandate and powers to investigate and prosecute cyber-enabled crimes within the court's jurisdiction—those covered under the 1998 Rome Statute treaty. In March 2025, the office released for public comment a draft of its proposed policy for how it plans to go about it.

We welcome the OTP draft and urge the OTP to ensure its approach is consistent with internationally recognized human rights, including the rights to free expression, to privacy (with encryption as a vital safeguard), and to fair trial and due process.

We believe those who use digital tools to commit genocide, crimes against humanity, or war crimes should face justice. At the same time, EFF, along with our partner Derechos Digitales, emphasized in comments submitted to the OTP that safeguarding human rights must be integral to its investigations of cyber-enabled crimes.

That’s how we protect survivors, prevent overreach, gather evidence that can withstand judicial scrutiny, and hold perpetrators to account. In a similar context, we’ve opposed abusive domestic cybercrime laws and policing powers that invite censorship, arbitrary surveillance, and other human rights abuses.

In this two-part series, we’ll provide background on the ICC and OTP’s draft policy, including what we like about the policy and areas that raise questions.

OTP Defines Cyber-Enabled Crimes

The ICC, established by the Rome Statute, is the permanent international criminal court with jurisdiction over individuals for four core crimes—genocide, crimes against humanity, war crimes, and the crime of aggression. It also exercises jurisdiction over offences against the administration of justice at the court itself. Within the court, the OTP is an independent organ responsible for investigating these crimes and prosecuting them.

The OTP’s draft policy explains how it will apply the statute when crimes are committed or facilitated by digital means, while emphasizing that ordinary cybercrimes (e.g., hacking, fraud, data theft) are outside ICC jurisdiction and remain the responsibility of national courts to address.

The OTP defines “cyber-enabled crime” as crimes within the court’s jurisdiction that are committed or facilitated by technology. “Committed by” covers cases where the online act is the harmful act (or an essential digital contribution): for example, when malware is used to disable a hospital and people are injured or die, the cyber operation is the attack itself.

A crime is “facilitated by” technology, according to the OTP draft, when digital activity helps someone commit a crime under modes of liability other than direct commission (e.g., ordering, inducing, aiding or abetting), and it doesn’t matter if the main crime was itself committed online. For example, authorities use mass digital surveillance to locate members of a protected group, enabling arrests and abuses as part of a widespread or systematic attack (i.e., persecution).

It further makes clear that the OTP will use its full investigative powers under the Rome Statute—relying on national authorities acting under domestic law and, where possible, on voluntary cooperation from private entities—to secure digital evidence across borders.

Such investigations can be highly intrusive and risk sweeping up data about people beyond the target. Yet many states’ current investigative practices fall short of international human rights standards. The draft should therefore make clear that cooperating states must meet those standards, including by assessing whether they can conduct surveillance in a manner consistent with the rule of law and the right to privacy.

Digital Conduct as Evidence of Rome Statute Crimes

Even when no ICC crime happens entirely online, the OTP says online activity can still be relevant evidence. Digital conduct can help show intent, context, or policies behind abuses (for example, to prove a persecution campaign), and it can also reveal efforts to hide or exploit crimes (like propaganda). In simple terms, online activity can corroborate patterns, link incidents, and support inferences about motive, policy, and scale relevant to these crimes.

The prosecution of such crimes or the use of related evidence must be consistent with internationally recognized human rights standards, including privacy and freedom of expression, the very freedoms that allow human rights defenders, journalists, and ordinary users to document and share evidence of abuses.

In Part II we’ll take a closer look at the substance of our comments about the policy’s strengths and our recommendations for improvements and more clarity.

Karen Gullo

[B] Wishma's Bereaved Family Holds a Trial Briefing: "So That the Same Tragedy Is Never Repeated" [Nagoya Immigration Bureau Death Case]

4 days 21 hours ago
Wishma Sandamali, a Sri Lankan national, died in March 2021 in a detention facility of the Nagoya Regional Immigration Services Bureau. In March 2022, her bereaved family filed a state compensation lawsuit, arguing that her death was caused by the Nagoya immigration bureau's failure to take appropriate measures as she grew weaker. In December, four witnesses, including doctors involved in the case, are scheduled to be examined, and the trial is entering a crucial phase. Against this backdrop, the family held a briefing in Tokyo on the 14th to explain the circumstances of the case and the course of the trial. Her younger sister Poornima, along with attorney Shoichi Ibusuki of the family's legal team and members of the support group "BOND," took the stage and spoke to the citizens in attendance. (Kensuke Iwanaka)
Nikkan Berita

EFF Statement on TikTok Ownership Deal

4 days 21 hours ago

One of the reasons we opposed the TikTok "ban" is that the First Amendment is supposed to protect us from government using its power to manipulate speech. But as predicted, the TikTok "ban" has only resulted in turning over the platform to the allies of a president who seems to have no respect for the First Amendment.

TikTok was never proven to be a current national security problem, so it's hard to say the sale will alleviate those unproven concerns. And it remains to be seen if the deal places any limits on the new ownership sharing user data with foreign governments or anyone else—the security concern that purportedly justified the forced sale. As for the algorithm, if the concern had been that TikTok could be a conduit for Chinese government propaganda—a concern the Supreme Court declined to even consider—people can now be concerned that TikTok could be a conduit for U.S. government propaganda. An administration official reportedly has said the new TikTok algorithm will be "retrained" with U.S. data to make sure the system is "behaving properly."

David Greene

Going Viral vs. Going Dark: Why Extremism Trends and Abortion Content Gets Censored

4 days 23 hours ago

This is the fourth installment in a blog series documenting EFF's findings from the Stop Censoring Abortion campaign. You can read additional posts here. 

One of the goals of our Stop Censoring Abortion campaign was to put names, stories, and numbers to the experiences we’d been hearing about: people and organizations having their abortion-related content – or entire accounts – removed or suppressed on social media. In reviewing survey submissions, we found that multiple users reported experiencing shadowbanning. Shadowbanning (or “deranking”) is widely experienced and reported by content creators across various social media platforms, and it’s a phenomenon that those who create content about abortion and sexual and reproductive health know all too well.

Shadowbanning is the often silent suppression of certain types of content or creators in your social media feeds. It’s not something that a U.S.-based creator is notified about, but rather something they simply find out when their posts stop getting the same level of engagement that they’re used to, or when people are unable to easily find their account using the platform’s search function. Essentially, it is when a platform or its algorithm decides that other users should see less of a creator or specific topic. Many platforms deny that shadowbanning exists; they will often blame reduced reach of posts on ‘bugs’ in the algorithm. At the same time, companies like Meta have admitted that content is ranked, but much about how this ranking system works remains unknown. Meta says that there are five content categories that, while allowed on its platforms, “may not be eligible for recommendation.” Content discussing abortion pills may fall under the umbrella of “Content that promotes the use of certain regulated products,” but posts that simply affirm abortion as a valid reproductive decision, or that share storytellers’ experiences, don’t match any of the criteria that would make them ineligible for recommendation by Meta.

Whether a creator relies on a platform for income or uses it to educate the public, shadowbanning can be devastating for the growth of an account. And this practice often seems to disproportionately affect people who are talking about ‘taboo’ topics like sex, abortion, and LGBTQ+ identities, such as Kim Adamski, a sexual health educator who shared her story with our Stop Censoring Abortion project. As you can see in the images below, Kim’s Instagram account does not show up as a suggestion when searched for, and can only be found after typing in the full username.


Earlier this year, the Center for Intimacy Justice shared their report, "The Digital Gag: Suppression of Sexual and Reproductive Health on Meta, TikTok, Amazon, and Google", which found that of the 159 nonprofits, content creators, sex educators, and businesses surveyed, 63% had content removed on Meta platforms and 55% had content removed on TikTok. This suppression is happening at the same time as platforms continue to allow and elevate videos of violence and gore and extremist hateful content. This pattern is troubling and is only becoming more prevalent as people turn to social media to find the information they need to make decisions about their health.

Reproductive rights and sex education have been under attack across the U.S. for decades. Since the Dobbs v. Jackson decision in 2022, 20 states have banned or limited access to abortion. Meanwhile, 16 states don’t require sex education in public schools to be medically accurate, 19 states have laws that stigmatize LGBTQ+ identities in their sex education curricula, and 17 states specifically stigmatize abortion in their sex education curricula.

In a world that is constantly finding ways to legislate away bodily autonomy and hide queer identities, social media platforms have an opportunity to stand as safe havens for access to community and knowledge.

Online platforms are critical lifelines for people seeking possibly life-saving information about their sexual and reproductive health. We know that when people are unable to find or access the information they need within their communities, they will turn to the internet and social media. This is especially important for abortion-seekers and trans youth living in states where healthcare is being criminalized.

In a world that is constantly finding ways to legislate away bodily autonomy and hide queer identities, social media platforms have an opportunity to stand as safe havens for access to community and knowledge. Limiting access to this information by suppressing the people and organizations who are providing it is an attack on free expression and a profound threat to freedom of information—principles that these platforms claim to uphold. Now more than ever, we must continue to push back against censorship of sexual and reproductive health information so that the internet can still be a place where all voices are heard and where all can learn.

This is the fourth post in our blog series documenting the findings from our Stop Censoring Abortion campaign. Read more in the series: https://www.eff.org/pages/stop-censoring-abortion

Kenyatta Thomas