Safeguarding Human Rights Must Be Integral to the ICC Office of the Prosecutor’s Approach to Tech-Enabled Crimes
This is Part I of a two-part series on EFF’s comments to the International Criminal Court Office of the Prosecutor (OTP) about its draft policy on cyber-enabled crimes.
As human rights atrocities around the world unfold in the digital age, genocide, war crimes, and crimes against humanity are as heinous and wrongful as they were before the advent of AI and social media.
But criminal methods and evidence increasingly involve technology. Think mass digital surveillance of an ethnic or religious community used to persecute them as part of a widespread or systematic attack against civilians, or cyberattacks that disable hospitals or other essential services, causing injury or death.
The International Criminal Court (ICC) Office of the Prosecutor (OTP) intends to use its mandate and powers to investigate and prosecute cyber-enabled crimes within the court's jurisdiction—those covered under the 1998 Rome Statute treaty. The office released for public comment in March 2025 a draft of its proposed policy for how it plans to go about it.
We welcome the OTP draft and urge the OTP to ensure its approach is consistent with internationally recognized human rights, including the rights to free expression, to privacy (with encryption as a vital safeguard), and to fair trial and due process.
We believe those who use digital tools to commit genocide, crimes against humanity, or war crimes should face justice. At the same time, EFF, along with our partner Derechos Digitales, emphasized in comments submitted to the OTP that safeguarding human rights must be integral to its investigations of cyber-enabled crimes.
That’s how we protect survivors, prevent overreach, gather evidence that can withstand judicial scrutiny, and hold perpetrators to account. In a similar context, we’ve opposed abusive domestic cybercrime laws and policing powers that invite censorship, arbitrary surveillance, and other human rights abuses.
In this two-part series, we’ll provide background on the ICC and OTP’s draft policy, including what we like about the policy and areas that raise questions.
OTP Defines Cyber-Enabled Crimes
The ICC, established by the Rome Statute, is the permanent international criminal court with jurisdiction over individuals for four core crimes—genocide, crimes against humanity, war crimes, and the crime of aggression. It also exercises jurisdiction over offences against the administration of justice at the court itself. Within the court, the OTP is an independent organ responsible for investigating these crimes and prosecuting them.
The OTP’s draft policy explains how it will apply the statute when crimes are committed or facilitated by digital means, while emphasizing that ordinary cybercrimes (e.g., hacking, fraud, data theft) are outside ICC jurisdiction and remain the responsibility of national courts to address.
The OTP defines “cyber-enabled crime” as crimes within the court’s jurisdiction that are committed or facilitated by technology. “Committed by” covers cases where the online act is the harmful act (or an essential digital contribution): for example, when malware is used to disable a hospital and people are injured or die, the cyber operation can be the attack itself.
A crime is “facilitated by” technology, according to the OTP draft, when digital activity helps someone commit a crime under modes of liability other than direct commission (e.g., ordering, inducing, aiding or abetting), and it doesn’t matter if the main crime was itself committed online. For example, authorities use mass digital surveillance to locate members of a protected group, enabling arrests and abuses as part of a widespread or systematic attack (i.e., persecution).
It further makes clear that the OTP will use its full investigative powers under the Rome Statute—relying on national authorities acting under domestic law and, where possible, on voluntary cooperation from private entities—to secure digital evidence across borders.
Such investigations can be highly intrusive and risk sweeping up data about people beyond the target. Yet many states’ current investigative practices fall short of international human rights standards. The draft should therefore make clear that cooperating states must meet those standards, including by assessing whether they can conduct surveillance in a manner consistent with the rule of law and the right to privacy.
Digital Conduct as Evidence of Rome Statute Crimes
Even when no ICC crime happens entirely online, the OTP says online activity can still be relevant evidence. Digital conduct can help show intent, context, or policies behind abuses (for example, to prove a persecution campaign), and it can also reveal efforts to hide or exploit crimes (like propaganda). In simple terms, online activity can corroborate patterns, link incidents, and support inferences about motive, policy, and scale relevant to these crimes.
The prosecution of such crimes or the use of related evidence must be consistent with internationally recognized human rights standards, including privacy and freedom of expression, the very freedoms that allow human rights defenders, journalists, and ordinary users to document and share evidence of abuses.
In Part II we’ll take a closer look at the substance of our comments about the policy’s strengths and our recommendations for improvements and more clarity.
EFF Statement on TikTok Ownership Deal
One of the reasons we opposed the TikTok "ban" is that the First Amendment is supposed to protect us from government using its power to manipulate speech. But as predicted, the TikTok "ban" has only resulted in turning over the platform to the allies of a president who seems to have no respect for the First Amendment.
TikTok was never proven to be a current national security problem, so it's hard to say the sale will alleviate those unproven concerns. And it remains to be seen if the deal places any limits on the new ownership sharing user data with foreign governments or anyone else—the security concern that purportedly justified the forced sale. As for the algorithm, if the concern had been that TikTok could be a conduit for Chinese government propaganda—a concern the Supreme Court declined to even consider—people can now be concerned that TikTok could be a conduit for U.S. government propaganda. An administration official has reportedly said the new TikTok algorithm will be "retrained" with U.S. data to make sure the system is "behaving properly."
Going Viral vs. Going Dark: Why Extremism Trends and Abortion Content Gets Censored
This is the fourth installment in a blog series documenting EFF's findings from the Stop Censoring Abortion campaign. You can read additional posts here.
One of the goals of our Stop Censoring Abortion campaign was to put names, stories, and numbers to the experiences we’d been hearing about: people and organizations having their abortion-related content – or entire accounts – removed or suppressed on social media. In reviewing survey submissions, we found that multiple users reported experiencing shadowbanning. Shadowbanning (or “deranking”) is widely experienced and reported by content creators across various social media platforms, and it’s a phenomenon that those who create content about abortion and sexual and reproductive health know all too well.
Shadowbanning is the often silent suppression of certain types of content or creators in your social media feeds. It’s not something that a U.S.-based creator is notified about, but rather something they simply find out when their posts stop getting the level of engagement they’re used to, or when people are unable to easily find their account using the platform’s search function. Essentially, it is when a platform or its algorithm decides that other users should see less of a creator or specific topic. Many platforms deny that shadowbanning exists; they often blame the reduced reach of posts on ‘bugs’ in the algorithm. At the same time, companies like Meta have admitted that content is ranked, but much about how this ranking system works remains unknown. Meta says that there are five content categories that, while allowed on its platforms, “may not be eligible for recommendation.” Content discussing abortion pills may fall under the umbrella of “Content that promotes the use of certain regulated products,” but posts that simply affirm abortion as a valid reproductive decision, or that share storytellers’ experiences, don’t match any of the criteria that would make them ineligible for recommendation by Meta.
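To make the “allowed but not eligible for recommendation” distinction concrete, here is a purely illustrative toy sketch of how a ranking pipeline could quietly demote such content. The category names, the scoring rule, and every identifier below are invented for explanation only; nothing here describes Meta’s or any other platform’s actual system.

```python
# Hypothetical sketch: "allowed but not recommendable" content ranking.
# All category names and the scoring rule are invented for illustration;
# this does not reflect any real platform's system.

from dataclasses import dataclass

# Invented labels an upstream classifier might assign to a post.
NOT_RECOMMENDABLE = {"regulated_products", "borderline_health_claims"}

@dataclass
class Post:
    author: str
    text: str
    engagement_score: float  # baseline ranking signal
    categories: set          # labels from a hypothetical classifier

def feed_score(post: Post, viewer_follows_author: bool) -> float:
    """Toy rule: flagged posts still reach followers, but are zeroed
    out of discovery surfaces (search, suggestions, explore feeds)."""
    if post.categories & NOT_RECOMMENDABLE and not viewer_follows_author:
        return 0.0  # silently excluded; the author is never told
    return post.engagement_score

posts = [
    Post("educator", "How medication abortion works", 0.9,
         {"borderline_health_claims"}),
    Post("brand", "New sneakers drop Friday", 0.4, set()),
]

# A non-follower's discovery feed: the educational post sinks to the
# bottom with a score of 0.0, without any removal or notification.
for p in sorted(posts,
                key=lambda p: feed_score(p, viewer_follows_author=False),
                reverse=True):
    print(p.author, feed_score(p, viewer_follows_author=False))
```

The property this toy model shares with what creators report is silence: the post is never removed and the author receives no notice, yet it effectively disappears from search and recommendations.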
Whether a creator relies on a platform for income or uses it to educate the public, shadowbanning can be devastating for the growth of an account. And this practice often seems to disproportionately affect people who talk about ‘taboo’ topics like sex, abortion, and LGBTQ+ identities, such as Kim Adamski, a sexual health educator who shared her story with our Stop Censoring Abortion project. Kim’s Instagram account does not show up as a suggestion when searched; it can only be found by typing in her full username.
Earlier this year, the Center for Intimacy Justice shared their report, "The Digital Gag: Suppression of Sexual and Reproductive Health on Meta, TikTok, Amazon, and Google", which found that of the 159 nonprofits, content creators, sex educators, and businesses surveyed, 63% had content removed on Meta platforms and 55% had content removed on TikTok. This suppression is happening even as platforms continue to allow and elevate violent, gory videos and extremist, hateful content. This pattern is troubling and only becoming more prevalent as people turn to social media to find the information they need to make decisions about their health.
Reproductive rights and sex education have been under attack across the U.S. for decades. Since the Dobbs v. Jackson decision in 2022, 20 states have banned or limited access to abortion. Meanwhile, 16 states don’t require sex education in public schools to be medically accurate, 19 states have laws that stigmatize LGBTQ+ identities in their sex education curricula, and 17 states specifically stigmatize abortion in their sex education curricula.
Online platforms are critical lifelines for people seeking possibly life-saving information about their sexual and reproductive health. We know that when people are unable to find or access the information they need within their communities, they will turn to the internet and social media. This is especially important for abortion-seekers and trans youth living in states where healthcare is being criminalized.
In a world that is constantly finding ways to legislate away bodily autonomy and hide queer identities, social media platforms have an opportunity to stand as safe havens for access to community and knowledge. Limiting access to this information by suppressing the people and organizations who are providing it is an attack on free expression and a profound threat to freedom of information—principles that these platforms claim to uphold. Now more than ever, we must continue to push back against censorship of sexual and reproductive health information so that the internet can still be a place where all voices are heard and where all can learn.
This is the fourth post in our blog series documenting the findings from our Stop Censoring Abortion campaign. Read more in the series: https://www.eff.org/pages/stop-censoring-abortion