EFF Submission to UN Report on the Role of Media in the Context of Israel’s Policies Toward Palestinians
The UN Special Rapporteur on the situation of human rights in the Palestinian territories occupied since 1967 recently announced a study addressing the killings and attacks against Palestinian journalists and media workers, the destruction of media infrastructure in Gaza, and the production and dissemination of narratives that may enable, justify, or incite international crimes.
As part of this consultation, EFF contributed a submission identifying a significant deterioration of press freedom and free expression since October 2023, including increased censorship and a wave of killings of journalists, compounding an already pervasive censorship and surveillance regime for Palestinians.
In particular, concerns raised in our submission relate to:
- Government takedown requests
- Disinformation and content moderation
- Attacks on internet infrastructure
Concerns about censorship in Palestine continue to mount and are now being raised in multiple international forums. Ending the deliberate digital isolation of the Palestinian people is critical to protecting fundamental human rights.
Read the briefing in full here.
Information and Communications Council, Information and Communications Technology Subcommittee, IP Network Facilities Committee: Working Group on the Use of Public Cloud Systems in the Telecommunications Business (3rd Meeting)
Subsidy grant decision under the "Research and Development Promotion Program for Bridging the Digital Divide," which subsidizes R&D of new ICT devices for older adults and people with disabilities
Call for comments on a draft directive setting the licensing policy for key broadcasting stations for BS broadcasting, with applications to be accepted in 2026 (Reiwa 8)
Publication of the results of the FY2025 survey on internet literacy indicators for young people
Accelerated disbursement of the ordinary local allocation tax (June regular distribution) for damage from the 2026 (Reiwa 8) forest fire in Otsuchi, Iwate Prefecture
Former EFF Activism Director's New Book, Transaction Denied, Explores What Happens When Financial Companies Act like Censors
A U.S. citizen who teaches Persian poetry classes online is suddenly unable to receive payments or access funds when his account is flagged and frozen by PayPal and its subsidiary Venmo. A Muslim city councilwoman in New York City has a Venmo payment blocked because she uses the name of a Bangladeshi restaurant in the transaction. Online hubs for erotic storytelling repeatedly lose their payment accounts. Others active in drug legalization fights struggle to keep their bank accounts.
These may sound like one-off issues, but they are not. They occur with frightening regularity, as Rainey Reitman, EFF's former Activism Director and Chief Program Officer (she left EFF in 2022), describes in her new book, Transaction Denied. The book sheds new light on a serious problem that often hides in the shadows, and pushes us to ask an increasingly important question: “Is it ever OK for financial intermediaries to act as the arbiters of online expression?”
Both a storyteller and an advocate, Rainey exposes hidden systems of power that shape our choices, our speech, and, ultimately, our society. - Cindy Cohn
Reitman makes her case about the impact of financial institutions and payment intermediaries shutting down accounts and inhibiting transactions through compelling individual stories, some of which have not been shared before. The people impacted are diverse: authors, teachers, journalists, elected politicians, and more are suddenly unable to retrieve or receive funds, with little explanation, transparency, or recourse. Reitman shows the reasons are frequently speech-related, often resulting from arbitrary corporate policy, a broad (mis)interpretation of the law, or pressure from anti-speech advocates.
In the example of the Persian poetry teacher, the blocking stems from a highly risk-averse interpretation of U.S. sanctions on Iran: sanctions aimed at deterring weapons development or terrorism instead snared a poetry professor and a New York City councilwoman. Reitman demonstrates how these sanctions, and others like them, have an outsized impact on Muslims.
But Transaction Denied is also a guide for those interested in fighting for free speech. The book covers over a decade of successful campaigns and shows that advocacy can win the day—and is sometimes necessary to counter pro-censorship campaigns. Reitman offers a behind-the-scenes view of the campaign to help restore the Stripe account of the Nifty Archive Alliance, a nonprofit which supports the Nifty Archive, a hub of erotic storytelling for the queer community since 1992. She covers EFF's successful coalition and campaign to restore the PayPal account of Smashwords, a hub for self-published fiction. And in what has become a critical moment for free speech and free press, she describes how several EFF staff members and two EFF board members became the seed for a new nonprofit, the Freedom of the Press Foundation, which continues to partner with EFF today in advancing the rights of journalists.
It’s a banner time for books by EFF staff members and friends. If you're concerned about how online privacy has changed over the last three decades, read EFF Executive Director Cindy Cohn's book, Privacy’s Defender, released in May. (All proceeds from the sale of hard copies of Privacy’s Defender are being donated to EFF, so your book order will help EFF continue fighting for the principles Cindy holds dear.) If you are worried about the individuals trapped in a system where massive financial companies can shut down their accounts, effectively locking up their access to money, based entirely on their speech, grab Transaction Denied, released earlier this month, from Beacon Press, Amazon, and Bookshop.org. (Half of the author proceeds go to Freedom of the Press Foundation.)
More likely—you'll want both books on your shelf. Happy reading!
[Recommended Book] Shigenori Kanehira, Nagare ni Sakarau (Against the Current): The Fusion of the SNS Society and the "Public"; the essence of a "migratory fish" who reports in the field and listens to people's voices. By Sadaaki Iwasaki (Secretary-General, Media Research Institute)
The Open Social Web Needs Section 230 to Survive
If you want to overthrow Big Tech, you’ll need Section 230. The paradigm shift being built with the Open Social Web can put communities back in control of social media infrastructure and finally end our dependency on enshittified corporate giants. But while these incumbents can weather multimillion-dollar lawsuits, the small hosts driving this revolution could be picked off one by one without the protections offered by 230.
The internet as we know it is built on Section 230, a law from the 90s that generally says internet users are legally responsible for their own speech — not the services hosting their speech. The purpose of 230 was to enable diverse forums for speech online, which defined the early internet. These scattered online communities have since been largely captured by a handful of multi-billion dollar companies that found profit in controlling your voice online. While critics are rightly concerned about this new corporate influence and surveillance, some look to diminishing Section 230 as the nuclear option to regain control.
The thing is, that would be a huge gift to Big Tech, and detrimental to our best shot at actually undermining corporate and state control of speech online.
Dethroning Big Tech
We’re fed up with legacy social media trapping us in walled gardens, where the world's biggest companies like Google and Meta call the shots. Our communities, and our voices, are being held hostage as billionaires’ platforms surveil, betray, and censor us. We’re not alone in this frustration, and fortunately, people are collaborating globally to build another way forward: the Open Social Web.
This new infrastructure puts the public’s interest first by reclaiming the principles of interoperability and decentralization from the early internet. In short, it puts protocols over platforms and lets people own their connections with others. Whether you choose a Fediverse app like Mastodon or an ATmosphere app like Bluesky, your audience and community stay within reach. It’s a vision of social media akin to our lives offline: you decide who to be in touch with and how, and no central authority can threaten to snuff out those connections. It’s social media for humans, not advertisers and authoritarians.
Behind that vision is a beautiful mess of protocols bringing the open social media web to life. Each protocol is a unique language for applications, determining how and where messages are sent. While this means there is great variety to these projects, it also means everyone who spins up a server, develops an app, or otherwise hosts others’ speech has skin in the game when it comes to defending Section 230.
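To make "protocols over platforms" concrete, here is a minimal sketch of the kind of message these protocols carry. The JSON shape follows ActivityPub's Activity Streams vocabulary, but the server names and user handles are hypothetical, and real deployments add authentication (HTTP signatures) and much more:

```python
import json

def make_create_activity(actor: str, text: str, to: str) -> dict:
    """Build a minimal ActivityPub-style 'Create' activity.

    Any server that speaks the protocol can produce or deliver this
    shape, regardless of which app (Mastodon, etc.) generated it.
    """
    return {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Create",
        "actor": actor,
        "to": [to],
        "object": {
            "type": "Note",
            "attributedTo": actor,
            "content": text,
            "to": [to],
        },
    }

# A user on one (hypothetical) small instance writes to a follower
# hosted on an entirely different server:
activity = make_create_activity(
    actor="https://social.example/users/alice",
    text="Hello from a small instance!",
    to="https://other.example/users/bob",
)

# Delivery is, at its core, an HTTP POST of this JSON to the
# recipient's inbox; the two servers need share nothing else.
print(json.dumps(activity, indent=2))
```

Because the message format, not the platform, is the shared contract, thousands of independently run servers can interoperate, and each one is an intermediary hosting others' speech.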
What exactly is Section 230?
Section 230 protects freedom of expression online by protecting the US intermediaries that make the internet work. Passed in 1996 to preserve the then-burgeoning communities online, 230 enshrined important protections for free expression and for the ability to block or filter speech you don’t want on your site. One portion is credited as the “26 words that created the internet”:
“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
In other words, this bipartisan law recognizes that speech online relies on intermediaries — services that deliver messages between users — and holding them potentially liable for any message they deliver would only stifle that speech. Intuitively, when harmful speech occurs, the speaker should be the one held accountable. The effect is that most civil suits against users and services based on others' speech can quickly be dismissed, avoiding the most expensive parts of civil litigation.
Section 230 was never a license to host anything online, however. It does not protect companies that create illegal or harmful content. Nor does Section 230 protect companies from intellectual property claims.
What Section 230 has enabled, however, is the freedom and flexibility for online communities to self-organize. Without the specter of one bad actor exposing the host(s) to serious legal threats, intermediaries can moderate how they see fit or even defer to volunteers within these communities.
Why the Open Social Web Needs Section 230
The superpower of decentralized systems like the Fediverse is the ability for thousands of small hosts to each shoulder some of the burdens of hosting. No single site can assert itself as a necessary intermediary for everyone; instead, all must collaborate to ensure messages reach the intended audience. The result is something superior to any one design or mandate. It is an ecosystem that is greater than the sum of its parts, resilient to disruptions, and free to experiment with different approaches to community governance.
The open social web’s kryptonite, though, is the liability participants can face as intermediaries. The greater the potential liability, the more interference hosts face from powerful interests in the form of legal threats, the higher the monetary costs, and the less space there is for nuance in moderation. And in practice, participants may simply stop hosting to avoid those risks. The end result is that only the biggest and most resourced options can survive.
This isn’t just about the hosts in the Open Social Web, like Mastodon instances or Bluesky PDSes. In the U.S., Section 230’s protections extend to internet users when they distribute another person’s speech. For example, Section 230 protects a user who forwards an email with a defamatory statement. On the open social web, that means when you pass along a message to others through sharing, boosting, and quoting, you’re not liable for the other user’s speech. The alternative would be a web where one misclick could open you up to a defamation lawsuit.
Section 230 also applies across the infrastructure stack: Internet service providers, content delivery networks, and domain and hosting providers. Protections even extend to the new experimental infrastructures of decentralized mesh networks.
Beyond the existential risks to the feasibility of indie decentralized projects in the United States, weakening 230 protections would also make services worse. Being able to customize your social media experience from highly curated to totally laissez-faire in the open social web is only possible when the law allows space for private experiments in moderation approaches. The algorithmically driven firehose forced on users by antiquated social media giants is driven by the financial interests of advertisers, and would only be more tightly controlled in a post-230 world.
Defending 230
Laws aimed at changing 230 protections put decentralized projects like the open social web in a uniquely precarious position. That is why we urge lawmakers to carefully consider these impacts. It is also why the proponents and builders of a better web must be vigilant defenders of the legal tools that make their work possible.
The open social web embodies what we are protecting with Section 230. It’s our best chance at building a truly democratic public interest internet, where communities are in control.
2026 (Reiwa 8) Spring Conferment of Decorations
2026 (Reiwa 8) Spring Conferment of Decorations (Fire Service)
Tell Congress: Oppose the GUARD Act
Congress is moving quickly on the GUARD Act, with a key vote expected Thursday. This bill would force AI companies to collect sensitive ID or biometric information to verify every user’s age, and would ban teens from using many everyday digital tools to speak, learn, or ask questions online.
[Contribution] Don't Shoot at Fuji! Don't Shoot from Fuji! Opposing the deployment of ballistic and cruise missiles; a strong sense of alarm over military buildup; aiming for a flexible movement. By Yoshiharu Mochizuki (Secretary-General, "No Missiles at Fuji!" Association)
40+ Organizations Worldwide Urge Meta to Stop Financially Enabling Settler Violence Against Palestinians
Announcement of the 206th meeting of the Expert Committee on Food Additives (to be held May 7)
RightsCon 2026: Preparatory recommended resources
Council for the Promotion of Regulatory Reform, Digital and AI Working Group (9th Meeting)
The GUARD Act Isn’t Targeting Dangerous AI—It’s Blocking Everyday Internet Use
Lawmakers in Congress are moving quickly on the GUARD Act, an age-gating bill restricting minors’ access to a wide range of online tools, with a key vote expected this week. The proposal is framed as a response to alarming cases involving “AI companions” and vulnerable young users. But the text of the bill goes much further, and could require age gates even for search engines that use AI.
Tell Congress: Oppose the GUARD Act
If enacted, the GUARD Act won’t just target a narrow category of risky chatbots. It would require companies to verify the age of every user — then block anyone under 18 from interacting with a huge range of online systems. It would block minors from everyday online tools, undermine parental guidance, and force adults to sacrifice their privacy. In the process, it would require services to implement speech-restricting and privacy-invasive age-verification systems for everyone—not just kids.
Under the GUARD Act’s broad definitions, a high school student could be barred from asking a homework-help tool about an algebra problem. A teenager trying to return a product could be kicked out of a standard customer-service chat.
The concerns behind this bill are serious. There have been troubling reports of AI systems engaging in harmful interactions with young users, including cases involving self-harm. Those risks deserve attention. But they call for targeted solutions, like better safeguards and enforcement against bad actors, not sweeping restrictions. The bill’s sponsors say they’re targeting worst-case scenarios — but the bill regulates everyday use.
The GUARD Act’s Broad Definitions Reach Everyday Tools
The problem starts with how the bill defines an “AI chatbot.” It covers any system that generates responses that aren’t fully pre-written by the developer or operator. Such a broad definition sweeps in the basic functionality of all AI-powered tools.
Then there’s the definition of an “AI companion,” which minors are banned from using entirely. An AI companion is any chatbot that produces human-like responses and is designed to “encourage or facilitate” interpersonal or emotional interaction. That may sound aimed at simulated “friends” or therapy chatbots. But in practice, it’s much fuzzier.
Modern chatbots are designed to be conversational and helpful. A homework helper might say “good question” before walking a student through a problem. A customer service chatbot may respond empathetically to a complaint (“I’m sorry you’re having this problem.”) A general-purpose assistant might ask follow-up questions. All of these could be seen as facilitating “interpersonal” interaction — and triggering the GUARD Act.
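As a hypothetical illustration of how little it takes to fall outside "fully pre-written," consider a toy customer-service bot that merely fills a fixed template with an order number pulled from the user's message. No AI is involved, yet each reply is assembled at runtime, and its empathetic phrasing is the kind of language the bill's "emotional interaction" wording could arguably sweep in:

```python
import re

def support_reply(message: str) -> str:
    """Toy customer-service bot: fills a fixed template with an order
    number extracted from the user's message. The response varies with
    the input, so it is not 'fully pre-written' by the operator."""
    match = re.search(r"order\s+#?(\d+)", message, re.IGNORECASE)
    order = match.group(1) if match else "unknown"
    # Empathetic template phrasing of the sort the bill's vague
    # "emotional interaction" language could arguably cover.
    return (f"I'm sorry you're having trouble with order #{order}. "
            "Let me look into that for you.")

print(support_reply("My order #4412 never arrived"))
# prints: I'm sorry you're having trouble with order #4412. Let me look into that for you.
```

Even this trivial template-filler generates its responses rather than replaying canned text, which is why the definition reaches far beyond the "AI companion" products the bill's sponsors describe.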
Faced with steep penalties and unclear boundaries, companies are unlikely to take chances on letting young people use their online tools. They’ll block minors entirely or strip their tools down to something less useful for everyone. The result isn’t a narrow safeguard—it’s a broad restriction on everyday online interactions.
Homework Question? Show ID and Call Your Parents
Start with a student getting help with homework. Under the GUARD Act, the service must verify the user’s age using more than a simple checkbox—it must rely on a “reasonable age verification” measure, which could require a government ID or a third-party age-checking system. If the system decides a user is under 18, the company must decide if its tool qualifies as an “AI Companion.” If there’s any risk it does, the safest move is to block access entirely.
The same logic applies to everyday customer service. A teenager trying to fix an order issue gets routed to a chatbot, and the company faces a choice: build a full age-verification system for a routine interaction, or restrict access to avoid liability. Many will choose the latter.
This isn’t a narrow restriction aimed at a few risky products. It’s a compliance regime that pushes companies to block or limit any product that generates text for minors, across the board.
ID Checks for Everyone
The GUARD Act doesn’t just affect minors. The bill takes a big step towards an internet that only works when users are willing to upload a valid ID or comply with other invasive age-verification schemes. Companies must verify the age of every user—not through a simple self-declaration, but through a “reasonable age verification” system tied to the individual.
In practice, that means collecting sensitive personal information: government IDs, financial data, or biometric identifiers. Companies can outsource verification, but they remain legally responsible. And the law requires ongoing verification, so this isn’t a one-time check. Worse, studies consistently show that millions of people have outdated information on their IDs, such as an old address, or have no government ID at all. If services require ID, these people will be shut out.
And for those who do have compliant ID, turning over this information repeatedly creates obvious risks. Databases of sensitive identity information become targets for breaches. Anonymous or pseudonymous use of online tools becomes harder or impossible.
To keep minors away from certain chatbots, the GUARD Act would require everyone to prove who they are just to use basic online tools. That’s a steep tradeoff. And it doesn’t actually address the specific harms the bill is supposed to solve.
Vague Definitions, Huge Penalties
The GUARD Act’s broad scope is enforced with steep penalties. Companies can face fines of up to $100,000 per violation, enforced by federal and state officials. At the same time, key terms like “AI companion” rely on vague concepts such as “emotional interaction.” That combination will lead to overblocking. Faced with legal uncertainty and serious liability, companies won’t parse small distinctions. They’ll restrict access, limit features, or block minors entirely.
That is the unfortunate result of the GUARD Act. The concerns animating it are worth addressing, but the bill’s broad terms will apply far beyond those troubling scenarios.
In the end, that means a more restricted and more surveilled internet. Teenagers would lose access to tools they rely on for school and everyday tasks. Everyone else faces new barriers, including ID checks. Smaller developers, who aren’t able to absorb compliance costs and legal risk, would be pushed out, leaving the largest companies even more dominant.
Young people — and all people — deserve protection from genuinely harmful products. But this bill doesn’t do that. It trades away privacy, access, and useful technology in exchange for a blunt system that misses the mark.
Congress could act soon. Tell them to reject the GUARD Act.
Tell Congress: Say No to Mandatory Online ID Checks
Congress Has Until April 30 to Take Action on 702. Tell Them Not to Drop the Ball
There are no excuses for any Member of Congress to support a clean reauthorization of Section 702. Anyone who votes to do so does not take your privacy seriously. Full stop.
Section 702 of the Foreign Intelligence Surveillance Act (FISA) is among the United States’ most infamous mass surveillance programs. Sold to the public as a foreign surveillance tool, it has become a backdoor for law enforcement to search through Americans’ private communications without ever obtaining a warrant. We need to act now to prevent Congress from reauthorizing 702 in a way that ignores the truth: This authority needs to change.
House Speaker Mike Johnson has confirmed that “the plan is to move a clean extension of FISA… for at least 18 months.” Our demands are common sense: no renewal without real reforms. A simple extension is a betrayal of every US resident who expects their government to respect their rights and the Constitution.
Your representative needs to hear from you right now, before the April 30 deadline. Contact them today.
Tell them: No vote on any bills that would reauthorize Section 702 without meaningful reform.