The Open Social Web Needs Section 230 to Survive
If you want to overthrow Big Tech, you’ll need Section 230. The paradigm shift being built with the Open Social Web can put communities back in control of social media infrastructure, and finally end our dependency on enshittified corporate giants. But while these incumbents can overcome multimillion-dollar lawsuits, the small hosts of the revolution could be picked off one by one without the protections offered by 230.
The internet as we know it is built on Section 230, a law from the 90s that generally says internet users are legally responsible for their own speech — not the services hosting their speech. The purpose of 230 was to enable diverse forums for speech online, which defined the early internet. These scattered online communities have since been largely captured by a handful of multi-billion dollar companies that found profit in controlling your voice online. While critics are rightly concerned about this new corporate influence and surveillance, some look to diminishing Section 230 as the nuclear option to regain control.
The thing is, that would be a huge gift to Big Tech, and detrimental to our best shot at actually undermining corporate and state control of speech online.
Dethroning Big Tech

We’re fed up with legacy social media trapping us in walled gardens, where the world's biggest companies like Google and Meta call the shots. Our communities, and our voices, are being held hostage as billionaires’ platforms surveil, betray, and censor us. We’re not alone in this frustration, and fortunately, people are collaborating globally to build another way forward: the Open Social Web.
This new infrastructure puts the public’s interest first by reclaiming the principles of interoperability and decentralization from the early internet. In short, it puts protocols over platforms and lets people own their connections with others. Whether you choose a Fediverse app like Mastodon or an ATmosphere app like Bluesky, your audience and community stay within reach. It’s a vision of social media akin to our lives offline: you decide who to be in touch with and how, and no central authority can threaten to snuff out those connections. It’s social media for humans, not advertisers and authoritarians.
Behind that vision is a beautiful mess of protocols bringing the open social media web to life. Each protocol is a unique language for applications, determining how and where messages are sent. While this means there is great variety to these projects, it also means everyone who spins up a server, develops an app, or otherwise hosts others’ speech has skin in the game when it comes to defending Section 230.
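To make the "unique language for applications" concrete, here is a minimal sketch of the kind of message an ActivityPub server (such as a Mastodon instance) delivers to another server: an ActivityStreams 2.0 "Create" activity wrapping a "Note". The domain and user names below are hypothetical placeholders, not any real instance.

```python
import json

# A minimal ActivityStreams 2.0 "Create" activity, the kind of message
# ActivityPub servers exchange. All URLs and actors here are hypothetical.
activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": "https://example.social/users/alice",
    "to": ["https://www.w3.org/ns/activitystreams#Public"],
    "object": {
        "type": "Note",
        "attributedTo": "https://example.social/users/alice",
        "content": "Hello, Fediverse!",
    },
}

# Serialize for delivery to another server's inbox, then parse it back,
# as a receiving server would.
wire_format = json.dumps(activity)
parsed = json.loads(wire_format)
print(parsed["object"]["content"])
```

Because any server that speaks the protocol can parse and relay a message like this, no single host is a gatekeeper — which is also why every one of those hosts is an intermediary with a stake in Section 230.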
What exactly is Section 230?

Section 230 protects freedom of expression online by protecting the US intermediaries that make the internet work. Passed in 1996 to preserve the internet’s burgeoning online communities, 230 enshrined important protections for free expression and for the ability to block or filter speech you don’t want on your site. One portion is credited as the “26 words that created the internet”:
“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
In other words, this bipartisan law recognizes that speech online relies on intermediaries — services that deliver messages between users — and holding them potentially liable for any message they deliver would only stifle that speech. Intuitively, when harmful speech occurs, the speaker should be the one held accountable. The effect is that most civil suits against users and services based on others' speech can quickly be dismissed, avoiding the most expensive parts of civil litigation.
Section 230 was never a license to host anything online, however. It does not protect companies that create illegal or harmful content. Nor does Section 230 protect companies from intellectual property claims.
What Section 230 has enabled, however, is the freedom and flexibility for online communities to self-organize. Without the specter of one bad actor exposing the host(s) to serious legal threats, intermediaries can moderate how they see fit or even defer to volunteers within these communities.
Why the Open Social Web Needs Section 230

The superpower of decentralized systems like the Fediverse is the ability for thousands of small hosts to each shoulder some of the burdens of hosting. No single site can assert itself as a necessary intermediary for everyone; instead, all must collaborate to ensure messages reach the intended audience. The result is something superior to any one design or mandate. It is an ecosystem that is greater than the sum of its parts, resilient to disruptions, and free to experiment with different approaches to community governance.
The open social web’s kryptonite, though, is the liability participants can face as intermediaries. The greater the potential liability, the more interference from powerful interests in the form of legal threats, the higher the monetary costs, and the less space for nuance in moderation. And in practice, participants may simply stop hosting to avoid those risks. The end result is that only the biggest and most resourced options can survive.
This isn’t just about the hosts in the Open Social Web, like Mastodon instances or Bluesky PDSes. In the U.S., Section 230’s protections extend to internet users when they distribute another person’s speech. For example, Section 230 protects a user who forwards an email with a defamatory statement. On the open social web, that means when you pass along a message to others through sharing, boosting, and quoting, you’re not liable for the other user’s speech. The alternative would be a web where one misclick could open you up to a defamation lawsuit.
Section 230 also applies to the infrastructure stack: Internet service providers, content delivery networks, and domain and hosting providers. Protections even extend to the new experimental infrastructure of decentralized mesh networks.
Beyond the existential risks to the feasibility of indie decentralized projects in the United States, weakening 230 protections would also make services worse. Being able to customize your social media experience from highly curated to totally laissez-faire in the open social web is only possible when the law allows space for private experiments in moderation approaches. The algorithmically driven firehose forced on users by antiquated social media giants is driven by the financial interests of advertisers, and would only be more tightly controlled in a post-230 world.
Defending 230

Laws aimed at changing 230 protections put decentralized projects like the open social web in a uniquely precarious position. That is why we urge lawmakers to take careful consideration of these impacts. It is also why the proponents and builders of a better web must be vigilant defenders of the legal tools that make their work possible.
The open social web embodies what we are protecting with Section 230. It’s our best chance at building a truly democratic public interest internet, where communities are in control.
Tell Congress: Oppose the GUARD Act
Congress is moving quickly on the GUARD Act, with a key vote expected Thursday. This bill would force AI companies to collect sensitive ID or biometric information to verify every user’s age, and would ban teens from using many everyday digital tools to speak, learn, or ask questions online.
The GUARD Act Isn’t Targeting Dangerous AI—It’s Blocking Everyday Internet Use
Lawmakers in Congress are moving quickly on the GUARD Act, an age-gating bill restricting minors’ access to a wide range of online tools, with a key vote expected this week. The proposal is framed as a response to alarming cases involving “AI companions” and vulnerable young users. But the text of the bill goes much further, and could require age gates even for search engines that use AI.
Tell Congress: Oppose the GUARD Act
If enacted, the GUARD Act won’t just target a narrow category of risky chatbots. It would require companies to verify the age of every user — then block anyone under 18 from interacting with a huge range of online systems. It would block minors from everyday online tools, undermine parental guidance, and force adults to sacrifice their privacy. In the process, it would require services to implement speech-restricting and privacy-invasive age-verification systems for everyone—not just kids.
Under the GUARD Act’s broad definitions, a high school student could be barred from asking a homework-help tool about algebra problems. A teenager trying to return a product could be kicked out of a standard customer-service chat.
The concerns behind this bill are serious. There have been troubling reports of AI systems engaging in harmful interactions with young users, including cases involving self-harm. Those risks deserve attention. But they call for targeted solutions, like better safeguards and enforcement against bad actors, not sweeping restrictions. The bill’s sponsors say they’re targeting worst-case scenarios — but the bill regulates everyday use.
The GUARD Act’s Broad Definitions Reach Everyday Tools

The problem starts with how the bill defines an “AI chatbot.” It covers any system that generates responses that aren’t fully pre-written by the developer or operator. Such a broad definition sweeps in the basic functionality of all AI-powered tools.
Then there’s the definition of an “AI companion,” which minors are banned from using entirely. An AI companion is any chatbot that produces human-like responses and is designed to “encourage or facilitate” interpersonal or emotional interaction. That may sound aimed at simulated “friends” or therapy chatbots. But in practice, it’s much fuzzier.
Modern chatbots are designed to be conversational and helpful. A homework helper might say “good question” before walking a student through a problem. A customer service chatbot may respond empathetically to a complaint (“I’m sorry you’re having this problem.”) A general-purpose assistant might ask follow-up questions. All of these could be seen as facilitating “interpersonal” interaction — and triggering the GUARD Act.
Faced with steep penalties and unclear boundaries, companies are unlikely to take chances on letting young people use their online tools. They’ll block minors entirely or strip their tools down to something less useful for everyone. The result isn’t a narrow safeguard—it’s a broad restriction on everyday online interactions.
Homework Question? Show ID and Call Your Parents

Start with a student getting help with homework. Under the GUARD Act, the service must verify the user’s age using more than a simple checkbox: it must rely on a “reasonable age verification” measure, which could require a government ID or a third-party age-checking system. If the system determines a user is under 18, the company must then decide whether its tool qualifies as an “AI companion.” If there’s any risk it does, the safest move is to block access entirely.
The same logic applies to everyday customer service. A teenager trying to fix an order issue gets routed to a chatbot, and the company faces a choice: build a full age-verification system for a routine interaction, or restrict access to avoid liability. Many will choose the latter.
This isn’t a narrow restriction aimed at a few risky products. It’s a compliance regime that pushes companies to block or limit any product that generates text for minors, across the board.
ID Checks for Everyone

The GUARD Act doesn’t just affect minors. The bill takes a big step towards an internet that only works when users are willing to upload a valid ID or comply with other invasive age-verification schemes. Companies must verify the age of every user—not through a simple self-declaration, but through a “reasonable age verification” system tied to the individual.
In practice, that means collecting sensitive personal information: government IDs, financial data, or biometric identifiers. Companies can outsource verification, but they remain legally responsible. And the law requires ongoing verification, so this isn’t a one-time check. Worse, studies consistently show that millions of people have outdated information on their IDs, such as an old address, or have no government ID at all. If services require ID, those people will simply be shut out.
And for those who do have compliant ID, turning over this information repeatedly creates obvious risks. Databases of sensitive identity information become targets for breaches. Anonymous or pseudonymous use of online tools becomes harder or impossible.
To keep minors away from certain chatbots, the GUARD Act would require everyone to prove who they are just to use basic online tools. That’s a steep tradeoff. And it doesn’t actually address the specific harms the bill is supposed to solve.
Vague Definitions, Huge Penalties

The GUARD Act’s broad scope is enforced with steep penalties. Companies can face fines of up to $100,000 per violation, enforced by federal and state officials. At the same time, key terms like “AI companion” rely on vague concepts such as “emotional interaction.” That combination will lead to overblocking. Faced with legal uncertainty and serious liability, companies won’t parse small distinctions. They’ll restrict access, limit features, or block minors entirely.
That is the unfortunate result of the GUARD Act, even though the concerns animating it are worth addressing. The bill’s broad terms will apply far beyond those troubling scenarios.
In the end, that means a more restricted and more surveilled internet. Teenagers would lose access to tools they rely on for school and everyday tasks. Everyone else faces new barriers, including ID checks. Smaller developers, who aren’t able to absorb compliance costs and legal risk, would be pushed out, leaving the largest companies even more dominant.
Young people — and all people — deserve protection from genuinely harmful products. But this bill doesn’t do that. It trades away privacy, access, and useful technology in exchange for a blunt system that misses the mark.
Congress could act soon. Tell them to reject the GUARD Act.
Tell Congress: Say no to mandatory online ID checks
Congress Has Until April 30 to Take Action on 702. Tell Them Not to Drop The Ball
There are no excuses for any Member of Congress to support a clean reauthorization of Section 702. Anyone who votes to do so does not take your privacy seriously. Full stop.
Section 702 of the Foreign Intelligence Surveillance Act (FISA) is among the United States’ most infamous mass surveillance programs. Sold to the public as a foreign surveillance tool, it has become a backdoor for law enforcement to search through Americans’ private communications without ever obtaining a warrant. We need to act now to prevent Congress from reauthorizing 702 in a way that ignores the truth: This authority needs to change.
House Speaker Mike Johnson has confirmed that “the plan is to move a clean extension of FISA… for at least 18 months.” Our demands are common sense: no renewal without real reforms. A simple extension is a betrayal of every US resident who expects their government to respect their rights and the Constitution.
Your representative needs to hear from you right now, before the April 30 deadline. Contact them today.
Tell them: No vote on any bills that would reauthorize Section 702 without meaningful reform.
Congress Must Reject New Insufficient 702 Reauthorization Bill
Speaker Johnson has introduced a new fig leaf over the American surveillance state: the Foreign Intelligence Accountability Act. Introduced with only days to go before Section 702 of the Foreign Intelligence Surveillance Act (FISA) expires and the U.S. government loses one of its most invasive surveillance programs, the bill does nothing to make the substantial changes privacy advocates have been asking for. Most notably, it fails to create a real warrant requirement for the FBI to snoop through the private conversations of people on U.S. soil.
Section 702 needs to be reauthorized by Congress every few years. These reauthorizations give us a chance to tinker with the language of the law and introduce some much-needed reforms. This attempt at reauthorization has been particularly fraught, but there is still time for Congress to include real protection for Americans’ civil liberties and rights. We need to make sure that when an FBI agent wants to look through Americans’ conversations scooped up as part of a national security intelligence program, they need a warrant signed by a judge just as if they were trying to search your email account or your house.
This new bill mandates that a civil liberties protection officer at the Office of the Director of National Intelligence review all queries of U.S. persons made by the FBI under this program to make sure no laws have been broken. It’s bad enough to let the intelligence community police itself; worse, the assessment of illegality would come only after a U.S. person has already been spied on. This is hardly the reform we need, and it will likely just lead to continued abuse with no real accountability or consequences.
The bill “prohibits targeting United States persons,” but so does current law. This “change” does absolutely nothing to address what’s really happening—which is that surveillance of people in the United States is usually justified as “incidental” because Americans aren’t the “target” of the surveillance. The bill does not create a warrant requirement, it does not create any new transparency requirements, and it does not protect Americans’ privacy.
We urge Congress, and we urge you to write to your Congresspeople, to tell them this: Reject the surveillance state’s latest smokescreen known as the Foreign Intelligence Accountability Act and keep pushing for real reforms.