Labour Force Survey (Detailed Tabulation): October–December 2024 Quarterly Average and 2024 Annual Average
Symposium "Considering the Promotion of Safe and Secure Use of the Metaverse" to Be Held
Telecommunications Dispute Settlement Commission (248th Meeting): Announcement
FY2024 Entrepreneur Koshien and Entrepreneur Expo to Be Held
Information and Communications Council, Postal Policy Division, Postal Rates Policy Committee (8th Meeting): Meeting Announcement
128th Meeting of the Industrial Statistics Subcommittee
[Monthly Media Review: Broadcasting] The Fruits of Long-Term Embedded Reporting by NHK's Local Stations, by Mai Morokawa
First Trump DOJ Assembled “Tiger Team” To Rewrite Key Law Protecting Online Speech
As President Donald Trump issued an Executive Order in 2020 to retaliate against online services that fact-checked him, a team within the Department of Justice (DOJ) was finalizing a proposal to substantially weaken a key law that protects internet users’ speech.
Documents released to EFF as part of a Freedom of Information Act (FOIA) suit reveal that the DOJ officials—a self-described “Tiger Team”—were caught off guard by Trump’s retaliatory effort, which was aimed at the same online social services they wanted to regulate further by amending 47 U.S.C. § 230 (Section 230).
Section 230 protects users’ online speech by protecting the online intermediaries we all rely on to communicate on blogs, social media platforms, and educational and cultural platforms like Wikipedia and the Internet Archive. Section 230 embodies the principle that we should all be responsible for our own actions and statements online, but generally not those of others. The law prevents most civil suits against users or services that are based on what others say.
The correspondence among DOJ officials shows that the group delayed unveiling the agency’s official plans to amend Section 230 in light of Trump’s executive order, which was challenged on First Amendment grounds and later rescinded by President Joe Biden. EFF represented the groups who challenged Trump’s Executive Order and filed two FOIA suits for records about the administration’s implementation of the order.
In the most recent FOIA case, the DOJ has been slowly releasing records detailing its work to propose amendments to Section 230, work that predated Trump’s Executive Order. The DOJ released the text of its proposed amendments to Section 230 in September 2020, and the proposal would have substantially narrowed the law’s protections.
For example, the DOJ’s proposal would have allowed federal civil suits and state and federal criminal prosecutions against online services if they learned that users’ content broke the law. It also would have established notice-and-takedown liability for user-generated content that was deemed to be illegal. Together, these provisions would likely result in online services screening and removing a host of legal content, based on a fear that any questionable material might trigger liability later.
The DOJ’s proposal placed a distinct emphasis on imposing liability on services that hosted illegal content posted by their users. That focus was likely a result of the team the DOJ assembled to work on the proposal, which included officials from the agency’s cybercrime division and the FBI.
The documents also show that DOJ officials met with attorneys who brought lawsuits against online services to get their perspective on Section 230. This is not surprising, as the DOJ had been meeting with multiple groups throughout 2020 while it prepared a report about Section 230.
EFF’s FOIA suit is ongoing, as the DOJ has said that it still has thousands of potential pages to review and possibly release. Although these documents reflect DOJ’s activity from Trump’s first term, they are increasingly relevant as the administration appoints officials who have previously threatened online intermediaries for exercising their own First Amendment rights. EFF will continue to publish all documents released in this FOIA suit and push back on attempts to undermine internet users’ rights to speak online.
Google is on the Wrong Side of History
Google continues to show us why it chose to abandon its old motto of “Don’t Be Evil,” as it becomes more and more enmeshed with the military-industrial complex. Most recently, Google removed four key commitments from its AI principles. Specifically, the principles previously stated that the company would not pursue AI applications involving (1) weapons, (2) surveillance, (3) technologies that “cause or are likely to cause overall harm,” and (4) technologies whose purpose contravenes widely accepted principles of international law and human rights.
Those principles are gone now.
In their place, the company has written that “democracies” should lead in AI development and that companies should work together with governments “to create AI that protects people, promotes global growth, and supports national security.” This could mean that the provider of the world’s largest search engine–the tool most people use to uncover the best apple pie recipes and to find out what time their favorite coffee shop closes–could be in the business of creating AI-based weapons systems and leveraging its considerable computing power for surveillance.
This troubling decision to potentially profit from high-tech warfare, which could have serious consequences for real lives and real people, comes after criticism from EFF, human rights activists, and other international groups. Despite its pledges and vocal commitment to human rights, Google has faced criticism for its involvement in Project Nimbus, which provides advanced cloud and AI capabilities to the Israeli government, tools that an increasing number of credible reports suggest are being used to target civilians under pervasive surveillance in the Occupied Palestinian Territories. EFF said in 2024, “When a company makes a promise, the public should be able to rely on it.” Rather than fully living up to its previous human rights commitments, it seems Google has shifted its priorities.
Google, a company valued at $2.343 trillion with global infrastructure and a massive legal department, appears to be leaning into the current anti-humanitarian moment. The fifth-largest company in the world seems to have chosen to make the few extra bucks (relative to its earnings and net worth) that will come from mass surveillance tools and AI-enhanced weapons systems.
And of course we can tell why. With government money flying out the door toward defense contractors, surveillance technology companies, and other national security and policing-related vendors, the legacy companies that swallow up all of that data don’t want to miss out on the feeding frenzy. With $1 billion contracts on the table even for smaller companies promising AI-enhanced tech, it looks like Google is willing to throw in its lot with the herd.
In addition to Google and Amazon’s involvement in Project Nimbus, which provides both cloud storage for the large amounts of data collected through mass surveillance and AI-driven analysis of that data, there are many other scenarios and products on the market that raise concerns. AI could be used to power autonomous weapons systems that decide when and whether to pull the trigger or drop a bomb. Targeting software can mean physically aiming weapons at people identified by geolocation or by machine learning techniques like face recognition and other biometrics. AI could also be used to sift through massive amounts of intelligence, including intercepted communications and publicly available information from social media and the wider internet, in order to assemble lists of people to be targeted by militaries.
Whether autonomous AI-based weapons systems and surveillance are controlled by totalitarian states or by states that meet Google’s definition of “democracy” is of little comfort to the people who could be targeted, spied on, or killed in error by AI technology that is prone to mistakes. AI cannot be held accountable for its actions. And if we, the public, manage to navigate the corporate, government, and national security secrecy to learn of these flaws, companies will fall back on a playbook we’ve seen before: tinkering with the algorithms and declaring the problem solved.
We urge Google, and all of the companies that will follow in its wake, to reverse course. In the meantime, users will have to decide who deserves their business. As the company’s most successful product, its search engine, is faltering, that decision gets easier and easier.