President Trump’s War on “Woke AI” Is a Civil Liberties Nightmare

2 weeks 1 day ago

The White House’s recently unveiled “AI Action Plan” wages war on so-called “woke AI”—including large language models (LLMs) that provide information inconsistent with the administration’s views on climate change, gender, and other issues. It also targets measures designed to mitigate the generation of racially and gender-biased content—and even hate speech. The reproduction of such bias is a pernicious problem that AI developers have struggled to solve for over a decade.

A new executive order called “Preventing Woke AI in the Federal Government,” released alongside the AI Action Plan, seeks to strong-arm AI companies into modifying their models to conform with the Trump Administration’s ideological agenda.

The executive order requires AI companies that receive federal contracts to prove that their LLMs are free from purported “ideological biases” like “diversity, equity, and inclusion.” This heavy-handed censorship will not make models more accurate or “trustworthy,” as the Trump Administration claims; it is a blatant attempt to censor the development of LLMs and to restrict them as a tool of expression and information access. While the First Amendment permits the government to choose to purchase only services that reflect government viewpoints, the government may not use that power to influence what services and information are available to the public. Lucrative government contracts can push commercial companies to implement features (or biases) that they wouldn’t otherwise, and those changes often trickle down to everyday users. Doing so would impact the 60 percent of Americans who get information from LLMs, and it would force developers to roll back efforts to reduce biases—making the models much less accurate, and far more likely to cause harm, especially in the hands of the government.

Less Accuracy, More Bias and Discrimination

It’s no secret that AI models—including generative AI—tend to discriminate against racial and gender minorities. AI models use machine learning to identify and reproduce patterns in the data they are “trained” on. If the training data reflects biases against racial, ethnic, and gender minorities—which it often does—then the AI model will “learn” to discriminate against those groups. In other words, garbage in, garbage out. Models also often reflect the biases of the people who train, test, and evaluate them.
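The “garbage in, garbage out” dynamic can be illustrated with a minimal sketch. This toy example (entirely synthetic data; the group labels and outcomes are hypothetical, not drawn from any real dataset) trains a trivial model that simply predicts the most common historical outcome for each group—and, as a result, faithfully reproduces whatever bias the historical decisions contained:

```python
from collections import Counter, defaultdict

# Synthetic historical hiring records as (group, hired) pairs.
# Past (biased) decisions favored group "A" over group "B".
training_data = (
    [("A", 1)] * 80 + [("A", 0)] * 20 +
    [("B", 1)] * 20 + [("B", 0)] * 80
)

def train_majority_model(records):
    """Learn, per group, the most common historical outcome."""
    outcomes = defaultdict(Counter)
    for group, hired in records:
        outcomes[group][hired] += 1
    return {g: counts.most_common(1)[0][0] for g, counts in outcomes.items()}

model = train_majority_model(training_data)
print(model)  # -> {'A': 1, 'B': 0}
```

The model has “learned” nothing about qualifications—only the bias in its training data. Real LLMs are vastly more complex, but the underlying failure mode is the same: patterns in skewed data become patterns in the model’s output.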

This is true across different types of AI. For example, “predictive policing” tools trained on arrest data that reflects overpolicing of Black neighborhoods frequently recommend heightened levels of policing in those neighborhoods, often based on inaccurate predictions that crime will occur there. Generative AI models are also implicated. LLMs already recommend more criminal convictions, harsher sentences, and less prestigious jobs for people of color. Even though people of color account for less than half of the U.S. prison population, 80 percent of Stable Diffusion’s AI-generated images of inmates have darker skin. Over 90 percent of AI-generated images of judges were men; in real life, 34 percent of judges are women.

These models aren’t just biased—they’re fundamentally incorrect. Race and gender aren’t objective criteria for deciding who gets hired or convicted of a crime. Those discriminatory decisions reflect trends in the training data that could be caused by bias or chance—not some “objective” reality. Setting fairness aside, biased models are just worse models: they make more mistakes, more often. Efforts to reduce bias-induced errors will ultimately make models more accurate, not less.

Biased LLMs Cause Serious Harm—Especially in the Hands of the Government

But inaccuracy is far from the only problem. When government agencies start using biased AI to make decisions, real people suffer. Government officials routinely make decisions that impact people’s personal freedom and access to financial resources, healthcare, housing, and more. The White House’s AI Action Plan calls for a massive increase in agencies’ use of LLMs and other AI—while all but requiring the use of biased models that automate systemic, historical injustice. Using AI simply to entrench the way things have always been done squanders the promise of this new technology.

We need strong safeguards to prevent government agencies from procuring biased, harmful AI tools. In a series of executive orders, as well as his AI Action Plan, the Trump Administration has rolled back the already-feeble Biden-era AI safeguards. This makes AI-enabled civil rights abuses far more likely, putting everyone’s rights at risk. 

And the Administration could easily exploit the new rules to pressure companies to make publicly available models worse, too. Corporations like healthcare companies and landlords increasingly use AI to make high-impact decisions about people, so more biased commercial models would also cause harm. 

We have argued against using machine learning to make predictive policing decisions or other punitive judgments for just these reasons, and will continue to protect your right not to be subject to biased government determinations influenced by machine learning.

Tori Noble

[Symposium Series] An Independent Broadcasting Administrative Commission, Soon: “A Golden Opportunity for NHK’s Revival” = Shinji Kono

2 weeks 2 days ago
The second symposium in the series, “How Can NHK Become Independent from the Government? Considering an Independent Administrative Commission System for Broadcasting,” was held on June 29 at Rikkyo University. At the symposium, former NHK director Eriko Ikeda looked back on the January 2001 incident in which NHK’s ETV2001 program “Wartime Sexual Violence on Trial” was altered, recalling the blatant intervention by Shinzo Abe, then Deputy Chief Cabinet Secretary (and later Prime Minister). She noted: “The Women’s International War Crimes Tribunal (held in December 2000), on which the program was based, was covered by 95 overseas media outlets with 200 journalists, and 48 domestic media..
JCJ

[B] Japan 80 Years After the War: A “New Prewar” or “Already Wartime”? (Part 1) — “War Stood at the End of the Corridor”

2 weeks 2 days ago
As the 80th anniversary of the war’s end approaches, anxiety about a “new prewar” era has been growing. A fresh trigger for that anxiety was the Kishida administration’s massive military buildup, pushed through on the back of the war in Ukraine. There is also a view that Japan has “already entered wartime.” Setting aside which view is correct, there is no doubt that the “peace state” now stands at a historic crossroads. Let us take stock of where we currently stand. (Hiroshi Nagai)
日刊ベリタ

Weekly Report: Advisory on Multiple OS Command Injection Vulnerabilities in Trend Micro Enterprise Endpoint Security Products

2 weeks 2 days ago
Enterprise endpoint security products provided by Trend Micro Incorporated contain multiple OS command injection vulnerabilities. The company reports that it has already observed attacks exploiting some of these vulnerabilities. As a temporary mitigation, applying the Fixtool to affected products is recommended. Trend Micro plans to release a Critical Patch as a permanent fix for these vulnerabilities in mid-August 2025. For details, refer to the information provided by the developer.

Announcement: Takae Sit-In 18th Anniversary Report Meeting

2 weeks 3 days ago
Takae Sit-In 18th Anniversary Report Meeting. Lecture: “United in Struggle: Solidarity Between Guåhan (Guam) and Okinawa Against Militarism and Imperialism,” by Monaeka Flores (Prutehi Guåhan). MC: KEN子. Date and time: Sunday, September 7, 2025, 1:00–3:00 p.m. Venue: Higashi Village Farmers’ Training Facility (across from the village athletic ground, 2nd floor of the community hall), 550-4 Taira, Higashi-son, Kunigami-gun, Okinawa 905-1204. Admission free. Organizer: the “No Helipad” Residents’ Association. Blog: http://takae.ti-da.net Contact: tel:090-9789-6396 fax:0980-51-2688 mail:info@nohelipadtakae.org
高江イイトコ