Weasel Words: OpenAI’s Pentagon Deal Won’t Stop AI‑Powered Surveillance


OpenAI, the maker of ChatGPT, is rightfully facing widespread criticism for its decision to fill the gap the U.S. Department of Defense (DoD) created when rival Anthropic refused to drop its restrictions against using its AI for surveillance and autonomous weapons systems. After protests from both users and employees who did not sign up to support government mass surveillance (early reports show that ChatGPT uninstalls rose nearly 300% after the company announced the deal), Sam Altman, CEO of OpenAI, conceded that the initial agreement was “opportunistic and sloppy.” He then re-published an internal memo on social media stating that additions to the agreement made clear that “Consistent with applicable laws, including the Fourth Amendment to the United States Constitution, National Security Act of 1947, [and] FISA Act of 1978, the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.”

Trouble is, the U.S. government doesn’t believe “consistent with applicable laws” means “no domestic surveillance.” Instead, for the most part, the government has embraced a lax interpretation of “applicable law” that has blessed mass surveillance and large-scale violations of our civil liberties, and then fought tooth and nail to prevent courts from weighing in. 


“Intentionally” is also doing an awful lot of work in that sentence. For years the government has insisted that the mass surveillance of U.S. persons only happens incidentally (read: not intentionally) because their communications with people both inside the United States and overseas are swept up in surveillance programs supposedly designed to only collect communications outside the United States. 

The company’s amendment to the contract continues in a similar vein, “For the avoidance of doubt, the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information.” Here, “deliberate” is the red flag given how often intelligence and law enforcement agencies rely on incidental or commercially purchased data to sidestep stronger privacy protections.

Here’s another one: “The AI System shall not be used for unconstrained monitoring of U.S. persons’ private information as consistent with these authorities. The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.” What, one wonders, does “unconstrained” mean, precisely—and according to whom? 

Lawyers sometimes call these “weasel words” because they create ambiguity that protects one side or another from real accountability for contract violations. As with the Anthropic negotiations, where the Pentagon reportedly agreed to adhere to Anthropic’s red lines only “as appropriate,” the government is likely attempting to publicly commit to limits in principle, but retain broad flexibility in practice.

OpenAI also notes that the Pentagon promised the NSA would not be allowed to use OpenAI’s tools absent a new agreement, and that its deployment architecture will help it verify that no red lines are crossed. But secret agreements and technical assurances have never been enough to rein in surveillance agencies, and they are no substitute for strong, enforceable legal limits and transparency.

OpenAI executives may indeed be trying, as claimed, to use the company’s contractual relationship with the Pentagon to help ensure that the government uses AI tools only in ways consistent with democratic processes. But based on what we know so far, that hope seems very naïve.

Moreover, that naïveté is dangerous. In a time when governments are willing to embrace extreme and unfounded interpretations of “applicable laws,” companies need to put some actual muscle behind their commitments. After all, many of the world’s most notorious human rights atrocities were “legal” under the laws in force at the time. OpenAI promises the public that it will “avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power,” but we know that enabling mass surveillance does both.

OpenAI isn’t the only consumer-facing company that, on the one hand, seeks to reassure the public that it isn’t participating in actions that violate human rights while, on the other, seeking to cash in on government mass surveillance efforts. Despite this marketing double-speak, it is very clear that companies simply cannot do both. It’s also clear that companies shouldn’t be given that much power over the limits of our privacy to begin with. The public should not have to rely on a small group of people, whether CEOs or Pentagon officials, to protect our civil liberties.

Corynne McSherry

[Minamitorishima 3] On July 31, 2024, the author published an article on Daily JCJ reporting that the Ground Self-Defense Force would build a missile firing range on the island, turning talk of a final disposal site there into a phantom. That article is reposted here. I hope the phantom turns into reality. = Masahiro Hashizume

[Focus] GSDF to Build Missile Firing Range on Minamitorishima; Talk of a Final Disposal Site for Nuclear Waste Becomes a Phantom = Masahiro Hashizume (http://jcj-daily.seesaa.net/article/504171775.html) The Ministry of Defense will build a firing range for the Ground Self-Defense Force’s Type 12 surface-to-ship missile on Minamitorishima (part of Ogasawara Village), a small island at Japan’s easternmost point, 1,900 km from Tokyo. The missile is fired from land at vessels at sea, and this will be the first firing range in Japan for a missile with a range exceeding 100 km. The plan has already been explained to Ogasawara Village, and from 2026 onward, ..
JCJ

[B] “Trump Trapped by Netanyahu” [Western Sahara Latest News]  Itsuko Hirata

On February 28, 2026, in the middle of the month of Ramadan, Netanyahu and Trump bombed Tehran and other Iranian cities, killing Iran’s supreme religious leader, Ayatollah Khamenei. The Netanyahu–Trump coalition forces then deliberately targeted civilian areas with more than 3,000 further airstrikes, killing 1,230 Iranians. At a girls’ school in Minab in southern Iran, 165 schoolgirls were killed, according to Iranian state media. Declaring “Do not let up. Now is the time to wipe out the Iranians in one stroke,” Netanyahu launched a second wave of major attacks on March 4. It is the same as the Gaza war: the teaching of total annihilation in Jewish scripture (1 Samuel 15:3)!
日刊ベリタ

[B] [Analysis Report] How the Attack on Iran Instead Paves the Way for a “Hegemonic Transition”: The Limits of the Western System Brought On by Overconfidence in Technology, and Japan’s Structural Blind Spot  李憲彦

Military action in the Middle East by the U.S. military and Israel is ostensibly about maintaining order and blocking nuclear development, but behind it lies a strategic intent: an overwhelming demonstration of AI military technology aimed at China, Russia, and North Korea. Precision strikes driven by AI and inexpensive drone networks costing only millions of dollars function as “military action as investment,” showcasing the technological superiority of the Western nations.
日刊ベリタ