EFF Urges Pennsylvania Supreme Court to Find Keyword Search Warrant Unconstitutional

3 months 2 weeks ago
These Dragnet Searches Violate the Privacy of Millions of Americans

SAN FRANCISCO—Keyword warrants that let police indiscriminately sift through search engine databases are unconstitutional dragnets that target free speech, lack particularity and probable cause, and violate the privacy of countless innocent people, the Electronic Frontier Foundation (EFF) and other organizations argued in a brief filed today with the Supreme Court of Pennsylvania.

Everyone deserves to search online without police looking over their shoulder, yet millions of innocent Americans’ privacy rights are at risk in Commonwealth v. Kurtz—only the second case of its kind to reach a state’s highest court. The brief filed by EFF, the National Association of Criminal Defense Lawyers (NACDL), and the Pennsylvania Association of Criminal Defense Lawyers (PACDL) challenges the constitutionality of a keyword search warrant issued by the police to Google. The case involves a massive invasion of Google users’ privacy, and unless the lower court’s ruling is overturned, it could be applied to anyone using any search engine.

“Keyword search warrants are totally incompatible with constitutional protections for privacy and freedom of speech and expression,” said EFF Surveillance Litigation Director Andrew Crocker. “All keyword warrants—which target our speech when we seek information on a search engine—have the potential to implicate innocent people who just happen to be searching for something an officer believes is somehow linked to a crime. Dragnet warrants that target speech simply have no place in a democracy.” 

Users have come to rely on search engines, routinely seeking answers to sensitive or unflattering questions that they might never feel comfortable asking a human confidant. However, Google keeps detailed information on every search query it receives, resulting in a vast record of users’ most private and personal thoughts, opinions, and associations, which police seek to access merely by demanding the identities of all users who searched for specific keywords.

Because this data is so broad and detailed, keyword search warrants are especially concerning: Unlike typical warrants for electronic information, these do not target specific people or accounts. Instead, they require a provider to search its entire reserve of user data to identify any and all users or devices who searched for words or phrases specified by police. As in this case, the police generally have no identified suspects when they seek such a warrant; instead, the sole basis is the officer’s hunch that the perpetrator might have searched for something related to the crime.  

This violates the Pennsylvania Constitution’s Article I, Section 8 and the Fourth Amendment to the U.S. Constitution, EFF’s brief argued, both of which were inspired by 18th-century writs of assistance—general warrants that let police conduct exploratory rummaging through a person’s belongings. These keyword search warrants also are especially harmful because they target protected speech and the related right to receive information, the brief argued. 

"Keyword search warrants are digital dragnets giving the government permission to rummage through our most private information, and the Pennsylvania Supreme Court should find them unconstitutional,” said NACDL Fourth Amendment Center Litigation Director Michael Price. 

“Search engines are an indispensable tool for finding information on the Internet, and the ability to use them—and use them anonymously—is critical to a free society,” said Crocker. “If providers can be forced to disclose users’ search queries in response to a dragnet warrant, it will chill users from seeking out information about anything that police officers might conceivably choose as a searchable keyword.” 

For the brief: https://www.eff.org/document/commonwealth-v-kurtz-amicus-brief-pennsylvania-supreme-court-1-5-2024

For a similar case in Colorado: https://www.eff.org/deeplinks/2023/10/colorado-supreme-court-upholds-keyword-search-warrant 

Contact: Andrew Crocker, Surveillance Litigation Director, andrew@eff.org
Josh Richman

AI Watermarking Won't Curb Disinformation

3 months 2 weeks ago

Generative AI allows people to produce piles upon piles of images and words very quickly. It would be nice if there were some way to reliably distinguish AI-generated content from human-generated content. It would help people avoid endlessly arguing with bots online, or believing what a fake image purports to show. One common proposal is that big companies should incorporate watermarks into the outputs of their AIs. For instance, this could involve taking an image and subtly changing many pixels in a way that’s undetectable to the eye but detectable to a computer program. Or it could involve swapping words for synonyms in a predictable way so that the meaning is unchanged, but a program could readily determine the text was generated by an AI.

Unfortunately, watermarking schemes are unlikely to work. So far most have proven easy to remove, and it’s likely that future schemes will have similar problems.

One kind of watermark is already common for digital images. Stock image sites often overlay text on an image that renders it mostly useless for publication. This kind of watermark is visible and is slightly challenging to remove since it requires some photo editing skills.

[Image: a sample stock photo of Anemone occidentalis with a visible text watermark overlaid]

Images can also have metadata attached by a camera or image processing program, including information like the date, time, and location a photograph was taken, the camera settings, or the creator of an image. This metadata is unobtrusive but can be readily viewed with common programs. It’s also easily removed from a file. For instance, social media sites often automatically remove metadata when people upload images, both to prevent people from accidentally revealing their location and simply to save storage space.
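To see how little effort this takes, here is a short Python sketch using the third-party Pillow imaging library; the filenames are placeholders, and this is just one illustrative way to view and strip metadata:

```python
# Read a photo's EXIF metadata, then save a copy with pixels only.
# Uses the Pillow library; "photo.jpg" is a placeholder filename.
from PIL import Image

img = Image.open("photo.jpg")
print(dict(img.getexif()))             # date, camera settings, GPS, etc.

clean = Image.new(img.mode, img.size)  # blank image, same size and mode
clean.putdata(list(img.getdata()))     # copy pixel data but no metadata
clean.save("photo_clean.jpg")
```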

A useful watermark for AI images would need two properties: 

  • It would need to continue to be detectable after an image is cropped, rotated, or edited in various ways (robustness). 
  • It couldn’t be conspicuous like the watermark on stock image samples, because the resulting images wouldn’t be of much use to anybody.

One simple technique is to manipulate the least perceptible bits of an image. For instance, to a human viewer these two squares are the same shade:

[Image: two green squares that appear to be an identical shade]

But to a computer it’s obvious that they differ by a single bit: #93c47d vs. #93c57d. Each pixel of an image is represented by a certain number of bits, and some of them make more of a perceptual difference than others. By manipulating those least-important bits, a watermarking program can create a pattern that viewers won’t see, but a watermark-detecting program will. If that pattern repeats across the whole image, the watermark is even robust to cropping. However, this method has one clear flaw: rotating or resizing the image is likely to accidentally destroy the watermark.
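Here is a minimal sketch of that idea, assuming 8-bit RGB pixels; the pattern and function names are invented for illustration, not taken from any real watermarking scheme:

```python
# A toy least-significant-bit (LSB) watermark.
PATTERN = [1, 0, 1, 1, 0, 0, 1, 0]  # the secret bit pattern to hide

def embed(pixels):
    """Overwrite the LSB of each pixel's green channel with the
    repeating pattern; the color change is imperceptible."""
    return [(r, (g & ~1) | PATTERN[i % len(PATTERN)], b)
            for i, (r, g, b) in enumerate(pixels)]

def detect(pixels):
    """Report whether the green-channel LSBs match the pattern."""
    return all((g & 1) == PATTERN[i % len(PATTERN)]
               for i, (_, g, _) in enumerate(pixels))

pixels = [(0x93, 0xC4, 0x7D)] * 16  # 16 pixels of the shade #93c47d
marked = embed(pixels)              # pattern bits flip some to #93c57d
print(detect(marked), detect(pixels))  # True False
```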

There are more sophisticated watermarking proposals that are robust to a wider variety of common edits. However, proposals for AI watermarking must pass a tougher challenge. They must be robust against someone who knows about the watermark and wants to eliminate it. The person who wants to remove a watermark isn’t limited to common edits, but can directly manipulate the image file. For instance, if a watermark is encoded in the least important bits of an image, someone could remove it by simply setting all the least important bits to 0, or to a random value (1 or 0), or to a value automatically predicted based on neighboring pixels. Just like adding a watermark, removing a watermark this way gives an image that looks basically identical to the original, at least to a human eye.
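Continuing the toy example above, the removal step is just as simple; randomizing the low bits wipes out any pattern hidden there:

```python
import random

def strip_watermark(pixels):
    """Randomize every green-channel LSB: the image looks the same,
    but any pattern hidden in those bits is destroyed."""
    return [(r, (g & ~1) | random.getrandbits(1), b)
            for (r, g, b) in pixels]

print(detect(strip_watermark(marked)))  # almost certainly False
```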

Coming at the problem from the opposite direction, some companies are working on ways to prove that an image came from a camera (“content authenticity”). Rather than marking AI generated images, they add metadata to camera-generated images, and use cryptographic signatures to prove the metadata is genuine. This approach is more workable than watermarking AI generated images, since there’s no incentive to remove the mark. In fact, there’s the opposite incentive: publishers would want to keep this metadata around because it helps establish that their images are “real.” But it’s still a fiendishly complicated scheme, since the chain of verifiability has to be preserved through all software used to edit photos. And most cameras will never produce this metadata, meaning that its absence can’t be used to prove a photograph is fake.
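As a very rough sketch of the signing idea, using the third-party cryptography package (the real C2PA scheme is far more involved, and the key handling and data here are purely illustrative):

```python
# A camera maker signs image bytes plus metadata; anyone holding the
# maker's public key can verify the pair has not been altered.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

camera_key = Ed25519PrivateKey.generate()  # lives inside the camera
public_key = camera_key.public_key()       # published by the maker

image_bytes = b"...raw image data..."      # placeholder image bytes
metadata = b"2024-01-05T12:00:00Z,f/2.8,48.85N 2.35E"
signature = camera_key.sign(image_bytes + metadata)

try:
    public_key.verify(signature, image_bytes + metadata)
    print("metadata is genuine")
except InvalidSignature:
    print("image or metadata was altered")
```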

Comparing the two approaches: watermarking aims to identify or mark (some) fake images, while content authenticity aims to identify or mark (some) real images. Neither approach is comprehensive, since most of the images on the Internet will have neither a watermark nor content authenticity metadata.

                       Watermarking   Content authenticity
AI images              Marked         Unmarked
(Some) camera images   Unmarked       Marked
Everything else        Unmarked       Unmarked

Text-based Watermarks

The watermarking problem is even harder for text-based generative AI. Similar techniques can be devised. For instance, an AI could boost the probability of certain words, giving itself a subtle textual style that would go unnoticed most of the time, but could be recognized by a program with access to the list of words. This would effectively be a computer version of determining the authorship of the twelve disputed essays in The Federalist Papers by analyzing Madison’s and Hamilton’s habitual word choices.
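A toy detector along these lines, assuming the generator secretly favored words from a known list (GREEN_LIST and the threshold are invented for illustration; real proposals choose the favored words pseudorandomly per context):

```python
# Detect a word-choice watermark by counting secretly favored words.
GREEN_LIST = {"utilize", "demonstrate", "furthermore", "notably", "robust"}

def green_fraction(text):
    """Fraction of words that come from the favored list."""
    words = [w.strip(".,;:!?") for w in text.lower().split()]
    return sum(w in GREEN_LIST for w in words) / max(len(words), 1)

def looks_watermarked(text, threshold=0.10):
    """Flag text whose favored-word rate far exceeds ordinary usage."""
    return green_fraction(text) > threshold

print(looks_watermarked(
    "We utilize robust methods and, notably, demonstrate strong results."))
# True: 4 of the 9 words are on the list
```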

But creating an indelible textual watermark is a much harder task than telling Hamilton from Madison, since the watermark must be robust to someone modifying the text trying to remove it. Any watermark based on word choice is likely to be defeated by some amount of rewording. That rewording could even be performed by an alternate AI, perhaps one that is less sophisticated than the one that generated the original text, but not subject to a watermarking requirement.

There’s also a problem of whether the tools to detect watermarked text are publicly available or are secret. Making detection tools publicly available gives an advantage to those who want to remove watermarking, because they can repeatedly edit their text or image until the detection tool gives an all clear. But keeping them a secret makes them dramatically less useful, because every detection request must be sent to whatever company produced the watermarking. That would potentially require people to share private communication if they wanted to check for a watermark. And it would hinder attempts by social media companies to automatically label AI-generated content at scale, since they’d have to run every post past the big AI companies.

Since text output from current AIs isn’t watermarked, services like GPTZero and TurnItIn have popped up, claiming to be able to detect AI-generated content anyhow. These detection tools are so inaccurate as to be dangerous, and have already led to false charges of plagiarism.

Lastly, if AI watermarking is to prevent disinformation campaigns sponsored by states, it’s important to keep in mind that those states can readily develop modern generative AI, and probably will in the near future. A state-sponsored disinformation campaign is unlikely to be so polite as to watermark its output.

Watermarking of AI generated content is an easy-sounding fix for the thorny problem of disinformation. And watermarks may be useful in understanding reshared content where there is no deceptive intent. But research into adversarial watermarking for AI is just beginning, and while there’s no strong reason to believe it will succeed, there are some good reasons to believe it will ultimately fail.

Jacob Hoffman-Andrews

[B] “Ethnic Cleansing and Ethnic Expulsion Are Heinous State Crimes” [Western Sahara Latest Report] by Itsuko Hirata

3 months 2 weeks ago
Israel’s genocide of the Palestinians, abetted by the United States (or, more accurately, with the United States as the principal offender), has not abated with the new year: on New Year’s Day alone, Israeli bombing killed 156 Gazan civilians. Between the start of the Gaza war on October 7, 2023, and January 5, Israel killed 22,600 people in Gaza. Questioned about the death toll at a December 30, 2023 press conference, Netanyahu, prime minister of Israel’s military regime, scoffed: “I don’t trust the numbers Hamas puts out... Israel has killed more than 8,000 Hamas fighters.” More than 8,800, as it happens, is the number of children Israel has killed. U.S. President Biden, meanwhile, nods along, taking Netanyahu at his word.
日刊ベリタ

[Online Lecture] Misa Koyama, author of the 2023 JCJ Award-winning “The ‘Black Rain’ Lawsuit,” on fulfilling the hibakusha’s wish to “leave the facts to future generations” (by Masahiro Hashizume)

3 months 2 weeks ago
The first of the online lectures by winners of the 2023 JCJ Award featured Misa Koyama [photo], author of “The ‘Black Rain’ Lawsuit” (Shueisha Shinsho, July 2022). In her November 19 lecture, titled “A Journalist’s Work, as Learned from the A-bomb ‘Black Rain’ Lawsuit,” she discussed her reporting on Hiroshima A-bomb survivors as a Mainichi Shimbun reporter and her doubts about the media’s reporting posture, and described her work as a freelance journalist since leaving the paper at the end of last year. What first drew Koyama, an Osaka native with no personal ties to Hiroshima, to take a strong interest in the Hiroshima atomic bombing was Doshisha...
JCJ

Meta Quest 2 gets a price cut; the 128GB model returns to its launch-era 30,000-yen range

3 months 2 weeks ago
On December 31, Meta announced a price cut for the Meta Quest 2, its standalone VR headset that requires no PC or console. From January 1, the 128GB model drops from 47,300 yen to 39,600 yen, and the 256GB model from 53,900 yen to 46,200 yen, a 7,700-yen reduction across all models (Meta Quest 2 official site, Mado no Mori). The cut effectively makes permanent the 7,700-yen discount campaign that ran from November 12 through December 31. The Meta Quest 2 sold in the 30,000-yen range at launch, but its price rose in 2022 due to higher manufacturing and shipping costs and the weak yen. With this cut, the entry price is back in its launch-era 30,000-yen range.


Related stories:
Meta’s new VR HMD “Meta Quest 3” launches October 10, from 74,800 yen (October 2, 2023)
VR HMD “Meta Quest 2” to get a steep price hike from August (July 28, 2022)

nagazou

The Haneda Collision Was a Man-Made Disaster Caused by the Reckless Overcrowding of Haneda Airport / Safety Issues Research Group

3 months 2 weeks ago
2024 opened with the Noto Peninsula earthquake on New Year’s Day, followed on January 2 by the collision between a JAL airliner and a Japan Coast Guard aircraft at Haneda Airport. It already seems certain this will be another turbulent year. When I heard the first report that the plane had collided with a Coast Guard aircraft, I immediately sensed a connection to the Hokuriku earthquake: apart from personnel on standby for maritime accidents and disasters, Coast Guard staff are on New Year’s holiday, and large-scale Coast Guard flight operations are rare at this time of year. Reporting so far has suggested that the Coast Guard crew may have misheard the air traffic controller’s instructions, but I want to point out the real background the mass media are not covering: the re-internationalization and forced overcrowding of Haneda.