Results of the Public Comment Procedure on the Draft Ministerial Ordinance Partially Amending the Ministerial Ordinance Establishing Technical Standards for Detectors and Transmitters of Fire Alarm Systems and the Ministerial Ordinance Concerning Firefighting Equipment, etc. Having the Fire Prevention Safety Performance Required in Specified Small-Scale Facilities, and Promulgation of the Amended Ordinances, etc.

4 days 3 hours ago
総務省

Supreme Court Dodges Key Question in Murthy v. Missouri and Dismisses Case for Failing to Connect The Government’s Communication to Specific Platform Moderation

4 days 3 hours ago

We don’t know a lot more about when government jawboning social media companies—that is, attempting to pressure them to censor users’ speech—violates the First Amendment; but we do know that lawsuits based on such actions will be hard to win. In Murthy v. Missouri, the U.S. Supreme Court did not answer the important First Amendment question before it—how does one distinguish permissible from impermissible government communications with social media platforms about the speech they publish? Rather, it dismissed the cases because none of the plaintiffs could show that any of the statements by the government they complained of were likely the cause of any specific actions taken by the social media platforms against them or that they would happen again.

As we have written before, the First Amendment forbids the government from coercing a private entity to censor, whether the coercion is direct or subtle. This has been an important principle in countering efforts to threaten and pressure intermediaries like bookstores and credit card processors to limit others’ speech. But not every communication to an intermediary about users’ speech is unconstitutional; indeed, some are beneficial—for example, platforms often reach out to government actors they perceive as authoritative sources of information. And the distinction between proper and improper speech is often obscure. 


So, when do the government’s efforts to persuade one to censor another become coercion? This was a hard question prior to Murthy, and unfortunately it remains so, though a different jawboning case, also recently decided, provides some clarity.

Rather than provide guidance to courts about the line between permissible and impermissible government communications with platforms about publishing users’ speech, the Supreme Court dismissed Murthy, holding that every plaintiff lacked “standing” to bring the lawsuit. That is, none of the plaintiffs had presented sufficient facts to show that the government did in the past or would in the future coerce a social media platform to take down, deamplify, or otherwise obscure any of the plaintiffs’ specific social media posts. So, while the Supreme Court did not tell us more about coercion, it did remind us that it is very hard to win lawsuits alleging coercion. 

The through line between this case and Moody v. NetChoice, decided by the Supreme Court a few weeks later, is that social media platforms have a First Amendment right to moderate the speech any user sees, and, because they exercise that right routinely, a plaintiff who believes they have been jawboned must prove that the platform acted because of the government’s dictate, not its own editorial decision.

Plaintiffs Lack Standing to Bring Jawboning Claims

Article III of the U.S. Constitution limits federal courts to considering only “cases and controversies.” This limitation requires that any plaintiff have suffered an injury that is traceable to the defendants and that the court has the power to redress. The standing doctrine can be a significant barrier to litigants without full knowledge of the facts and circumstances surrounding their injuries, and EFF has often complained that courts require plaintiffs to prove their cases on the merits at very early stages of litigation, before the discovery process. Indeed, EFF’s landmark mass surveillance litigation, Jewel v. NSA, was ultimately dismissed because the plaintiffs lacked standing to sue.


The standing question here differs from cases such as Jewel where courts have denied plaintiffs discovery because they couldn’t demonstrate their standing without an opportunity to gather evidence of the suspected wrongdoing. The Murthy plaintiffs had an opportunity to gather extensive evidence of suspected wrongdoing—indeed, the Supreme Court noted that the case’s factual record exceeds 26,000 pages. And the Supreme Court considered this record in its standing analysis.   

While the Supreme Court did not provide guidance on what constitutes impermissible government coercion of social media platforms in Murthy, its ruling does tell us what type of cause-and-effect a plaintiff must prove to win a jawboning case. 

A plaintiff will have to prove that the negative treatment of their speech was attributable to the government, not the independent action of the platform. This accounts for basic truths of content moderation, which we emphasized in our amicus brief: that platforms moderate all the time, often based on their community guidelines, but also often ad hoc, and informed by input from users and a variety of outside experts. 

When, as in this case, plaintiffs ask a court to stop the government from ongoing or future coercion of a platform to remove, deamplify, or otherwise obscure the plaintiffs’ speech—rather than, for example, compensate for harm caused by past coercion—those plaintiffs must show a real and immediate threat that they will be harmed again. Past incidents of government jawboning are relevant only to predict a repeat of that behavior. Further, plaintiffs seeking to stop ongoing or future government coercion must show that the platform will change its policies and practices back to their pre-coerced state should the government be ordered to stop. 

Fortunately, plaintiffs will only have to prove that a particular government actor “pressured a particular platform to censor a particular topic before that platform suppressed a particular plaintiff’s speech on that topic.” Plaintiffs do not need to show that the government targeted their posts specifically, just the general topic of their posts, and that their posts were negatively moderated as a result.

The main fault in the Murthy plaintiffs’ case was weak evidence that the government actually caused a social media platform to take down, deamplify, or otherwise obscure any of the plaintiffs’ social media posts or any particular social media post at all. Indeed, the evidence that the content moderation decisions were the platforms’ independent decisions was stronger: the platforms had all moderated similar content for years and strengthened their content moderation standards before the government got involved; they spoke not just with the government but with other outside experts; and they had independent, non-governmental incentives to moderate user speech as they did. 

The Murthy plaintiffs also failed to show that the government jawboning they complained of, much of it focusing on COVID and vaccine posts, was continuing. As the Court noted, the government appears to have ceased those efforts. It was not enough that the plaintiffs continue to suffer ill effects from that past behavior. 

And lastly, the plaintiffs could not show that the order they sought from the courts preventing the government from further jawboning would actually cure their injuries, since the platforms may still exercise independent judgment to negatively moderate the plaintiffs’ posts even without governmental involvement. 

The Court Narrows the Right to Listen

The right to listen and receive information is an important First Amendment right that has typically allowed those who are denied access to censored speech to sue to regain access. EFF has fervently supported this right. 

But the Supreme Court’s opinion in Murthy v. Missouri narrows this right. The Court explained that only those with a “concrete, specific connection to the speaker” have standing to sue to challenge such censorship. At a minimum, it appears, one who wants to sue must point to specific instances of censorship that have caused them harm; it is not enough to claim an interest in a person’s speech generally or to claim harm from being denied “unfettered access to social media.” While this holding rightfully applies to the States, which had sought to vindicate the audience interests of their entire populaces, it is more problematic when applied to individual plaintiffs. Going forward, EFF will advocate for a narrow reading of this holding.

As we pointed out in our amicus briefs and blog posts, this case was always a difficult one for litigating the important question of defining illegal jawboning because it was based more on a sprawling, multi-agency conspiracy theory than on specific takedown demands resulting in actual takedowns. The Supreme Court seems to have seen it the same way.

But the Supreme Court’s Other Jawboning Case Does Help Clarify Coercion  

Fortunately, we do know a little more about the line between permissible government persuasion and impermissible coercion from a different jawboning case, outside the social media context, that the Supreme Court also decided this year: NRA v. Vullo.  


NRA v. Vullo is a lawsuit by the National Rifle Association alleging that the New York state agency that oversees the insurance industry threatened insurance companies with enforcement actions if they continued to offer coverage to the NRA. Unlike Murthy, the case came to the Supreme Court on a motion to dismiss, before any discovery had been conducted, at a stage when courts are required to accept all of the plaintiffs’ factual allegations as true.

The Supreme Court importantly affirmed that the controlling case for jawboning is Bantam Books v. Sullivan, a 1963 case in which the Supreme Court established that governments violate the First Amendment by coercing one person to censor another person’s speech over which they exercise control, what the Supreme Court called “indirect censorship.”   

In Vullo, the Supreme Court endorsed a multi-factored test that many of the lower courts had adopted as a “useful, though nonexhaustive, guide” to answering the ultimate question in jawboning cases: did the plaintiff “plausibly allege conduct that, viewed in context, could be reasonably understood to convey a threat of adverse government action in order to punish or suppress the plaintiff’s speech?” Those factors are: (1) word choice and tone, (2) the existence of regulatory authority (that is, the ability of the government speaker to actually carry out the threat), (3) whether the speech was perceived as a threat, and (4) whether the speech refers to adverse consequences. The Supreme Court explained that the second and third factors are related—the more authority an official wields over someone, the more likely they are to perceive the official’s speech as a threat, and the less likely they are to disregard the official’s directives. And the Supreme Court made clear that coercion may arise from either threats or inducements.

In our amicus brief in Murthy, we had urged the Court to make clear that an official’s intent to coerce is also highly relevant. The Supreme Court did not directly state this, unfortunately. But it did several times refer to the NRA as having properly alleged that the “coercive threats were aimed at punishing or suppressing disfavored speech.”

At EFF, we will continue to look for cases that present good opportunities to bring jawboning claims before the courts and to bring additional clarity to this important doctrine. 

 

David Greene

[B] A Tragedy That Strikes During Summer Vacation: Ciao! Italy Newsletter (Noriko Sato)

4 days 4 hours ago
On July 18, a one-and-a-half-year-old girl died in her father’s car in the city of Venice. The father was on his way to work and had forgotten to drop the girl off at daycare or with relatives. This kind of accident happens occasionally in Japan as well, and it is hard to believe that a parent could simply forget their own child. (Noriko Sato, resident in Italy)
日刊ベリタ

[Publishing Industry Trends] Unique Initiatives and Challenges to Stimulate Demand for Books (Publishing Section)

4 days 7 hours ago
◆ The 171st Akutagawa Prize and Naoki Prize have been decided. Brief introductions to the winning works: <Akutagawa Prize> Aki Asahina, Sanshōuo no Shijūkunichi (The Salamander’s Forty-Nine Days) (Shinchosha). One body, yet half of it is the older sister and the other half the younger; the novel depicts their astonishing life. To those around them they look like a single person, but right next to me is another me. Who is the you beside me? The sisters wonder, and who is it that is thinking this very thought now? Drawing on the author’s experience as a practicing physician and a prodigious imagination, the book depicts the universals of human life, a story the world is encountering for the first time. K Sanzo Matsunaga, Bari Sankō (Kodansha 2024/7/..
JCJ

Why Privacy Badger Opts You Out of Google’s “Privacy Sandbox”

4 days 7 hours ago

Update July 22, 2024: Shortly after we published this post, Google announced it's no longer deprecating third-party cookies in Chrome. We've updated this blog to note the news.

The latest update of Privacy Badger opts users out of ad tracking through Google’s “Privacy Sandbox.” 

Privacy Sandbox is Google’s way of letting advertisers keep targeting ads based on your online behavior without using third-party cookies. Third-party cookies were once the most common form of online tracking technology, but major browsers, like Safari and Firefox, started blocking them several years ago. After pledging to eventually do the same for Chrome in 2020, and after several delays, today Google backtracked on its privacy promise, announcing that third-party cookies are here to stay. Notably, Google Chrome continues to lag behind other browsers in terms of default protections against online tracking.

Privacy Sandbox might be less invasive than third-party cookies, but that doesn’t mean it’s good for your privacy. Instead of eliminating online tracking, Privacy Sandbox simply shifts control of online tracking from third-party trackers to Google. With Privacy Sandbox, tracking will be done by your Chrome browser itself, which shares insights gleaned from your browsing habits with different websites and advertisers. Despite sounding like a feature that protects your privacy, Privacy Sandbox ultimately protects Google's advertising business.

How did Google get users to go along with this? In 2023, Chrome users received a pop-up about “Enhanced ad privacy in Chrome.” In the U.S., if you clicked the “Got it” button to make the pop-up go away, Privacy Sandbox remained enabled for you by default. Users could opt out by changing three settings in Chrome. But first, they had to realize that "Enhanced ad privacy" actually enabled a new form of ad tracking.

You shouldn't have to read between the lines of Google’s privacy-washing language to protect your privacy. Privacy Badger will do this for you!

Three Privacy Sandbox Features That Privacy Badger Disables For You

If you use Google Chrome, Privacy Badger will update three different settings that constitute Privacy Sandbox (illustrative code sketches of the page-facing API and the extension-side opt-out follow the list):

  • Ad topics: This setting allows Google to generate a list of topics you’re interested in based on the websites you visit. Any site you visit can ask Chrome what topics you’re supposedly into, then display an ad accordingly. Some of the potential topics, like “Student Loans & College Financing”, “Credit Reporting & Monitoring”, and “Unwanted Body & Facial Hair Removal”, could serve as proxies for sensitive financial or health information, potentially enabling predatory ad targeting. In an attempt to prevent advertisers from identifying you, your topics roll over each week and Chrome includes a random topic 5% of the time. However, researchers found that Privacy Sandbox topics could be used to re-identify users across websites. Using 1,207 people’s real browsing histories, researchers showed that as few as three observations of a person’s “ad topics” were enough to identify 60% of users across different websites.

  • Site-suggested ads: This setting enables "remarketing" or "retargeting," which is the reason you’re constantly seeing ads for things you just shopped for online. It works by allowing any site you visit to give information (like “this person loves sofas”) to your Chrome browser. Then when you visit a site that runs ads, Chrome uses that information to help the site display a sofa ad without the site learning that you love sofas. However, researchers demonstrated this feature of Privacy Sandbox could be exploited to re-identify and track users across websites, partially infer a user’s browsing history, and manipulate the ads that other sites show a user.

  • Ad measurement: This setting allows advertisers to track ad performance by storing data in your browser that's then shared with the advertised sites. For example, after you see an ad for shoes, whenever you visit that shoe site it’ll get information about the time of day the ad was shown and where the ad was displayed. Unfortunately, Google allows advertisers to include a unique ID with this data. So if you interact with multiple ads from the same advertiser around the web, this ID can help an advertiser build a profile of your browsing habits.
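To make the “Ad topics” mechanics concrete, here is a minimal page-side sketch of querying Chrome’s Topics API. The `document.browsingTopics()` call is the real entry point (available in recent Chrome versions); the helper name `logInferredTopics` and the surrounding scaffolding are illustrative only, and the call returns topics only when Privacy Sandbox is enabled and not blocked.

```typescript
// Page-side sketch: ask Chrome which ad topics it has inferred for this
// visitor. browsingTopics() is not in TypeScript's standard DOM typings,
// so we widen the document type. Each entry carries a numeric "topic" ID
// that maps into Google's public topics taxonomy, plus version fields.
async function logInferredTopics(): Promise<void> {
  const doc = document as Document & {
    browsingTopics?: () => Promise<Array<{ topic: number; version: string }>>;
  };
  if (!doc.browsingTopics) {
    console.log("Topics API unavailable (disabled, blocked, or unsupported).");
    return;
  }
  const topics = await doc.browsingTopics(); // e.g. [{ topic: 254, ... }]
  console.log("Topics Chrome would share with this site:", topics);
}
```

On the extension side, Chrome exposes all three of the settings listed above to extensions that hold the "privacy" permission, as `ChromeSetting` objects under `chrome.privacy.websites`. The sketch below shows how an extension could switch them off; it is a minimal illustration of that browser API, not Privacy Badger’s actual source code.

```typescript
// Extension-side sketch (Manifest V3, "privacy" permission): opt the user
// out of the three Privacy Sandbox settings described above.
async function disablePrivacySandbox(): Promise<void> {
  const settings = [
    chrome.privacy.websites.topicsEnabled,        // "Ad topics"
    chrome.privacy.websites.fledgeEnabled,        // "Site-suggested ads"
    chrome.privacy.websites.adMeasurementEnabled, // "Ad measurement"
  ];
  for (const setting of settings) {
    if (!setting) continue; // Older Chrome versions lack these settings.
    // Only touch settings this extension is actually allowed to control.
    const { levelOfControl } = await setting.get({});
    if (levelOfControl === "controllable_by_this_extension" ||
        levelOfControl === "controlled_by_this_extension") {
      await setting.set({ value: false });
    }
  }
}

disablePrivacySandbox().catch(console.error);
```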

Why Privacy Badger Opts Users Out of Privacy Sandbox

Privacy Badger is committed to protecting you from online tracking. Despite being billed as a privacy feature, Privacy Sandbox protects Google’s bottom line at the expense of your privacy. Nearly 80% of Google’s revenue comes from online advertising. By building ad tracking into your Chrome browser, Privacy Sandbox gives Google even more control of the advertising ecosystem than it already has. Yet again, Google is rewriting the rules for the internet in a way that benefits itself first.

Researchers and regulators have already found that Privacy Sandbox “fails to meet its own privacy goals.” In a draft report leaked to the Wall Street Journal, the UK’s privacy regulator noted that Privacy Sandbox could be exploited to identify anonymous users and that companies will likely use it to continue tracking users across sites. Likewise, after researchers told Google about 12 attacks they had conducted on a key feature of Privacy Sandbox prior to its public release, Google forged ahead and released the feature having mitigated only one of those attacks.

Privacy Sandbox offers some privacy improvements over third-party cookies. But it reinforces Google’s commitment to behavioral advertising, something we’ve been advocating against for years. Behavioral advertising incentivizes online actors to collect as much of our information as possible. This can lead to a range of harms, like bad actors buying your sensitive information and predatory ads targeting vulnerable populations.

Your browser shouldn’t put advertisers' interests above yours. As Google turns your browser into an advertising agent, Privacy Badger will put your privacy first.

What You Can Do Now

If you don’t already have Privacy Badger, install it now to automatically opt out of Privacy Sandbox and the broader ecosystem of online tracking. Already have Privacy Badger? You’re all set! And of course, don’t hesitate to spread the word to friends and family you want to protect from invasive online tracking. With your help, Privacy Badger will keep fighting to end online tracking and build a safer internet for all.

Lena Cohen