Supreme Court Dodges Key Question in Murthy v. Missouri and Dismisses Case for Failing to Connect The Government’s Communication to Specific Platform Moderation

We don’t know a lot more about when government jawboning of social media companies—that is, attempting to pressure them to censor users’ speech—violates the First Amendment; but we do know that lawsuits based on such actions will be hard to win. In Murthy v. Missouri, the U.S. Supreme Court did not answer the important First Amendment question before it: how does one distinguish permissible from impermissible government communications with social media platforms about the speech they publish? Rather, it dismissed the cases because none of the plaintiffs could show that any of the government statements they complained of likely caused any specific action a social media platform took against them, or that such actions were likely to happen again.

As we have written before, the First Amendment forbids the government from coercing a private entity to censor, whether the coercion is direct or subtle. This has been an important principle in countering efforts to threaten and pressure intermediaries like bookstores and credit card processors to limit others’ speech. But not every communication to an intermediary about users’ speech is unconstitutional; indeed, some are beneficial—for example, platforms often reach out to government actors they perceive as authoritative sources of information. And the distinction between proper and improper speech is often obscure. 

So, when do the government’s efforts to persuade one to censor another become coercion? This was a hard question prior to Murthy. And unfortunately, it remains so, though a different jawboning case also recently decided provides some clarity. 

Rather than provide guidance to courts about the line between permissible and impermissible government communications with platforms about publishing users’ speech, the Supreme Court dismissed Murthy, holding that every plaintiff lacked “standing” to bring the lawsuit. That is, none of the plaintiffs had presented sufficient facts to show that the government did in the past or would in the future coerce a social media platform to take down, deamplify, or otherwise obscure any of the plaintiffs’ specific social media posts. So, while the Supreme Court did not tell us more about coercion, it did remind us that it is very hard to win lawsuits alleging coercion. 

The through line between this case and Moody v. NetChoice, decided by the Supreme Court a few weeks later, is that social media platforms have a First Amendment right to moderate the speech any user sees, and, because they exercise that right routinely, a plaintiff who believes they have been jawboned must prove that the negative treatment of their speech resulted from the government’s dictate, not the platform’s own decision. 

Plaintiffs Lack Standing to Bring Jawboning Claims 

Article III of the U.S. Constitution limits federal courts to considering only “cases and controversies.” This limitation requires that any plaintiff have suffered an injury that is traceable to the defendants and that the court has the power to fix. The standing doctrine can be a significant barrier to litigants without full knowledge of the facts and circumstances surrounding their injuries, and EFF has often complained that courts require plaintiffs to prove their cases on the merits at very early stages of litigation, before the discovery process. Indeed, EFF’s landmark mass surveillance litigation, Jewel v. NSA, was ultimately dismissed because the plaintiffs lacked standing to sue.

The standing question here differs from cases such as Jewel where courts have denied plaintiffs discovery because they couldn’t demonstrate their standing without an opportunity to gather evidence of the suspected wrongdoing. The Murthy plaintiffs had an opportunity to gather extensive evidence of suspected wrongdoing—indeed, the Supreme Court noted that the case’s factual record exceeds 26,000 pages. And the Supreme Court considered this record in its standing analysis.   

While the Supreme Court did not provide guidance on what constitutes impermissible government coercion of social media platforms in Murthy, its ruling does tell us what type of cause-and-effect a plaintiff must prove to win a jawboning case. 

A plaintiff will have to prove that the negative treatment of their speech was attributable to the government, not the independent action of the platform. This accounts for basic truths of content moderation, which we emphasized in our amicus brief: that platforms moderate all the time, often based on their community guidelines, but also often ad hoc, and informed by input from users and a variety of outside experts. 

When, as in this case, plaintiffs ask a court to stop the government from ongoing or future coercion of a platform to remove, deamplify, or otherwise obscure the plaintiffs’ speech—rather than, for example, compensate for harm caused by past coercion—those plaintiffs must show a real and immediate threat that they will be harmed again. Past incidents of government jawboning are relevant only to predict a repeat of that behavior. Further, plaintiffs seeking to stop ongoing or future government coercion must show that the platform will change its policies and practices back to their pre-coerced state should the government be ordered to stop. 

Fortunately, plaintiffs will only have to prove that a particular government actor “pressured a particular platform to censor a particular topic before that platform suppressed a particular plaintiff’s speech on that topic.” Plaintiffs do not need to show that the government targeted their posts specifically, just the general topic of their posts, and that their posts were negatively moderated as a result.  

The main fault in the Murthy plaintiffs’ case was weak evidence that the government actually caused a social media platform to take down, deamplify, or otherwise obscure any of the plaintiffs’ social media posts or any particular social media post at all. Indeed, the evidence that the content moderation decisions were the platforms’ independent decisions was stronger: the platforms had all moderated similar content for years and strengthened their content moderation standards before the government got involved; they spoke not just with the government but with other outside experts; and they had independent, non-governmental incentives to moderate user speech as they did. 

The Murthy plaintiffs also failed to show that the government jawboning they complained of, much of it focusing on COVID and vaccine posts, was continuing. As the Court noted, the government appears to have ceased those efforts. It was not enough that the plaintiffs continue to suffer ill effects from that past behavior. 

And lastly, the plaintiffs could not show that the order they sought from the courts preventing the government from further jawboning would actually cure their injuries, since the platforms may still exercise independent judgment to negatively moderate the plaintiffs’ posts even without governmental involvement. 

The Court Narrows the Right to Listen 

The right to listen and receive information is an important First Amendment right that has typically allowed those who are denied access to censored speech to sue to regain access. EFF has fervently supported this right. 

But the Supreme Court’s opinion in Murthy v. Missouri narrows this right. The Court explains that only those with a “concrete, specific connection to the speaker” have standing to sue to challenge such censorship. At a minimum, it appears, one who wants to sue must point to specific instances of censorship that have caused them harm; it is not enough to claim an interest in a person’s speech generally or to claim harm from being denied “unfettered access to social media.” While this holding rightfully applies to the States, which had sought to vindicate the audience interests of their entire populaces, it is more problematic when applied to individual plaintiffs. Going forward, EFF will advocate for a narrow reading of this holding. 

As we pointed out in our amicus briefs and blog posts, this case was always a difficult one for litigating the important question of defining illegal jawboning because it was based more on a sprawling, multi-agency conspiracy theory than on specific takedown demands resulting in actual takedowns. The Supreme Court seems to have seen it the same way. 

But the Supreme Court’s Other Jawboning Case Does Help Clarify Coercion  

Fortunately, we do know a little more about the line between permissible government persuasion and impermissible coercion from a different jawboning case, outside the social media context, that the Supreme Court also decided this year: NRA v. Vullo.  

NRA v. Vullo is a lawsuit by the National Rifle Association alleging that the New York state agency that oversees the insurance industry threatened insurance companies with enforcement actions if they continued to offer coverage to the NRA. Unlike Murthy, the case came to the Supreme Court on a motion to dismiss, before any discovery had been conducted, at a stage when courts are required to accept all of the plaintiffs’ factual allegations as true. 

The Supreme Court importantly affirmed that the controlling case for jawboning is Bantam Books v. Sullivan, a 1963 case in which the Supreme Court established that governments violate the First Amendment by coercing one person to censor another person’s speech over which they exercise control, what the Supreme Court called “indirect censorship.”   

In Vullo, the Supreme Court endorsed a multi-factored test that many of the lower courts had adopted, as a “useful, though nonexhaustive, guide” to answering the ultimate question in jawboning cases: did the plaintiff “plausibly allege conduct that, viewed in context, could be reasonably understood to convey a threat of adverse government action in order to punish or suppress the plaintiff’s speech?” Those factors are: (1) word choice and tone, (2) the existence of regulatory authority (that is, the ability of the government speaker to actually carry out the threat), (3) whether the speech was perceived as a threat, and (4) whether the speech refers to adverse consequences. The Supreme Court explained that the second and third factors are related—the more authority an official wields over someone, the more likely they are to perceive that official’s speech as a threat, and the less likely they are to disregard a directive from that official. And the Supreme Court made clear that coercion may arise from either threats or inducements.  

In our amicus brief in Murthy, we had urged the Court to make clear that an official’s intent to coerce was also highly relevant. The Supreme Court did not directly state this, unfortunately. But it did refer several times to the NRA as having properly alleged that the “coercive threats were aimed at punishing or suppressing disfavored speech.”  

At EFF, we will continue to look for cases that present good opportunities to bring jawboning claims before the courts and to bring additional clarity to this important doctrine. 

 

David Greene

Why Privacy Badger Opts You Out of Google’s “Privacy Sandbox”

Update July 22, 2024: Shortly after we published this post, Google announced it's no longer deprecating third-party cookies in Chrome. We've updated this blog to note the news.

The latest update of Privacy Badger opts users out of ad tracking through Google’s “Privacy Sandbox.” 

Privacy Sandbox is Google’s way of letting advertisers keep targeting ads based on your online behavior without using third-party cookies. Third-party cookies were once the most common form of online tracking technology, but major browsers, like Safari and Firefox, started blocking them several years ago. Google pledged in 2020 to eventually do the same for Chrome but, after several delays, today backtracked on its privacy promise, announcing that third-party cookies are here to stay. Notably, Google Chrome continues to lag behind other browsers in terms of default protections against online tracking.

Privacy Sandbox might be less invasive than third-party cookies, but that doesn’t mean it’s good for your privacy. Instead of eliminating online tracking, Privacy Sandbox simply shifts control of online tracking from third-party trackers to Google. With Privacy Sandbox, tracking will be done by your Chrome browser itself, which shares insights gleaned from your browsing habits with different websites and advertisers. Despite sounding like a feature that protects your privacy, Privacy Sandbox ultimately protects Google's advertising business.

How did Google get users to go along with this? In 2023, Chrome users received a pop-up about “Enhanced ad privacy in Chrome.” In the U.S., if you clicked the “Got it” button to make the pop-up go away, Privacy Sandbox remained enabled for you by default. Users could opt out by changing three settings in Chrome. But first, they had to realize that "Enhanced ad privacy" actually enabled a new form of ad tracking.

You shouldn't have to read between the lines of Google’s privacy-washing language to protect your privacy. Privacy Badger will do this for you!

Three Privacy Sandbox features that Privacy Badger disables for you

If you use Google Chrome, Privacy Badger will update three different settings that constitute Privacy Sandbox:

  • Ad topics: This setting allows Google to generate a list of topics you’re interested in based on the websites you visit. Any site you visit can ask Chrome what topics you’re supposedly into, then display an ad accordingly. Some of the potential topics–like “Student Loans & College Financing”, “Credit Reporting & Monitoring”, and “Unwanted Body & Facial Hair Removal”–could serve as proxies for sensitive financial or health information, potentially enabling predatory ad targeting. In an attempt to prevent advertisers from identifying you, your topics roll over each week and Chrome includes a random topic 5% of the time. However, researchers found that Privacy Sandbox topics could be used to re-identify users across websites. Using 1,207 people’s real browsing histories, researchers showed that as few as three observations of a person’s “ad topics” were enough to identify 60% of users across different websites.

  • Site-suggested ads: This setting enables "remarketing" or "retargeting," which is the reason you’re constantly seeing ads for things you just shopped for online. It works by allowing any site you visit to give information (like “this person loves sofas”) to your Chrome browser. Then when you visit a site that runs ads, Chrome uses that information to help the site display a sofa ad without the site learning that you love sofas. However, researchers demonstrated this feature of Privacy Sandbox could be exploited to re-identify and track users across websites, partially infer a user’s browsing history, and manipulate the ads that other sites show a user.

  • Ad measurement: This setting allows advertisers to track ad performance by storing data in your browser that's then shared with the advertised sites. For example, after you see an ad for shoes, whenever you visit that shoe site it’ll get information about the time of day the ad was shown and where the ad was displayed. Unfortunately, Google allows advertisers to include a unique ID with this data. So if you interact with multiple ads from the same advertiser around the web, this ID can help an advertiser build a profile of your browsing habits.
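The cross-site re-identification risk behind the “ad topics” finding can be illustrated with a toy simulation. This is only a sketch, not the researchers’ methodology or the real Topics API: the taxonomy size (~350 topics), the five-stable-interests user model, and the naive maximum-overlap matching are all illustrative assumptions. The only numbers taken from the post are the 5% random-topic noise and the three weekly observations per site.

```python
import random

random.seed(0)

# Illustrative parameters (assumptions, except where noted).
TAXONOMY_SIZE = 350   # rough size of the initial Topics taxonomy (assumed)
TOPICS_PER_USER = 5   # stable interests per simulated user (assumed)
NOISE_RATE = 0.05     # Chrome substitutes a random topic 5% of the time
OBSERVATIONS = 3      # weekly "ad topics" observations per site
NUM_USERS = 100

def make_user():
    """A user's stable interest set: 5 distinct topics from the taxonomy."""
    return random.sample(range(TAXONOMY_SIZE), TOPICS_PER_USER)

def observe(user_topics):
    """One site's view: one topic per week, with 5% random noise."""
    seen = set()
    for _ in range(OBSERVATIONS):
        if random.random() < NOISE_RATE:
            seen.add(random.randrange(TAXONOMY_SIZE))  # decoy topic
        else:
            seen.add(random.choice(user_topics))       # genuine interest
    return seen

users = [make_user() for _ in range(NUM_USERS)]
site_a = [observe(u) for u in users]  # traces collected by site A
site_b = [observe(u) for u in users]  # traces collected by site B

# A naive attacker on site B links each trace to the site-A trace
# sharing the most topics. Random guessing would link ~1 in 100.
correct = 0
for i, trace_b in enumerate(site_b):
    guess = max(range(NUM_USERS), key=lambda j: len(trace_b & site_a[j]))
    correct += (guess == i)

print(f"linked {correct}/{NUM_USERS} users across sites")
```

Even with weekly rollover and 5% noise, this crude matcher links users across the two simulated sites at rates far above the 1% chance baseline, which is the intuition behind the study’s result that a few topic observations suffice to re-identify many users.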

Why Privacy Badger opts users out of Privacy Sandbox

Privacy Badger is committed to protecting you from online tracking. Despite being billed as a privacy feature, Privacy Sandbox protects Google’s bottom line at the expense of your privacy. Nearly 80% of Google’s revenue comes from online advertising. By building ad tracking into your Chrome browser, Privacy Sandbox gives Google even more control of the advertising ecosystem than it already has. Yet again, Google is rewriting the rules for the internet in a way that benefits itself first.

Researchers and regulators have already found that Privacy Sandbox “fails to meet its own privacy goals.” In a draft report leaked to the Wall Street Journal, the UK’s privacy regulator noted that Privacy Sandbox could be exploited to identify anonymous users and that companies will likely use it to continue tracking users across sites. Likewise, researchers conducted 12 attacks on a key feature of Privacy Sandbox prior to its public release and reported them to Google, which forged ahead and released the feature after mitigating only one of those attacks.

Privacy Sandbox offers some privacy improvements over third-party cookies. But it reinforces Google’s commitment to behavioral advertising, something we’ve been advocating against for years. Behavioral advertising incentivizes online actors to collect as much of our information as possible. This can lead to a range of harms, like bad actors buying your sensitive information and predatory ads targeting vulnerable populations.

Your browser shouldn’t put advertisers' interests above yours. As Google turns your browser into an advertising agent, Privacy Badger will put your privacy first.

What you can do now

If you don’t already have Privacy Badger, install it now to automatically opt out of Privacy Sandbox and the broader ecosystem of online tracking. Already have Privacy Badger? You’re all set! And of course, don’t hesitate to spread the word to friends and family you want to protect from invasive online tracking. With your help, Privacy Badger will keep fighting to end online tracking and build a safer internet for all. 



Lena Cohen

Media Briefing: EFF, Partners Warn UN Member States Are Poised to Approve Dangerous International Surveillance Treaty

Countries That Believe in Rule of Law Must Push Back on Draft That Expands Spying Powers, Benefiting Authoritarian Regimes

SAN FRANCISCO—On Wednesday, July 24, at 11:00 am Eastern Time (8:00 am Pacific Time, 5:00 pm CET), experts from Electronic Frontier Foundation (EFF), Access Now, Derechos Digitales, Human Rights Watch, and the International Fund for Public Interest Media will brief reporters about the imminent adoption of a global surveillance treaty that threatens human rights around the world, potentially paving the way for a new era of transnational repression.

The virtual briefing will update members of the media ahead of the United Nations’ concluding session of treaty negotiations, scheduled for July 29-August 9 in New York, to possibly finalize and adopt what started out as a treaty to combat cybercrime.

Despite repeated warnings and recommendations by human rights organizations, journalism and industry groups, cybersecurity experts, and digital rights defenders to add human rights safeguards and rein in the treaty’s broad scope and expansive surveillance powers, UN Member States are expected to adopt the Russian-backed, deeply flawed draft.

The experts will discuss the draft treaty in terms of shifts in geopolitical power, abuse of cybercrime laws, and challenges posed by the rising influence of Russia and China. A question-and-answer session will follow speaker presentations.  

WHAT:
Virtual media briefing on UN surveillance treaty

HOW:
To join the news conference remotely, please register from the following link to receive the webinar ID and password:
https://eff.zoom.us/meeting/register/tZwkd-GsrzoiH9Jt3gsl2CJ55Xv0hBDguxW5

SPEAKERS:
Tirana Hassan, Executive Director, Human Rights Watch
Paloma Lara-Castro, Public Policy Coordinator, Derechos Digitales
Khadija Patel, Journalist in Residence, International Fund for Public Interest Media
Katitza Rodriguez, Policy Director for Global Policy, EFF
Moderator: Raman Jit Singh Chima, Global Cybersecurity Lead and Senior International Counsel, Access Now

WHEN:
Wednesday, July 24, at 11:00 am Eastern Time, 8:00 am Pacific Time, 5:00 pm CET

For EFF’s submissions and Coalition Letters to UN Ad Hoc Committee overseeing treaty negotiations:
https://www.eff.org/pages/submissions#main-content

Contact:
Karen Gullo, Senior Writer for Free Speech and Privacy, karen@eff.org
Deborah Brown, Senior Researcher and Advocate on Technology and Rights, Human Rights Watch, brownd@hrw.org
Catalina Balla, catalina.balla@derechosdigitales.org
Karen Gullo

EFF Tells Minnesota Supreme Court to Strike Down Geofence Warrant As Fourth Circuit Court of Appeals Takes the Wrong Turn

We haven’t seen the end of invasive geofence warrants just yet, despite Google’s big announcement late last year that it was fundamentally changing how it collects location data. Today, EFF is filing an amicus brief in the Minnesota Supreme Court in State v. Contreras-Sanchez, a case involving a geofence warrant that directed Google to turn over an entire month of location data. Our brief argues that the warrant violates the Fourth Amendment and Minnesota’s state constitution.

Geofence warrants require a provider—almost always Google—to search its entire reserve of user location data to identify all users or devices located within a geographic area during a time period specified by law enforcement. This creates a high risk of turning suspicion on innocent people for crimes they didn’t commit and can reveal sensitive and private information about where individuals have traveled in the past. We’ve seen a recent flurry of court cases involving geofence warrants, and these courts’ rulings will set important Fourth Amendment precedent not just in geofence cases, but other investigations involving similar “reverse warrants” such as users’ keyword searches on search engines.

In Contreras-Sanchez, police discovered a dead body on the side of a rural roadway. They did not know when the body was disposed of and had few leads, so they sought a warrant directing Google to turn over location data for the area around the site for the previous month. Notably, Google responded that turning over the entire monthlong dataset would be too “cumbersome,” even though it covered only a relatively sparsely populated area. Instead, following the now-familiar “three-step” process for geofence warrants, Google provided police with location data corresponding to twelve devices that had entered the area over a single week period. Police focused in on one device, then sought identifying information on that device, leading them to the defendant.

EFF’s brief, filed along with the National Association of Criminal Defense Lawyers and the Minnesota Association of Criminal Defense Lawyers, argues that the geofence warrant acted as a “general warrant” akin to the practices of the British agents in Colonial America who were authorized to go house by house, searching for smuggled goods and evidence of seditious publications. As we write in the brief:

This general warrant allowed law enforcement to go Google account by Google account, searching each user’s private location data for evidence of an alleged crime. The same concerns that animated staunch objection to general warrants in the past are equally relevant to geofence warrants today; these warrants lack individualized suspicion, allow for unbridled officer discretion, and impact the privacy rights of countless innocent individuals. And, like the eighteenth-century writs of assistance that inspired the Fourth Amendment’s drafters, geofence warrants are especially pernicious because they also have the potential to affect fundamental rights including freedom of speech, association, and bodily autonomy. Neither the Fourth Amendment, nor Article 1, Section 10 of the Minnesota Constitution tolerate a warrant of this breadth.

Federal appeals court makes a serious misstep on geofence warrants

Meanwhile, in the leading federal geofence case, United States v. Chatrie, the federal Court of Appeals for the Fourth Circuit issued a seriously misguided opinion earlier this month, holding that a geofence warrant covering a busy area around a bank robbery for two hours wasn’t even a Fourth Amendment search at all—meaning that the police wouldn’t necessarily need a warrant to get access to all of this sensitive location data. The two-judge majority opinion effectively ignores the impact of the U.S. Supreme Court’s landmark Fourth Amendment location data case, Carpenter v. United States, and similarly tries to distinguish the Fourth Circuit’s own important precedent in Leaders of a Beautiful Struggle v. Baltimore Police Department. In the majority’s view, in order to be a search protected by the Fourth Amendment, the government must collect a significant amount of location data over a long period of time, and the two-hour period at issue in Chatrie simply wasn’t long enough to interfere with individuals’ reasonable expectation of privacy in the “whole of their physical movements” the way longer surveillance was in Carpenter and Leaders.

But in a scathing, 70-plus page dissenting opinion, Judge Wynn dismantled these arguments, showing that Carpenter requires courts to look beyond formulaic applications of precedent and examine the actual character of the surveillance at issue. On nearly every metric, geofence warrants have the capacity to reveal associations just as private and intimate as, if not more than, the tracking at issue in Carpenter. What’s more, Judge Wynn’s dissent demonstrated what we’ve argued in geofence cases across the country: These warrants violate the Fourth Amendment because they are not targeted to a particular individual or device, like a typical warrant for digital communications. The only “evidence” supporting a geofence warrant is that a crime occurred in a particular area, and the perpetrator likely carried a cell phone that shared location data with Google. For this reason, they inevitably sweep up potentially hundreds of people who have no connection to the crime under investigation—and could turn each of those people into a suspect.

Chatrie’s lawyers are petitioning the entire Fourth Circuit to review the case, and we’re hopeful that the Chatrie panel opinion will be overturned by the full court en banc. We’ll be filing another amicus brief supporting Chatrie’s petition. Stay tuned for that and for the ruling from the Minnesota Supreme Court in Contreras-Sanchez.

Related Cases: Carpenter v. United States
Andrew Crocker

EFF, International Partners Appeal to EU Delegates to Help Fix Flaws in Draft UN Cybercrime Treaty That Can Undermine EU's Data Protection Framework

With the final negotiating session to approve the UN Cybercrime Treaty just days away, EFF and 21 international civil society organizations today urgently called on delegates from EU states and the European Commission to push back on the draft convention's many flaws, which include an excessively broad scope that will grant intrusive surveillance powers without robust human rights and data protection safeguards.

The time is now to demand changes in the text to narrow the treaty's scope, limit surveillance powers, and spell out data protection principles. Without these fixes, the draft treaty stands to give governments' abusive practices the veneer of international legitimacy and should be rejected.

Letter below:

Urgent Appeal to Address Critical Flaws in the Latest Draft of the UN Cybercrime Convention


Ahead of the reconvened concluding session of the United Nations (UN) Ad Hoc Committee on Cybercrime (AHC) in New York later this month, we, the undersigned organizations, wish to urgently draw your attention to the persistent critical flaws in the latest draft of the UN cybercrime convention (hereinafter Cybercrime Convention or the Convention).

Despite the recent modifications, we continue to share profound concerns regarding the persistent shortcomings of the present draft and we urge member states to not sign the Convention in its current form.

Key concerns and proposals for remedy:
  1. Overly Broad Scope and Legal Uncertainty:
  • The draft Convention’s scope remains excessively broad, including cyber-enabled offenses and other content-related crimes. The proposed title of the Convention and the introduction of the new Article 4 – with its open-ended reference to “offenses established in accordance with other United Nations conventions and protocols” – creates significant legal uncertainty and expands the scope to an indefinite list of possible crimes to be determined only in the future. This ambiguity risks criminalizing legitimate online expression, having a chilling effect detrimental to the rule of law. We continue to recommend narrowing the Convention’s scope to clearly defined, already existing cyber-dependent crimes only, to facilitate its coherent application, ensure legal certainty and foreseeability and minimize potential abuse.
  • The draft Convention in Article 18 lacks clarity concerning the liability of online platforms for offenses committed by their users. The current draft of the Article lacks the requirement of intentional participation in offenses established in accordance with the Convention, thereby also contradicting Article 19 which does require intent. This poses the risk that online intermediaries could be held liable for information disseminated by their users, even without actual knowledge or awareness of the illegal nature of the content (as set out in the EU Digital Services Act), which will incentivise overly broad content moderation efforts by platforms to the detriment of freedom of expression. Furthermore, the wording is much broader (“for participation”) than the Budapest Convention (“committed for the cooperation’s benefit”) and would merit clarification along the lines of paragraph 125 of the Council of Europe Explanatory Report to the Budapest Convention.
  • The proposal in the revised draft resolution to elaborate a draft protocol supplementary to the Convention represents a further push to expand the scope of offenses, risking the creation of a limitlessly expanding, increasingly punitive framework.
  2. Insufficient Protection for Good-Faith Actors:
  • The draft Convention fails to incorporate language sufficient to protect good-faith actors, such as security researchers (irrespective of whether it concerns the authorized testing or protection of an information and communications technology system), whistleblowers, activists, and journalists, from excessive criminalization. It is crucial that the mens rea element in the provisions relating to cyber-dependent crimes includes references to criminal intent and harm caused.
  3. Lack of Specific Human Rights Safeguards:
  • Article 6 fails to include specific human rights safeguards – as proposed by civil society organizations and the UN High Commissioner for Human Rights – to ensure a common understanding among Member States and to facilitate the application of the treaty without unlawful limitation of human rights or fundamental freedoms. These safeguards should: 
    • apply to the entire treaty to ensure that cybercrime efforts provide adequate protection for human rights;
    • be in accordance with the principles of legality, necessity, and proportionality, non-discrimination, and legitimate purpose;
    • incorporate the right to privacy among the human rights specified;
    • address the lack of effective gender mainstreaming to ensure the Convention does not undermine human rights on the basis of gender.
  4. Procedural Measures and Law Enforcement:
  • The Convention should limit the scope of procedural measures to the investigation of the criminal offenses set out in the Convention, in line with point 1 above.
  • In order to facilitate their application and – in light of their intrusiveness – to minimize the potential for abuse, this chapter of the Convention should incorporate the following minimal conditions and safeguards as established under international human rights law. Specifically, the following should be included in Article 24:
    • the principles of legality, necessity, proportionality, non-discrimination and legitimate purpose;
    • prior independent (judicial) authorization of surveillance measures and monitoring throughout their application;
    • adequate notification of the individuals concerned once it no longer jeopardizes investigations;
    • and regular reports, including statistical data on the use of such measures.
  • Articles 28(4), 29, and 30 should be deleted, as they include excessive surveillance measures that open the door to interference with privacy without sufficient safeguards and potentially undermine cybersecurity and encryption.
  1. International Cooperation:
  • The Convention should limit the scope of international cooperation solely to the crimes set out in the Convention itself to avoid misuse (as per point 1 above). Information sharing for law enforcement cooperation should be limited to specific criminal investigations with explicit data protection and human rights safeguards.
  • Article 40 requires “the widest measure of mutual legal assistance” for offenses established in accordance with the Convention as well as any serious offense under the domestic law of the requesting State. Specifically, where no treaty on mutual legal assistance applies between State Parties, paragraphs 8 to 31 establish extensive rules on obligations for mutual legal assistance with any State Party, with generally insufficient human rights safeguards and grounds for refusal. For example, paragraph 22 sets a high bar of “substantial grounds for believing” for the requested State to refuse assistance.
  • When State Parties cannot transfer personal data in compliance with their applicable laws, such as the EU data protection framework, the conflicting obligation in Article 40 to afford the requesting State “the widest measure of mutual legal assistance” may unduly incentivize the transfer of the personal data subject to appropriate conditions under Article 36(1)(b), e.g. through derogations for specific situations in Article 38 of the EU Law Enforcement Directive. Article 36(1)(c) of the Convention also encourages State Parties to establish bilateral and multilateral agreements to facilitate the transfer of personal data, which creates a further risk of undermining the level of data protection guaranteed by EU law.
  • When personal data is transferred in full compliance with the data protection framework of the requested State, Article 36(2) should be strengthened to include clear, precise, unambiguous and effective standards to protect personal data in the requesting State, and to avoid personal data being further processed and transferred to other States in ways that may violate the fundamental right to privacy and data protection.
Conclusion and Call to Action:

Throughout the negotiation process, we have repeatedly pointed out the risks the treaty in its current form poses to human rights and to global cybersecurity. Despite the latest modifications, the revised draft fails to address our concerns and continues to risk making individuals and institutions less safe and more vulnerable to cybercrime, thereby undermining its very purpose.

Failing to narrow the scope of the whole treaty to cyber-dependent crimes, to protect the work of security researchers, human rights defenders and other legitimate actors, to strengthen the human rights safeguards, to limit surveillance powers, and to spell out the data protection principles will give governments’ abusive practices a veneer of international legitimacy. It will also make digital communications more vulnerable to those cybercrimes that the Convention is meant to address. Ultimately, if the draft Convention cannot be fixed, it should be rejected. 

With the UN AHC’s concluding session about to resume, we call on the delegations of the Member States of the European Union and the European Commission’s delegation to redouble their efforts to address the highlighted gaps and ensure that the proposed Cybercrime Convention is narrowly focused in its material scope and not used to undermine human rights or cybersecurity. Absent meaningful changes to address the existing shortcomings, we urge the delegations of EU Member States and the EU Commission to reject the draft Convention and not advance it to the UN General Assembly for adoption.

This statement is supported by the following organizations:

Access Now
Alternatif Bilisim
ARTICLE 19: Global Campaign for Free Expression
Centre for Democracy & Technology Europe
Committee to Protect Journalists
Digitalcourage
Digital Rights Ireland
Digitale Gesellschaft
Electronic Frontier Foundation (EFF)
epicenter.works
European Center for Not-for-Profit Law (ECNL) 
European Digital Rights (EDRi)
Global Partners Digital
International Freedom of Expression Exchange (IFEX)
International Press Institute 
IT-Pol Denmark
KICTANet
Media Policy Institute (Kyrgyzstan)
Privacy International
SHARE Foundation
Vrijschrift.org
World Association of News Publishers (WAN-IFRA)
Zavod Državljan D (Citizen D)

Katitza Rodriguez

Beyond Pride Month: Protecting Digital Identities For LGBTQ+ People

5 days 19 hours ago

The internet provides people space to build communities, shed light on injustices, and acquire vital knowledge that might not otherwise be available. And for LGBTQ+ individuals, digital spaces enable people who are not yet out to engage with their gender and sexual orientation.

In the age of so much passive surveillance, it can feel daunting, if not impossible, to achieve any kind of privacy online. We can’t blame you for feeling this way, but there’s plenty you can do to keep your information private and secure online. What’s most important is that you think through the specific risks you face and take the right steps to protect against them.

The first step is to create a security plan. Following that, consider some of the recommended advice below and see which steps fit best for your specific needs:  

  • Use multiple browsers for different use cases. Compartmentalization of sensitive data is key. Since many websites are finicky about the type of browser you’re using, it’s normal to have multiple browsers installed on one device. Designate one for more sensitive activities and configure the settings to have higher privacy.
  • Use a VPN to bypass local censorship, defeat local surveillance, and connect your devices securely to the network of an organization on the other side of the internet. This is extra helpful for accessing pro-LGBTQ+ content from locations that ban access to this material.
  • If your cell phone allows it, hide sensitive apps away from the home screen. Although these apps will still be available on your phone, this moves them into a special folder so that prying eyes are less likely to find them.
  • Separate your digital identities to mitigate the risk of doxxing, as the personal information exposed about you is often found in public places like “people search” sites and social media.
  • Create a security plan for incidents of harassment and threats of violence. Especially if you are a community organizer, activist, or prominent online advocate, you face an increased risk of targeted harassment. Developing a plan of action in these cases is best done well before the threats become credible. It doesn’t have to be perfect; the point is to refer to something you were able to think up clear-headed when not facing a crisis. 
  • Create a plan for backing up images and videos to avoid losing this content in places where governments slow down, disrupt, or shut down the internet, especially during LGBTQ+ events when network disruptions inhibit quick information sharing.
  • Use two-factor authentication where available to make your online accounts more secure by adding a requirement for additional proof (“factors”) alongside a strong password.
  • Obscure people’s faces when posting pictures of protests online (for example, with Signal’s in-app camera blur feature) to protect their right to privacy and anonymity, particularly during LGBTQ+ events where this might mean staying alive.
  • Harden security settings in Zoom for large video calls and events, and create a process to remove opportunistic or homophobic people who disrupt the call. 
  • Explore protections on your social media accounts, such as switching to private mode, limiting comments, or using tools like blocking users and reporting posts. 
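For the technically curious, the “additional factor” in two-factor authentication is often a time-based one-time password (TOTP, standardized in RFC 6238): your authenticator app and the service share a secret, and both derive a short code from that secret and the current time. The sketch below shows the core derivation using only Python's standard library; the base32 secret is the RFC test value, not one you would use in practice.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, timestamp=None, digits=6, step=30):
    """Derive a time-based one-time password (RFC 6238) from a base32 secret."""
    if timestamp is None:
        timestamp = int(time.time())
    counter = timestamp // step                       # number of 30-second windows elapsed
    key = base64.b32decode(secret_b32, casefold=True)
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret "12345678901234567890", base32-encoded; at T=59 the
# standard test vectors give 94287082 (8 digits), i.e. 287082 for 6 digits.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", timestamp=59))  # → 287082
```

Because the code changes every 30 seconds and never travels with your password, a stolen password alone is not enough to log in, which is why the advice above recommends enabling it wherever it is offered.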

For more information on these topics, visit the following:

Paige Collings

UN Cybercrime Draft Convention Dangerously Expands State Surveillance Powers Without Robust Privacy, Data Protection Safeguards

5 days 20 hours ago

This is the third post in a series highlighting flaws in the proposed UN Cybercrime Convention. Check out Part I, our detailed analysis of the criminalization of security research activities, and Part II, an analysis of the human rights safeguards.

As we near the final negotiating session for the proposed UN Cybercrime Treaty, countries are running out of time to make much-needed improvements to the text. From July 29 to August 9, delegates in New York aim to finalize a convention that could drastically reshape global surveillance laws. The current draft favors extensive surveillance, establishes weak privacy safeguards, and defers most protections against surveillance to national laws—creating a dangerous avenue that could be exploited by countries with varying levels of human rights protections.

The risk is clear: without robust privacy and human rights safeguards in the actual treaty text, we will see increased government overreach, unchecked surveillance, and unauthorized access to sensitive data—leaving individuals vulnerable to violations, abuses, and transnational repression. And not just in one country.  Weaker safeguards in some nations can lead to widespread abuses and privacy erosion because countries are obligated to share the “fruits” of surveillance with each other. This will worsen disparities in human rights protections and create a race to the bottom, turning global cooperation into a tool for authoritarian regimes to investigate crimes that aren’t even crimes in the first place.

Countries that believe in the rule of law must stand up and either defeat the convention or dramatically limit its scope, adhering to non-negotiable red lines as outlined by over 100 NGOs. In an uncommon alliance, civil society and industry agreed earlier this year in a joint letter urging governments to withhold support for the treaty in its current form due to its critical flaws.

Background and Current Status of the UN Cybercrime Convention Negotiations

The UN Ad Hoc Committee overseeing the talks and preparation of a final text is expected to consider a revised but still-flawed text in its entirety, along with the interpretative notes, during the first week of the session, with a focus on all provisions not yet agreed ad referendum.[1] However, in keeping with the principle in multilateral negotiations that “nothing is agreed until everything is agreed,” any provisions of the draft that have already been agreed could potentially be reopened. 

The current text reveals significant disagreements among countries on crucial issues like the convention's scope and human rights protection. Of course, the text could also get worse. Just when we thought Member States had removed many concerning crimes, they could reappear. The Ad Hoc Committee Chair’s General Assembly resolution includes two additional sessions to negotiate not more protections, but the inclusion of more crimes. The resolution calls for “a draft protocol supplementary to the Convention, addressing, inter alia, additional criminal offenses.” Nevertheless, some countries still expect the latest draft to be adopted.

In this third post, we highlight the dangers of the currently proposed UN Cybercrime Convention's broad definition of "electronic data" and inadequate privacy and data protection safeguards. Together, these create the conditions for severe human rights abuses, transnational repression, and inconsistencies across countries in human rights protections.

A Closer Look at the Definition of Electronic Data

The proposed UN Cybercrime Convention significantly expands state surveillance powers under the guise of combating cybercrime. Chapter IV grants extensive government authority to monitor and access digital systems and data, categorizing communications data into subscriber data, traffic data, and content data. But it also makes use of a catch-all category called "electronic data." Article 2(b) defines electronic data as "any representation of facts, information, or concepts in a form suitable for processing in an information and communications technology system, including a program suitable to cause an information and communications technology system to perform a function."

"Electronic data" is eligible for three surveillance powers: preservation orders (Article 25), production orders (Article 27), and search and seizure (Article 28). Unlike the traditional categories of traffic data, subscriber data, and content data, "electronic data" refers to any data stored, processed, or transmitted electronically, regardless of whether it has been communicated to anyone. This includes documents saved on personal computers or notes stored on digital devices. In essence, this means that private, unshared thoughts and information are no longer safe. Authorities can compel the preservation, production, or seizure of any electronic data, potentially turning personal devices into spy vectors regardless of whether the information has ever been communicated.

This is delicate territory, and it deserves careful thought and real protection—many of us now use our devices to keep our most intimate thoughts and ideas, and many of us also use health and fitness apps in ways that we do not intend to share. This includes data stored on devices, such as face scans and smart home device data, if they remain within the device and are not transmitted. Another example could be photos that someone takes on a device but doesn't share with anyone. This category threatens to turn our most private thoughts and actions over to spying governments, both our own and others. 

And the problem is worse when we consider emerging technologies. The sensors in smart devices, AI, and augmented reality glasses can collect a wide array of highly sensitive data. These sensors can record involuntary physiological reactions to stimuli, including eye movements, facial expressions, and heart rate variations. For example, eye-tracking technology can reveal what captures a user's attention and for how long, which can be used to infer interests, intentions, and even emotional states. Similarly, voice analysis can provide insights into a person's mood based on tone and pitch, while body-worn sensors might detect subtle physical responses that users themselves are unaware of, such as changes in heart rate or perspiration levels.

These types of data are not typically communicated through traditional communication channels like emails or phone calls (which would be categorized as content or traffic data). Instead, they are collected, stored, and processed locally on the device or within the system, fitting the broad definition of "electronic data" as outlined in the draft convention.

Such data has likely been harder to obtain because it may not have been communicated to or possessed by any communications intermediary or system. So it’s an example of how the broad term "electronic data" increases the kinds (and sensitivity) of information about us that can be targeted by law enforcement through production orders or search-and-seizure powers. These emerging technology uses are their own category, but they are most like "content" in communications surveillance, which usually receives high protection. “Electronic data” must receive the same protection as the “content” of communications, and be subject to ironclad data protection safeguards, which the proposed treaty fails to provide, as we explain below.

The Specific Safeguard Problems

Like other powers in the draft convention, the broad powers related to "electronic data" don't come with specific limits to protect fair trial rights. 

Missing Safeguards

For example, many countries have various kinds of information that are protected by a legal “privilege” against surveillance: attorney-client privilege, spousal privilege, priest-penitent privilege, doctor-patient privilege, and many kinds of protections for confidential business information and trade secrets. Many countries also give additional protections to journalists and their sources. These categories, and more, carry varying degrees of extra requirements before law enforcement may access them using production orders or search-and-seizure powers, as well as various protections after the fact, such as preventing their use in prosecutions or civil actions. 

Similarly, the convention lacks clear safeguards to prevent authorities from compelling individuals to provide evidence against themselves. These omissions raise significant red flags about the potential for abuse and the erosion of fundamental rights when a treaty text involves so many countries with a high disparity of human rights protections.

The lack of specific protections for criminal defense is especially troubling. In many legal systems, defense teams have certain protections to ensure they can effectively represent their clients, including access to exculpatory evidence and the protection of defense strategies from surveillance. However, the draft convention does not explicitly protect these rights, which both misses the chance to require all countries to provide these minimal protections and potentially further undermines the fairness of criminal proceedings and the ability of suspects to mount an effective defense in countries that either don’t provide those protections or where they are not solid and clear.

Even the State “Safeguards” in Article 24 are Grossly Insufficient

Even where the convention’s text discusses “safeguards,” the convention doesn’t actually protect people. The “safeguard” section, Article 24, fails in several obvious ways: 

Dependence on Domestic Law: Article 24(1) makes safeguards contingent on domestic law, which can vary significantly between countries. This can result in inadequate protections in states where domestic laws do not meet high human rights standards. By deferring safeguards to national law, Article 24 weakens these protections, as national laws may not always provide the necessary safeguards. It also means that the treaty doesn’t raise the bar against invasive surveillance, but rather confirms even the lowest protections.

A safeguard that bends to domestic law isn't a safeguard at all if it leaves the door open for abuses and inconsistencies, undermining the protection it's supposed to offer.

Discretionary Safeguards: Article 24(2) uses vague terms like “as appropriate,” allowing states to interpret and apply safeguards selectively. This means that while the surveillance powers in the convention are mandatory, the safeguards are left to each state’s discretion. Countries decide what is “appropriate” for each surveillance power, leading to inconsistent protections and potential weakening of overall safeguards.

Lack of Mandatory Requirements: Essential protections such as prior judicial authorization, transparency, user notification, and the principle of legality, necessity and non-discrimination are not explicitly mandated. Without these mandatory requirements, there is a higher risk of misuse and abuse of surveillance powers.

No Specific Data Protection Principles: As we noted above, the proposed treaty does not include specific safeguards for highly sensitive data, such as biometric or privileged data. This oversight leaves such information vulnerable to misuse.

Inconsistent Application: The discretionary nature of the safeguards can lead to their inconsistent application, exposing vulnerable populations to potential rights violations. Countries might decide that certain safeguards are unnecessary for specific surveillance methods, which the treaty allows, increasing the risk of abuse.

Finally, Article 23(4) of Chapter IV authorizes the application of Article 24 safeguards to specific powers within the international cooperation chapter (Chapter V). However, significant powers in Chapter V, such as those related to law enforcement cooperation (Article 47) and the 24/7 network (Article 41), do not specifically cite the corresponding Chapter IV powers and so may not be covered by Article 24 safeguards.

Search and Seizure of Stored Electronic Data

The proposed UN Cybercrime Convention significantly expands government surveillance powers, particularly through Article 28, which deals with the search and seizure of electronic data. This provision grants authorities sweeping abilities to search and seize data stored on any computer system, including personal devices, without clear, mandatory privacy and data protection safeguards. This poses a serious threat to privacy and data protection.

Article 28(1) allows authorities to search and seize any “electronic data” in an information and communications technology (ICT) system or data storage medium. It lacks specific restrictions, leaving much to the discretion of national laws. This could lead to significant privacy violations as authorities might access all files and data on a suspect’s personal computer, mobile device, or cloud storage account—all without clear limits on what may be targeted or under what conditions.

Article 28(2) permits authorities to search additional systems if they believe the sought data is accessible from the initially searched system. While judicial authorization should be a requirement to assess the necessity and proportionality of such searches, Article 24 only mandates “appropriate conditions and safeguards” without explicit judicial authorization. In contrast, U.S. law under the Fourth Amendment requires search warrants to specify the place to be searched and the items to be seized—preventing unreasonable searches and seizures.

Article 28(3) empowers authorities to seize or secure electronic data, including making and retaining copies, maintaining its integrity, and rendering it inaccessible or removing it from the system. For publicly accessible data, this takedown process could infringe on free expression rights and should be explicitly subject to free expression standards to prevent abuse.

Article 28(4) requires countries to have laws that allow authorities to compel anyone who knows how a particular computer or device works to provide the information necessary to access it. This could include asking a tech expert or an engineer to help unlock a device or explain its security features. This is concerning because it might force people to assist law enforcement in ways that compromise security or reveal confidential information. As written, it could be interpreted to authorize disproportionate orders, such as forcing an engineer to disclose an unfixed security vulnerability to the government, or compelling the disclosure of encryption keys, including signing keys, on the grounds that they are “the necessary information to enable” some form of surveillance.

Privacy International and EFF strongly recommend that Article 28(4) be removed in its entirety. Instead, it has been agreed ad referendum. At a minimum, the drafters must include material in the explanatory memorandum that accompanies the draft Convention to clarify limits, so that technologists cannot be forced to reveal confidential information or do work on behalf of law enforcement against their will. Once again, it would also be appropriate to set clear legal standards for how law enforcement can be authorized to seize and search people’s private devices.

In general, production and search and seizure orders might be used to target tech companies' secrets, and require uncompensated labor by technologists and tech companies, not because they are evidence of crime but because they can be used to enhance law enforcement's technical capabilities.

Domestic Expedited Preservation Orders of Electronic Data

Article 25 on preservation orders, already agreed ad referendum, is especially problematic. It’s very broad, and will result in individuals’ data being preserved and available for use in prosecutions far more than needed. It also fails to include necessary safeguards to avoid abuse of power. By allowing law enforcement to demand preservation with no factual justification, it risks spreading familiar deficiencies in U.S. law worldwide.

Article 25 requires each country to create laws or other measures that let authorities quickly preserve specific electronic data, particularly when there are grounds to believe that such data is at risk of being lost or altered.

Article 25(2) ensures that when preservation orders are issued, the person or entity in possession of the data must keep it for up to 90 days, giving authorities enough time to obtain the data through legal channels, while allowing this period to be renewed. There is no specified limit on the number of times the order can be renewed, so it can potentially be reimposed indefinitely.

Preservation orders should be issued only when they are absolutely necessary, but Article 24 does not mention the principle of necessity and lacks requirements for individual notice, explicit grounds, and statistical transparency.

The article must limit the number of times preservation orders may be renewed to prevent indefinite data preservation requirements. Each preservation order renewal must require a demonstration of continued necessity and factual grounds justifying continued preservation.

Article 25(3) also compels states to adopt laws that enable gag orders to accompany preservation orders, prohibiting service providers or individuals from informing users that their data was subject to such an order. The duration of such a gag order is left up to domestic legislation.

As with all other gag orders, the confidentiality obligation should be subject to time limits and only be available to the extent that disclosure would demonstrably threaten an investigation or other vital interest. Further, individuals whose data was preserved should be notified when it is safe to do so without jeopardizing an investigation. Independent oversight bodies must oversee the application of preservation orders.

Indeed, academics such as prominent law professor and former U.S. Department of Justice lawyer Orin S. Kerr have criticized similar U.S. data preservation practices under 18 U.S.C. § 2703(f) for allowing law enforcement agencies to compel internet service providers to retain all contents of an individual's online account without their knowledge, any preliminary suspicion, or judicial oversight. This approach, intended as a temporary measure to secure data until further legal authorization is obtained, lacks the foundational legal scrutiny typically required for searches and seizures under the Fourth Amendment, such as probable cause or reasonable suspicion.

The lack of explicit mandatory safeguards raises similar concerns about Article 25 of the proposed UN convention. Kerr argues that these U.S. practices constitute a "seizure" under the Fourth Amendment, indicating that such actions should be justified by probable cause or, at the very least, reasonable suspicion—criteria conspicuously absent in the current draft of the UN convention.

By drawing on Kerr's analysis, we see a clear warning: without robust safeguards—including an explicit grounds requirement, prior judicial authorization, explicit notification to users, and transparency—preservation orders of electronic data proposed under the draft UN Cybercrime Convention risk replicating the problematic practices of the U.S. on a global scale.

Production Orders of Electronic Data

Article 27(a)’s treatment of “electronic data” in production orders, in light of the draft convention’s broad definition of the term, is especially problematic.

This article, which has already been agreed ad referendum, allows production orders to be issued to custodians of electronic data, requiring them to turn over copies of that data. While demanding customer records from a company is a traditional governmental power, this power is dramatically increased in the draft convention.

As we explain above, the extremely broad definition of electronic data, which is often sensitive in nature, raises new and significant privacy and data protection concerns, as it permits authorities to access potentially sensitive information without immediate oversight and prior judicial authorization. The convention needs instead to require prior judicial authorization before such information can be demanded from the companies that hold it. 

This ensures that an impartial authority assesses the necessity and proportionality of the data request before it is executed. Without mandatory data protection safeguards for the processing of personal data, law enforcement agencies might collect and use personal data without adequate restrictions, thereby risking the exposure and misuse of personal information.

The text of the convention fails to include these essential data protection safeguards. To protect human rights, data should be processed lawfully, fairly, and in a transparent manner in relation to the data subject. Data should be collected for specified, explicit, and legitimate purposes and not further processed in a manner that is incompatible with those purposes. 

Data collected should be adequate, relevant, and limited to what is necessary to the purposes for which they are processed. Authorities should request only the data that is essential for the investigation. Production orders should clearly state the purpose for which the data is being requested. Data should be kept in a format that permits identification of data subjects for no longer than is necessary for the purposes for which the data is processed. None of these principles are present in Article 27(a) and they must be. 

International Cooperation and Electronic Data

The draft UN Cybercrime Convention includes significant provisions for international cooperation, extending the reach of domestic surveillance powers across borders, by one state on behalf of another state. Such powers, if not properly safeguarded, pose substantial risks to privacy and data protection. 

  • Article 42 (1) (“International cooperation for the purpose of expedited preservation of stored electronic data”) allows one state to ask another to obtain preservation of “electronic data” under the domestic power outlined in Article 25. 
  • Article 44 (1) (“Mutual legal assistance in accessing stored electronic data”) allows one state to ask another “to search or similarly access, seize or similarly secure, and disclose electronic data,” presumably using powers similar to those under Article 28, although that article is not referenced in Article 44. This specific provision, which has not yet been agreed ad referendum, enables comprehensive international cooperation in accessing stored electronic data. For instance, if Country A needs to access emails stored in Country B for an ongoing investigation, it can request Country B to search and provide the necessary data.
Countries Must Protect Human Rights or Reject the Draft Treaty

The current draft of the UN Cybercrime Convention is fundamentally flawed. It dangerously expands surveillance powers without robust checks and balances, undermines human rights, and poses significant risks to marginalized communities. The broad and vague definitions of "electronic data," coupled with weak privacy and data protection safeguards, exacerbate these concerns.

Traditional domestic surveillance powers are particularly concerning because they underpin international surveillance cooperation. This means that one country can easily comply with the requests of another, which, if not adequately safeguarded, can lead to widespread government overreach and human rights abuses. 

Without stringent data protection principles and robust privacy safeguards, these powers can be misused, threatening human rights defenders, immigrants, refugees, and journalists. We urgently call on all countries committed to the rule of law, social justice, and human rights to unite against this dangerous draft. Whether large or small, developed or developing, every nation has a stake in ensuring that privacy and data protection are not sacrificed. 

Significant amendments must be made to ensure these surveillance powers are exercised responsibly and protect privacy and data protection rights. If these essential changes are not made, countries must reject the proposed convention to prevent it from becoming a tool for human rights violations or transnational repression.

[1] In the context of treaty negotiations, "ad referendum" means that an agreement has been reached by the negotiators, but it is subject to the final approval or ratification by their respective authorities or governments. It signifies that the negotiators have agreed on the text, but the agreement is not yet legally binding until it has been formally accepted by all parties involved.

Katitza Rodriguez

Courts Should Have Jurisdiction over Foreign Companies Collecting Data on Local Residents, EFF Tells Appeals Court

6 days 16 hours ago

This post was written by EFF legal intern Danya Hajjaji. 

Corporations should not be able to collect data from a state’s residents while evading the jurisdiction of that state’s courts, EFF and the UC Berkeley Center for Consumer Law and Economic Justice explained in a friend-of-the-court brief to the Ninth Circuit Court of Appeals. 

The case, Briskin v. Shopify, stems from a California resident’s privacy claims against Shopify, Inc. and its subsidiaries, out-of-state companies that process payments for third-party ecommerce companies (collectively “Shopify”). The plaintiff alleged that Shopify secretly collected data on him and other California consumers while they purchased apparel from an online California-based retailer. Shopify also allegedly tracked the users’ browsing activities across all ecommerce sites that used Shopify’s services. Shopify allegedly compiled that information into comprehensive user profiles, complete with financial “risk scores” that companies could use to block users’ future purchases.  

The Ninth Circuit initially dismissed the lawsuit for lack of personal jurisdiction and ruled that Shopify, an out-of-state defendant, did not have enough contacts with California to be fairly sued in California. 

Personal jurisdiction is designed to protect defendants' due process rights by ensuring that they cannot be haled into court in jurisdictions that they have little connection to. In the internet context, the Ninth Circuit has previously held that operating a website, plus evidence that the defendant did “something more” to target a jurisdiction, is sufficient for personal jurisdiction.  

The Ninth Circuit originally dismissed Briskin on the grounds that the plaintiff failed to show the defendant did “something more.” It held that violating all users’ privacy was not enough; Shopify would have needed to do something to target Californians in particular.  

The Ninth Circuit granted rehearing en banc, and requested additional briefing on the personal jurisdiction rule that should govern online conduct. 

EFF and the Center for Consumer Law and Economic Justice argued that courts in California can fairly hold out-of-state corporations accountable for privacy violations that involve collecting vast amounts of personal data directly from consumers inside California and using that data to build profiles based in part on their location. To obtain personal data from California consumers, corporations must usually form additional contacts with California as well—including signing contracts within the state and creating California-specific data policies. In our view, Shopify is subject to personal jurisdiction in California because Shopify’s allegedly extensive data collection operations targeted Californians. That it also allegedly collected information from users in other states should not prevent California plaintiffs from having their day in court in their home state.   

In helping the Ninth Circuit develop a sensible test for personal jurisdiction in data privacy cases, EFF hopes to empower plaintiffs to preserve their online privacy rights in their forum of choice without sacrificing existing jurisdictional protections for internet publishers.  

EFF has long worked to ensure that consumer data privacy laws balance rights to privacy and free expression. We hope the Ninth Circuit will adopt our guidelines in structuring a privacy-specific personal jurisdiction rule that is commonsense and constitutionally sound. 

Tori Noble

Victory! EFF Supporters Beat USPTO Proposal To Wreck Patent Reviews

6 days 19 hours ago

The U.S. patent system is broken, particularly when it comes to software patents. At EFF, we’ve been fighting hard for changes that make the system more sensible. Last month, we got a big victory when we defeated a set of rules that would have mangled one of the U.S. Patent and Trademark Office (USPTO)’s most effective systems for kicking out bad patents. 

In 2012, recognizing the entrenched problem of a patent office that spewed out tens of thousands of ridiculous patents every year, Congress created a new system to review patents called “inter partes reviews,” or IPRs. While far from perfect, IPRs have resulted in cancellation of thousands of patent claims that never should have been issued in the first place. 

At EFF, we used the IPR process to crowd-fund a challenge to the Personal Audio “podcasting patent” that tried to extract patent royalty payments from U.S. podcasters. We won that proceeding and our victory was confirmed on appeal.

It’s no surprise that big patent owners and patent trolls have been trying to wreck the IPR system for years. They’ve tried, and failed, to get federal courts to dismantle IPRs. They’ve tried, and failed, to push legislation that would break the IPR system. And last year, they found a new way to attack IPRs—by convincing the USPTO to propose a set of rules that would have sharply limited the public’s right to challenge bad patents. 

That’s when EFF and our supporters knew we had to fight back. Nearly one thousand EFF supporters filed comments with the USPTO using our suggested language, and hundreds more of you wrote your own comments. 

Today, we say thank you to everyone who took the time to speak out. Your voice does matter. In fact, the USPTO withdrew all three of the terrible proposals that we focused on. 

Our Victory to Keep Public Access To Patent Challenges 

The original rules would have greatly expanded what are called “discretionary denials,” enabling judges at the USPTO to throw out an IPR petition without adequately considering its merits. While we would like to see even fewer discretionary denials, defeating the proposed limitations on patent challenges is a significant win.

First, the original rules would have stopped “certain for-profit entities” from using the IPR system altogether. While EFF is a non-profit, for-profit companies can and should be allowed to play a role in getting wrongly granted patents out of the system. Membership-based patent defense organizations like RPX or Unified Patents can allow small companies to band together and limit their costs while defending themselves against invalid patents. And non-profits like the Linux Foundation, who joined us in fighting against these wrongheaded proposed rules, can work together with professional patent defense groups to file more IPRs. 

EFF and our supporters wrote in opposition to this rule change—and it’s out. 

Second, the original rules would have exempted “micro and small entities” from patent reviews altogether. This exemption would have applied to many of the types of companies we call “patent trolls”—that is, companies whose business is simply demanding license fees for patents, rather than offering actual products or services. Those companies, specially designed to threaten litigation, would have easily qualified as “small entities” and avoided having their patents challenged. Patent trolls, which bully real small companies and software developers into paying unwarranted settlement fees, aren’t the kind of “small business” that should be getting special exemptions from patent review. 

EFF and our supporters opposed this exemption, and it’s out of the final rulemaking. 

Third, last year’s proposal would have allowed for IPR petitions to be kicked out if they had a “parallel proceeding”—in other words, a similar patent dispute—in district court. This was a wholly improper reason to not consider IPRs, especially since district court evidence rules are different than those in place for an IPR. 

EFF and our supporters opposed these new limitations, and they’re out. 

While the new rules aren’t perfect, they’re greatly improved. We would still prefer more IPRs rather than fewer, and don’t want to see IPRs that otherwise meet the rules get kicked out of the review process. But even there, the revised rules contain big improvements. For instance, they allow for separate briefing of discretionary denials, so that people and companies seeking IPR review can keep their focus on the merits of their petition. 


Joe Mullin

Modern Cars Can Be Tracking Nightmares. Abuse Survivors Need Real Solutions.

6 days 21 hours ago

The amount of data modern cars collect is a serious privacy concern for all of us. But in an abusive situation, tracking can be a nightmare.

As a New York Times article outlined, modern cars are often connected to apps that show a user a wide range of information about a vehicle, including real-time location data, footage from cameras showing the inside and outside of the car, and sometimes the ability to control the vehicle remotely from their mobile device. These features can be useful, but abusers often turn these conveniences into tools to harass and control their victims—or even to locate or spy on them once they've fled their abusers.

California is currently considering three bills intended to help domestic abuse survivors endangered by vehicle tracking. Unfortunately, despite the concerns of advocates who work directly on tech-enabled abuse, these proposals are moving in the wrong direction. These bills intended to protect survivors are instead being amended in ways that open them to additional risks. We call on the legislature to return to previous language that truly helps people disable location-tracking in their vehicles without giving abusers new tools.

We know abusers are happy to lie and exploit whatever they can to further their abuse, including laws and services meant to help survivors.

Each of the bills seeks to address tech-enabled abuse in different ways. The first, S.B. 1394 by CA State Sen. David Min (Irvine), earned EFF's support when it was introduced. This bill was drafted with considerable input from experts in tech-enabled abuse at The University of California, Irvine. We feel its language best serves the needs of survivors in a wide range of scenarios without creating new avenues of stalking and harassment for the abuser to exploit. As introduced, it would require car manufacturers to respond to a survivor's request to cut an abuser's remote access to a car's connected services within two business days. To make a request, a survivor must prove the vehicle is theirs to use, even if their name is not necessarily on the loan or title. They could do this through documentation such as a court order, police report, or marriage separation agreement. S.B. 1000 by CA State Sen. Angelique Ashby (Sacramento) would have applied a similar framework to allow survivors to make requests to cut remote access to vehicles and other smart devices.

In contrast, A.B. 3139, introduced by Asm. Dr. Akilah Weber (La Mesa), takes a different approach. Rather than have people submit requests first and cut access later, this bill would require car manufacturers to terminate access immediately, requiring follow-up documentation only within seven days of the request. Unfortunately, both S.B. 1394 and S.B. 1000 have now been amended to adopt this "act first, ask questions later" framework.

The changes to these bills are intended to make it easier for people in desperate situations to get away quickly. Yet, for most people, we believe the risks of A.B. 3139's approach outweigh the benefits. EFF's experience working with victims of tech-enabled abuse instead suggests that these changes are bad for survivors—something we've already said in official comments to the Federal Communications Commission.

Why This Doesn't Work for Survivors

EFF has two main concerns with the approach from A.B. 3139. First, the bill sets a low bar for verifying an abusive situation, including simply allowing a statement from the person filing the request. Second, the bill requires a way to turn tracking off immediately without any verification. Why are these problems?

Imagine you have recently left an abusive relationship. You own your car, but your former partner decides to seek revenge for your leaving and calls the car manufacturer to file a false report that removes your access to your car. In cases where both the survivor and abuser have access to the car's account—a common scenario—the abuser could even kick the survivor off a car app account, and then use the app to harass and stalk the survivor remotely. Under A.B. 3139's language, it would be easy for an abuser to make a false statement—under penalty of perjury—to "verify" that the survivor is the perpetrator of abuse. Depending on a car app’s capabilities, that false claim could mean that, for up to a week, a survivor may be unable to start or access their own vehicle. We know abusers are happy to lie and exploit whatever they can to further their abuse, including laws and services meant to help survivors. It will be trivial for an abuser—who is already committing a crime and unlikely to fear a perjury charge—to file a false request to cut someone off from their car.

It's true that other domestic abuse laws EFF has worked on allow for this kind of self-attestation. This includes the Safe Connections Act, which allows survivors to peel their phone more easily off of a family plan. However, this is the wrong approach for vehicles. Access to a phone plan is significantly different from access to a car, particularly when remote services allow you to control a vehicle. While inconvenient and expensive, it is much easier to replace a phone or a phone plan than a car if your abuser locks you out. The same solution doesn't fit both problems. You need proof to make the decision to cut access to something as crucial to someone's life as their vehicle.

Second, the language added to these bills requires that it be possible for anyone in a car to immediately disconnect it from connected services. Specifically, A.B. 3139 says that the method to disable tracking must be "prominently located and easy to use and shall not require access to a remote, online application." That means it must essentially be at the push of a button. That raises serious potential for misuse. Any person in the car may intentionally or accidentally disable tracking, whether they're a kid pushing buttons for fun, a rideshare passenger, or a car thief. Even more troubling, an abuser could cut off the app’s ability to track a car and kidnap a survivor or their children. If past is prologue, in many cases, abusers will twist this "protection" to their own ends.

The combination of immediate action and self-attestation is helpful for survivors in one particular scenario—a survivor who has no documentation of their abuse, who needs to get away immediately in a car owned by their abuser. But it opens up many new avenues of stalking, harassment, and other forms of abuse for survivors. EFF has loudly called for bills that empower abuse survivors to take control away from their abusers, particularly by being able to disable tracking—but this is not the right way to do it. We urge the legislature to pass bills with the processes originally outlined in S.B. 1394 and S.B. 1000 and provide survivors with real solutions to address unwanted tracking.

Hayley Tsukayama

Detroit Takes Important Step in Curbing the Harms of Face Recognition Technology

1 week ago

In a first-of-its-kind agreement, the Detroit Police Department recently agreed to adopt strict limits on its officers’ use of face recognition technology as part of a settlement in a lawsuit brought by a victim of this faulty technology.  

Robert Williams, a Black resident of a Detroit suburb, filed suit against the Detroit Police Department after officers arrested him at his home in front of his wife, daughters, and neighbors for a crime he did not commit. After a shoplifting incident at a watch store, police used a blurry still taken from surveillance footage and ran it through face recognition technology—which incorrectly identified Williams as the perpetrator. 

Under the terms of the agreement, the Detroit Police can no longer substitute face recognition technology (FRT) for reliable policework. Simply put: Face recognition matches can no longer be the only evidence police use to justify an arrest. 

FRT creates an “imprint” from an image of a face, then compares that imprint to other images—often a law enforcement database made up of mugshots, driver’s license images, or even images scraped from the internet. The technology itself is fraught with issues, including that it is highly inaccurate for certain demographics, particularly Black men and women. The Detroit Police Department makes face recognition queries using DataWorks Plus software to the Statewide Network of Agency Photos (SNAP), a database operated by the Michigan State Police. According to data obtained by EFF through a public records request, roughly 580 local, state, and federal agencies and their sub-divisions have desktop access to SNAP.  

Among other achievements, the settlement agreement’s new rules bar arrests based solely on face recognition results, or the results of the ensuing photo lineup—a common police procedure in which a witness is asked to identify the perpetrator from a “lineup” of images—conducted immediately after FRT identifies a suspect. This dangerous shortcut has meant that, based on partial matches combined with other unreliable evidence such as eyewitness identifications, police have ended up arresting people who clearly could not have committed the crime. Such was the case with Robert Williams, who had been out of the state on the day the crime occurred. Because face recognition finds people who look similar to the suspect, putting that person directly into a police lineup will likely result in the witness picking the person who looks most like the suspect they saw—all but ensuring the person falsely accused by technology will receive the bulk of the suspicion.  

Under Detroit’s new rules, if police use face recognition technology at all during any investigation, they must record detailed information about their use of the technology, such as photo quality and the number of photos of the same suspect not identified by FRT. If charges are ever filed as a result of the investigation, prosecutors and defense attorneys will have access to the information about any uses of FRT in the case.  

The Detroit Police Department’s new face recognition rules are among the strictest restrictions adopted anywhere in the country—short of the full bans on the technology passed by San Francisco, Boston, and at least 15 other municipalities. Detroit’s new regulations are an important step in the right direction, but only a full ban on government use of face recognition can fully protect against this technology’s many dangers. FRT jeopardizes every person’s right to protest government misconduct free from retribution and reprisals for exercising their right to free speech. Giving police the ability to fly a drone over a protest and identify every protester undermines every person’s right to freely associate with dissenting groups or criticize government officials without fear of retaliation from those in power. 

Moreover, FRT undermines racial justice and threatens civil rights. Study after study after study has found that these tools cannot reliably identify people of color.  According to Detroit’s own data, roughly 97 percent of queries in 2023 involved Black suspects; when asked during a public meeting in 2020, then-police Chief James Craig estimated the technology would misidentify people 96 percent of the time. 

Williams was one of the first victims of this technology—but he was by no means the last. In Detroit alone, police wrongfully arrested at least two other people based on erroneous face recognition matches: Porcha Woodruff, a pregnant Black woman, and Michael Oliver, a Black man who lost his job due to his arrest.  

Many other innocent people have been arrested elsewhere, and in some cases, have served jail time as a result. The consequences can be life-altering; one man was sexually assaulted while incarcerated due to an FRT misidentification. Police and the government have proven time and time again they cannot be trusted to use this technology responsibly. Although many departments already acknowledge that FRT results alone cannot justify an arrest, that is cold comfort to people like Williams, who are still being harmed despite the reassurances police give the public.  

It is time to take FRT out of law enforcement’s hands altogether. 

Tori Noble

EFF to FCC: SS7 is Vulnerable, and Telecoms Must Acknowledge That

1 week ago

It’s unlikely you’ve heard of Signaling System 7 (SS7), but every phone network in the world is connected to it, and if you have ever roamed networks internationally or sent an SMS message overseas you have used it. SS7 is a set of telecommunication protocols that cellular network operators use to exchange information and route phone calls, text messages, and other communications between each other on 2G and 3G networks (4G and 5G networks instead use the Diameter signaling system). When a person travels outside their home network's coverage area (roaming), and uses their phone on a 2G or 3G network, SS7 plays a crucial role in registering the phone to the network and routing their communications to the right destination. On May 28, 2024, EFF submitted comments to the Federal Communications Commission demanding an investigation of SS7 and Diameter security and transparency into how the telecoms handle the security of these networks.

What Is SS7, and Why Does It Matter?

When you roam onto different 2G or 3G networks, or send an SMS message internationally, the SS7 system works behind the scenes to seamlessly route your calls and SMS messages. SS7 identifies the country code, locates the specific cell tower that your phone is using, and facilitates the connection. This intricate process involves multiple networks and enables you to communicate across borders, making international roaming and text messages possible. But even if you don’t roam internationally, send SMS messages, or use legacy 2G/3G networks, you may still be vulnerable to SS7 attacks, because most telecommunications providers are still connected to it to support international roaming, even if they have turned off their own 2G and 3G networks. SS7 was not built with any security protocols, such as authentication or encryption, and has been exploited by governments, cyber mercenaries, and criminals to intercept and read SMS messages. As a result, many network operators have put firewalls in place to protect users. However, there are no mandates or security requirements placed on the operators, so there is no mechanism to ensure that the public is safe.

Many companies treat your ownership of your phone number as a primary security authentication mechanism, or a secondary one through SMS two-factor authentication. An attacker could use SS7 attacks to intercept text messages and then gain access to your bank account, medical records, and other important accounts. Nefarious actors can also use SS7 attacks to track a target’s precise location anywhere in the world.

These vulnerabilities make SS7 a public safety issue. EFF strongly believes that it is in the best interest of the public for telecommunications companies to secure their SS7 networks and publicly audit them, while also moving to more secure technologies as soon as possible.

Why SS7 Isn’t Secure

SS7 was standardized in the late 1970s and early 1980s, at a time when communication relied primarily on landline phones. During that era, the telecommunications industry was predominantly controlled by corporate monopolies. Because the large telecoms all trusted each other, there was no incentive to focus on the security of the network. SS7 was developed when modern encryption and authentication methods were not in widespread use. 

In the 1990s and 2000s, new protocols were introduced by the European Telecommunication Standards Institute (ETSI) and the telecom standards bodies to support mobile phones with the services they needed, such as roaming, SMS, and data. However, security was still not a concern at the time. As a result, SS7 presents significant cybersecurity vulnerabilities that demand our attention. 

SS7 can be accessed through telecommunications companies and roaming hubs. To access SS7, companies (or nefarious actors) must have a “Global Title,” which is a phone number that uniquely identifies a piece of equipment on the SS7 network. Each phone company that runs its own network has multiple global titles. Some telecommunications companies lease their global titles, which is how malicious actors gain access to the SS7 network. 

Concerns about potential SS7 exploits are primarily discussed within the mobile security industry and are not given much attention in broader discussions about communication security. Currently, there is no way for end users to detect SS7 exploitation. The best way to safeguard against SS7 exploitation is for telecoms to use firewalls and other security measures. 

With the rapid expansion of the mobile industry, there is no transparency around any efforts to secure our communications. The fact that any government can potentially access data through SS7 without encountering significant security obstacles poses a significant risk to dissenting voices, particularly under authoritarian regimes.

Some people in the telecommunications industry argue that SS7 exploits are mainly a concern for 2G and 3G networks. It’s true that 4G and 5G don’t use SS7—they use the Diameter protocol—but Diameter has many of the same security concerns as SS7, such as location tracking. What’s more, as soon as you roam onto a 3G or 2G network, or if you are communicating with someone on an older network, your communications once again go over SS7. 

FCC Requests Comments on SS7 Security 

Recently, the FCC issued a request for comments on the security of SS7 and Diameter networks within the U.S. The FCC asked whether the security efforts of telecoms were working, and whether auditing or intervention was needed. The three large US telecoms (Verizon, T-Mobile, and AT&T) and their industry lobbying group (CTIA) all responded with comments stating that their SS7 and Diameter firewalls were working perfectly, and that there was no need to audit the phone companies’ security measures or force them to report specific success rates to the government. However, one dissenting comment came from Cybersecurity and Infrastructure Security Agency (CISA) employee Kevin Briggs. 

We found the comments by Briggs, CISA’s top expert on telecom network vulnerabilities, to be concerning and compelling. Briggs believes that there have been successful, unauthorized attempts to access network user location data from U.S. providers using SS7 and Diameter exploits. He provides two examples of reports involving specific persons that he had seen: the tracking of a person in the United States using Provide Subscriber Information (PSI) exploitation (March 2022); and the tracking of three subscribers in the United States using Send Routing Information (SRI) packets (April 2022).  

This is consistent with reporting by Gary Miller and Citizen Lab in 2023, where they state: “we also observed numerous requests sent from networks in Saudi Arabia to geolocate the phones of Saudi users as they were traveling in the United States. Millions of these requests targeting the international mobile subscriber identity (IMSI), a number that identifies a unique user on a mobile network, were sent over several months, and several times per hour on a daily basis to each individual user.”

Briggs added that he had seen information describing how in May 2022, several thousand suspicious SS7 messages were detected, which could have masked a range of attacks—and that he had additional information on the above exploits as well as others that go beyond location tracking, such as the monitoring of message content, the delivery of spyware to targeted devices, and text-message-based election interference.

As a senior CISA official focused on telecom cybersecurity, Briggs has access to information that the general public is not aware of. Therefore his comments should be taken seriously, particularly in light of the concerns expressed by Senator Wyden, whose letter to the President referenced a non-public, independent expert report commissioned by CISA and alleged that CISA was “actively hiding information about [SS7 threats] from the American people.” The FCC should investigate these claims, and keep Congress and the public informed about exploitable weaknesses in the telecommunication networks we all use.

These warnings should be taken seriously and their claims should be investigated. The telecoms should submit the results of their audits to the FCC and CISA so that the public can have some reassurance that their security measures are working as they say they are. If the telecoms’ security measures aren’t enough, as Briggs and Miller suggest, then the FCC must step in and secure our national telecommunications network. 

Cooper Quintin

Platforms Have First Amendment Right to Curate Speech, As We’ve Long Argued, Supreme Court Said, But Sends Laws Back to Lower Court To Decide If That Applies To Other Functions Like Messaging

1 week 2 days ago

Social media platforms, at least in their most common form, have a First Amendment right to curate the third-party speech they select for and recommend to their users, and the government’s ability to dictate those processes is extremely limited, the U.S. Supreme Court stated in its landmark decision in Moody v. NetChoice and NetChoice v. Paxton, which were decided together. 

The cases dealt with Florida and Texas laws that each limited the ability of online services to block, deamplify, or otherwise negatively moderate certain user speech.  

Yet the Supreme Court did not strike down either law—instead it sent both cases back to the lower courts to determine whether each law could be wholly invalidated rather than challenged only with respect to specific applications of each law to specific functions. 

The Supreme Court also made it clear that laws that do not target the editorial process, such as competition laws, would not be subject to the same rigorous First Amendment standards, a position EFF has consistently urged.

This is an important ruling and one that EFF has been arguing for in courts since 2018. We’ve already published our high-level reaction to the decision and written about how it bears on pending social media regulations. This post is a more thorough, and much longer, analysis of the opinion and its implications for future lawsuits. 

A First Amendment Right to Moderate Social Media Content 

The most important question before the Supreme Court, and the one that will have the strongest ramifications beyond the specific laws being challenged here, is whether social media platforms have their own First Amendment rights, independent of their users’ rights, to decide what third-party content to present in their users’ feeds, recommend, amplify, deamplify, label, or block. The lower courts in the NetChoice cases reached opposite conclusions: the 11th Circuit, considering the Florida law, found a First Amendment right to curate, while the 5th Circuit, considering the Texas law, refused to recognize one.

The Supreme Court appropriately resolved that conflict between the two appellate courts and answered this question yes, treating social media platforms the same as other entities that compile, edit, and curate the speech of others, such as bookstores, newsstands, art galleries, parade organizers, and newspapers. As Justice Kagan wrote for the court’s majority, “the First Amendment offers protection when an entity engaging in expressive activity, including compiling and curating others’ speech, is directed to accommodate messages it would prefer to exclude.”

As the Supreme Court explained,  

Deciding on the third-party speech that will be included in or excluded from a compilation—and then organizing and presenting the included items—is expressive activity of its own. And that activity results in a distinctive expressive product. When the government interferes with such editorial choices—say, by ordering the excluded to be included—it alters the content of the compilation. (It creates a different opinion page or parade, bearing a different message.) And in so doing—in overriding a private party’s expressive choices—the government confronts the First Amendment. 

The court thus chose to apply the line of precedent from Miami Herald Co. v. Tornillo—in which the Supreme Court in 1974 struck down a law that required newspapers that endorsed a candidate for office to provide space for that candidate’s opponents to reply—and rejected the line of precedent from PruneYard Shopping Center v. Robins, a 1980 case in which the Supreme Court held that the First Amendment was not violated by a state court decision requiring, under the California Constitution, a particular shopping center to let a group set up a table and collect signatures when it allowed other groups to do so.

In Moody, the Supreme Court explained that the latter rule applied only to situations in which the host itself was not engaged in an inherently expressive activity. That is, a social media platform deciding what user-generated content to select and recommend to its users is inherently expressive, but a shopping center deciding who gets to table on its private property is not.

So, the Supreme Court said, the 11th Circuit got it right and the 5th Circuit did not. Indeed, the 5th Circuit got it very wrong. In the Supreme Court’s words, the 5th Circuit’s opinion “rests on a serious misunderstanding of First Amendment precedent and principle.” 

This is also the position EFF has taken in courts since at least 2018. As we wrote then, “The law is clear that private entities that operate online platforms for speech and that open those platforms for others to speak enjoy a First Amendment right to edit and curate the content. The Supreme Court has long held that private publishers have a First Amendment right to control the content of their publications. Miami Herald Co. v. Tornillo, 418 U.S. 241, 254-44 (1974).”

This is an important rule in several contexts in addition to the state must-carry laws at issue in these cases. The same rule will apply to laws that restrict the publication and recommendation of lawful speech by social media platforms, or otherwise interfere with content moderation. And it will apply to civil lawsuits brought by those whose content has been removed, demoted, or demonetized. 

Applying this rule, the Supreme Court concluded that Texas’s law could not be constitutionally applied against Facebook’s Newsfeed and YouTube’s homepage. (The Court did not specifically address Florida’s law since it was writing in the context of identifying the 5th Circuit’s errors.)

Which Services Have This First Amendment Right? 

But the Supreme Court’s ruling doesn’t make clear which other functions of which services enjoy this First Amendment right to curate. The Supreme Court specifically analyzed only Facebook’s Newsfeed and YouTube’s homepage. It did not analyze any services offered by other platforms or other functions offered through Facebook, like messaging or event management. 

The opinion does, however, identify some factors that will be helpful in assessing which online services have the right to curate. 

  • Targeting and customizing the publication of user-generated content is protected, whether by algorithm or otherwise, pursuant to the company’s own content rules, guidelines, or standards. The Supreme Court specified that it was not assessing whether the same right would apply to personalized curation decisions made algorithmically solely based on user behavior online without any reference to a site’s own standards or guidelines. 
  • Content moderation such as labeling user posts with warnings, disclaimers, or endorsements for all users, or deletion of posts, again pursuant to a site’s own rules, guidelines, or standards, is protected. 
  • The combination of multifarious voices “to create a distinctive expressive offering” or have a “particular expressive quality” based on a set of beliefs about which voices are appropriate or inappropriate, a process that is often “the product of a wealth of choices,” is protected. 
  • There is no threshold of selectivity a service must surpass to have curatorial freedom, a point we argued in our amicus brief. “That those platforms happily convey the lion’s share of posts submitted to them makes no significant First Amendment difference,” the Supreme Court said. Courts should not focus on the ratio of rejected to accepted posts in deciding whether the right to curate exists: “It is as much an editorial choice to convey all speech except in select categories as to convey only speech within them.” 
  • Curatorial freedom exists even when no one is likely to view a platform’s editorial decisions as their endorsement of the ideas in posts they choose to publish. As the Supreme Court said, “this Court has never hinged a compiler’s First Amendment protection on the risk of misattribution.” 

Considering these factors, the First Amendment right will apply to a wide range of social media services, what the Supreme Court called “Facebook Newsfeed and its ilk” or “its near equivalents.” But its application to messaging, e-commerce, event management, and infrastructure services is less clear.

The Court, Finally, Seems to Understand Content Moderation 

Also noteworthy is that in concluding that content moderation is protected First Amendment activity, the Supreme Court showed that it finally understands how content moderation works. It accurately described the process of how social media platforms decide what any user sees in their feed. For example, it wrote:

In constructing certain feeds, those platforms make choices about what third-party speech to display and how to display it. They include and exclude, organize and prioritize—and in making millions of those decisions each day, produce their own distinctive compilations of expression. 

and 

In the face of that deluge, the major platforms cull and organize uploaded posts in a variety of ways. A user does not see everything—even everything from the people she follows—in reverse-chronological order. The platforms will have removed some content entirely; ranked or otherwise prioritized what remains; and sometimes added warnings or labels. Of particular relevance here, Facebook and YouTube make some of those decisions in conformity with content-moderation policies they call Community Standards and Community Guidelines. Those rules list the subjects or messages the platform prohibits or discourages—say, pornography, hate speech, or misinformation on select topics. The rules thus lead Facebook and YouTube to remove, disfavor, or label various posts based on their content. 

This comes only a year after Justice Kagan, who wrote this opinion, remarked of the Supreme Court during another oral argument that, “These are not, like, the nine greatest experts on the internet.” In hindsight, that statement seems more of a comment on her colleagues’ understanding than her own. 

Importantly, the Court has now moved beyond the idea that content moderation is largely passive and indifferent, a concern that had been raised after the Court used that language to describe the process in last term’s case, Twitter v. Taamneh. It is now clear that in the Taamneh case, the court was referring to Twitter’s passive relationship with ISIS, in that Twitter treated it like any other account holder, a relationship that did not support the terrorism aiding and abetting claims made in that case. 

Supreme Court Suggests Competition Law to Address Undue Market Influences 

Another important element of the Supreme Court’s analysis is its treatment of the posited rationale for both states’ speech restrictions: the need to improve or better balance the marketplace of ideas. Both laws were passed in response to perceived censorship of conservative voices, and the states sought to eliminate this perceived political bias from the platforms’ editorial practices.  

The Supreme Court found that this was not a sufficiently important reason to limit speech, as is required under First Amendment scrutiny: 

However imperfect the private marketplace of ideas, here was a worse proposal—the government itself deciding when speech was imbalanced, and then coercing speakers to provide more of some views or less of others. . . . The government may not, in supposed pursuit of better expressive balance, alter a private speaker’s own editorial choices about the mix of speech it wants to convey. 

But, as EFF has consistently urged in its amicus briefs, in these cases and others, that ruling does not leave states without any way of addressing harms caused by the market dominance of certain services.   

So, it is very heartening to see the Supreme Court point specifically to competition law as an alternative. In the Supreme Court’s words, “Of course, it is critically important to have a well-functioning sphere of expression, in which citizens have access to information from many sources. That is the whole project of the First Amendment. And the government can take varied measures, like enforcing competition laws, to protect that access." 

While not mentioned, we think this same reasoning supports many data privacy laws as well.  

Nevertheless, the Court Did Not Strike Down Either Law

Despite this analysis, the Supreme Court did not strike down either law. Rather, it sent the cases back to the lower courts to decide whether the lawsuits were proper facial challenges to the laws.  

A facial challenge is a lawsuit that argues that a law is unconstitutional in every one of its applications. Outside of the First Amendment, facial challenges are permissible only if there is no possible constitutional application of the law or, as the courts say, the law “lacks a plainly legitimate sweep.” However, in First Amendment cases, a special rule applies: a law may be struck down as overbroad if there are a substantial number of unconstitutional applications relative to the law’s permissible scope. 

To assess whether a facial challenge is proper, a court is thus required to do a three-step analysis. First, the court must identify a law’s “sweep,” that is, to whom and what actions it applies. Second, the court must identify which of those possible applications are unconstitutional. Third, the court must compare the constitutional and unconstitutional applications both quantitatively and qualitatively; principal applications of the law, that is, the ones that seem to be the law’s primary targets, may be given greater weight in that balancing. The court will strike down the law only if the unconstitutional applications are substantially greater than the constitutional ones.  

The Supreme Court found that neither lower court conducted this analysis with respect to either the Florida or Texas law. So, it sent both cases back down so the lower courts could do so. Its First Amendment analysis set forth above was to guide the courts in determining which applications of the laws would be unconstitutional. The Supreme Court found that the Texas law cannot be constitutionally applied to Facebook’s Newsfeed or YouTube’s homepage, but the lower court now needs to complete the analysis. 

While these limitations on facial challenges have been well established for some time, the Supreme Court’s focus on them here was surprising because blatantly unconstitutional laws are challenged facially all the time.  

Here, however, the Supreme Court was reluctant to apply its First Amendment analysis beyond large social media platforms like Facebook’s Newsfeed and its close equivalents. The Court was also unsure whether and how either law would be applied to scores of other online services, such as email, direct messaging, e-commerce, payment apps, ride-hailing apps, and others. It wants the lower courts to look at those possible applications first. 

This decision thus creates a perverse incentive for states to pass laws that by their language broadly cover a wide range of activities, and in doing so make a facial challenge more difficult.

For example, the Florida law defines covered social media platforms as "any information service, system, Internet search engine, or access software provider that does business in this state and provides or enables computer access by multiple users to a computer server, including an Internet platform or a social media site” which has either gross annual revenues of at least $100 million or at least 100 million monthly individual platform participants globally.

Texas HB20, by contrast, defines “social media platform” as “an Internet website or application that is open to the public, allows a user to create an account, and enables users to communicate with other users for the primary purpose of posting information, comments, messages, or images,” and specifically excludes ISPs, email providers, and online services that are not primarily composed of user-generated content and for which the social aspects are incidental to the service’s primary purpose.  

Does this Make the First Amendment Analysis “Dicta”? 

Typically, language in a higher court’s opinion that is necessary to its ultimate ruling is binding on lower courts, while language that is not necessary is merely persuasive “dicta.” Here, the Supreme Court’s ruling was based on the uncertainty about the propriety of the facial challenge, and not the First Amendment issues directly. So, there is some argument that the First Amendment analysis is persuasive but not binding precedent. 

However, the Supreme Court could not responsibly remand the case back to the lower courts to consider the facial challenge question without resolving the split in the circuits, that is, the vastly different ways in which the 5th and 11th Circuits analyzed whether social media content curation is protected by the First Amendment. Without that guidance, neither court would know how to assess whether a particular potential application of the law was constitutional or not. The Supreme Court’s First Amendment analysis thus seems quite necessary and is arguably not dicta. 

 And even if the analysis is merely persuasive, six of the justices found that the editorial and curatorial freedom cases like Miami Herald Co v. Tornillo applied. At a minimum, this signals how they will rule on the issue when it reaches them again. It would be unwise for a lower court to rule otherwise, at least while those six justices remain on the Supreme Court. 

What About the Transparency Mandates? 

Each law also contains several requirements that the covered services publish information about their content moderation practices. Only one type of these provisions was part of the cases before the Supreme Court, a provision from each law that required covered platforms to provide the user with notice and an explanation of certain content moderation decisions.

Heading into the Supreme Court, it was unclear what legal standard applied to these speech mandates. Was it the undue burden standard, from a case called Zauderer v. Office of Disciplinary Counsel, that applies to mandated noncontroversial and factual disclosures in advertisements and other forms of commercial speech, or the strict scrutiny standard that applies to other mandated disclosures?

The Court remanded this question with the rest of the case. But it did imply, without elaboration, that the Zauderer “undue burden” standard each of the lower courts applied was the correct one.

Tidbits From the Concurring Opinions 

All nine justices on the Supreme Court questioned the propriety of the facial challenges to the laws and favored remanding the cases back to the lower courts. So, officially the case was a unanimous 9-0 decision. But there were four separate concurring opinions that revealed some differences in reasoning, with the most significant difference being that Justices Alito, Thomas, and Gorsuch disagreed with the majority’s First Amendment analysis.

Because a majority of the Supreme Court, five justices, fully supported the First Amendment analysis discussed above, the concurrences have no legal effect. There are, however, some interesting tidbits in them that give hints as to how the justices might rule in future cases.

  • Justice Barrett fully joined the majority opinion. She wrote a separate concurrence to emphasize that the First Amendment issues may play out much differently for services other than Facebook’s Newsfeed and YouTube’s homepage. She expressed a special concern for algorithmic decision-making that does not carry out the platform’s editorial policies. She also noted that a platform’s foreign ownership might affect whether the platform has First Amendment rights, a statement that pretty much everyone assumes is directed at TikTok. 
  • Justice Jackson agreed with the majority that the Miami Herald line of cases was the correct precedent and that the 11th Circuit’s interpretation of the law was correct, whereas the 5th Circuit’s was not. But she did not agree with the majority decision to apply the law to Facebook’s Newsfeed and YouTube’s home page. Rather, the lower courts should do that. She emphasized that the law might be applied differently to different functions of a single service.
  • Justice Alito, joined by Thomas and Gorsuch, emphasized his view that the majority’s First Amendment analysis is nonbinding dicta. He criticized the majority for undertaking the analysis on the record before it. But since the majority did so, he expressed his disagreement with it. He disputed that the Miami Herald line of cases was controlling and raised the possibility that the common carrier doctrine, whereby social media would be treated more like telephone companies, was the more appropriate path. He also questioned whether algorithmic moderation reflects any human’s decision-making and whether community moderation models reflect a platform’s editorial decisions or viewpoints, as opposed to the views of its users.
  • Justice Thomas fully agreed with Justice Alito but wrote separately to make two points. First, he repeated a long-standing belief that the Zauderer “undue burden” standard, and indeed the entire commercial speech doctrine, should be abandoned. Second, he endorsed the common carrier doctrine as the correct law. He also expounded on the dangers of facial challenges. Lastly, Justice Thomas seems to have moved off, at least a little, his previous position that social media platforms were largely neutral pipes that insubstantially engaged with user speech.

How the NetChoice opinion will be viewed by lower courts and what influence it will have on state legislatures and Congress, which continue to seek to interfere with content moderation processes, remains to be seen. 

But the Supreme Court has helpfully resolved a central question and provided a First Amendment framework for analyzing the legality of government efforts to dictate what content social media platforms should or should not publish. 


David Greene

Decoding the Courts’ Digital Decisions | EFFector 36.9

1 week 5 days ago

Instead of relaxing for the summer, EFF is in first gear defending your rights online! Catch up on what we're doing with the latest issue of our EFFector newsletter. This time we're sharing updates regarding California law enforcement illegally sharing drivers' location data out-of-state, the heavy burden Congress has to meet to justify a TikTok ban, and the latest Supreme Court ruling regarding platforms' First Amendment right to dictate what speech they host on their platforms.

It can feel overwhelming to stay up to date, but we've got you covered with our EFFector newsletter! You can read the full issue here, or subscribe to get the next one in your inbox automatically! You can also listen to the audio version of the newsletter on the Internet Archive, or by clicking the button below:

LISTEN ON YouTube

EFFECTOR 36.9 - Decoding The Courts' Digital Decisions

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero

34 Years Supporting the Wild and Weird World Online

1 week 6 days ago

Oh the stories I could tell you about EFF's adventures anchoring the digital rights movement. Clandestine whistleblowers. Secret rooms. Encryption cracking. Airships over mass spying facilities. Even appearances from a badger, a purple dinosaur, and an adorable toddler dancing to Prince. EFF emerged as a proud friend to creators and users alike in this wild and weird world online—and we’re still at it.

Thank you for supporting EFF in our mission to ensure that technology supports freedom, justice, and innovation for all people of the world.

Today the Electronic Frontier Foundation commemorates its 34th anniversary of battling for your digital freedom. It’s important to glean wisdom from where we have been, but at EFF we're also strong believers that this storied past helps us build a positive future. Central to our work is supporting the unbounded creativity on the internet and the people who are, even today, imagining what a better world looks like.

That’s why EFF’s lawyers, activists, policy analysts, and technologists have been on your side since 1990. I’ve seen magical things happen when you—not the companies or governments around you—can determine how you engage with technology. When those stars align, social movements can thrive, communities can flourish, and the internet’s creativity blossoms.

The web plays a crucial role in lifting up the causes you believe in, whatever they may be. These transformative moments are only possible when there is ample space for your privacy, your creativity, and your ability to express yourself freely. No matter where threats may arise, know that EFF is by your side armed with unparalleled expertise and the will to defend the public interest.

I am deeply thankful for people like you who support internet freedom and who value EFF’s role in the movement. It’s a team effort.

One More Day for Summer Treats

Leading up to EFF’s anniversary today, we’ve been having some fun with campfire tales from The Encryptids. We reimagined folktales about cryptids, like Bigfoot and the jackalope, from the perspective of creatures who just want what we all want: a privacy-protective, creative web that lifts users up with technology that respects critical rights and freedoms!

As EFF’s 34th birthday gift to you, I invite you to join EFF for just $20 today and you’ll get two limited-time gifts featuring The Encryptids. On top of that, Craig Newmark Philanthropies will match up to $30,000 for your first year as a monthly or annual Sustaining Donor! Many thanks to Craig—founder of Craigslist and a persistent supporter of digital freedom—for making this possible.

Join EFF

For the Future of Privacy, Security, & Free Expression

We at EFF take our anniversary as an opportunity to applaud our partners, celebrate supporters like you, and appreciate our many successes for privacy and free expression. But we never lose sight of the critical job ahead. Thank you for supporting EFF in our mission to ensure that technology supports freedom, justice, and innovation for all people of the world.

Cindy Cohn

To Sixth Circuit: Government Officials Should Not Have Free Rein to Block Critics on Their Social Media Accounts When Used For Governmental Purposes

1 week 6 days ago

Legal intern Danya Hajjaji was the lead author of this post.

The Sixth Circuit must carefully apply a new “state action” test from the U.S. Supreme Court to ensure that public officials who use social media to speak for the government do not have free rein to infringe critics’ First Amendment rights, EFF and the Knight First Amendment Institute at Columbia University said in an amicus brief.

The Sixth Circuit is set to re-decide Lindke v. Freed, a case that was recently remanded from the Supreme Court. The lawsuit arose after Port Huron, Michigan resident Kevin Lindke left critical comments on City Manager James Freed's Facebook page. Freed retaliated by blocking Lindke from being able to view, much less continue to leave critical comments on, Freed’s public profile. The dispute turned on the nature of Freed’s Facebook account, where updates on his government engagements were interwoven with personal posts.

Public officials who use social media as an extension of their office engage in “state action,” which refers to acting on the government’s behalf. They are bound by the First Amendment and generally cannot engage in censorship, especially viewpoint discrimination, by deleting comments or blocking citizens who criticize them. While social media platforms are private corporate entities, government officials who operate interactive online forums to engage in public discussions and share information are bound by the First Amendment.

The Sixth Circuit initially ruled in Freed’s favor, holding that no state action exists due to the prevalence of personal posts on his Facebook page and the lack of government resources, such as staff members or taxpayer dollars, used to operate it.  

The case then went to the U.S. Supreme Court, where EFF and the Knight Institute filed a brief urging the Court to establish a functional test that finds state action when a government official uses a social media account in furtherance of their public duties, even if the account is also sometimes used for personal purposes.

The U.S. Supreme Court crafted a new two-pronged state action test: a government official’s social media activity is state action if 1) the official “possessed actual authority to speak” on the government’s behalf and 2) “purported to exercise that authority” when speaking on social media. As we wrote when the decision came out, this state action test does not go far enough in protecting internet users who interact with public officials online. Nevertheless, the Court has finally provided further guidance on this issue as a result.

Now that the case is back in the Sixth Circuit, EFF and the Knight Institute filed a second brief endorsing a broad construction of the Supreme Court’s state action test.

The brief argues that the test’s “authority” prong requires no more than a showing, either through written law or unwritten custom, that the official had the authority to speak on behalf of the government generally, irrespective of the medium of communication—whether an in-person press conference or social media. It need not be the authority to post on social media in particular.

For high-ranking elected officials (such as presidents, governors, mayors, and legislators) courts should not have a problem finding that they have clear and broad authority to speak on government policies and activities. The same is true for heads of government agencies who are also generally empowered to speak on matters broadly relevant to those agencies. For lower-ranking officials, courts should consider the areas of their expertise and whether their social media posts in question were related to subjects within, as the Supreme Court said, their “bailiwick.”

The brief also argues that the test’s “exercise” prong requires courts to engage in, in the words of the Supreme Court, a “fact-specific undertaking” to determine whether the official was speaking on social media in furtherance of their government duties.

This element is easily met where the social media account is owned, created, or operated by the office or agency itself, rather than the official—for example, the Federal Trade Commission’s @FTC account on X (formerly Twitter).

But when an account is owned by the person and is sometimes used for non-governmental purposes, courts must look to the content of the posts. These include those posts from which the plaintiff’s comments were deleted, or any posts the plaintiff would have wished to see or comment on had the official not blocked them entirely. Former President Donald Trump is a salient example, having routinely used his legacy @realDonaldTrump X account, rather than the government-created and operated account @POTUS, to speak in furtherance of his official duties while president.

However, it is often not easy to differentiate between personal and official speech by looking solely at the posts themselves. For example, a social media post could be either private speech reflecting personal political passions, or it could be speech in furtherance of an official’s duties, or both. If this is the case, courts must consider additional factors when assessing posts made to a mixed-use account. These factors can be an account’s appearance, such as whether government logos were used; whether government resources such as staff or taxpayer funds were used to operate the social media account; and the presence of any clear disclaimers as to the purpose of the account.

EFF and the Knight Institute also encouraged the Sixth Circuit to consider the crucial role social media plays in facilitating public participation in the political process and accountability of government officials and institutions. If the Supreme Court’s test is construed too narrowly, public officials will further circumvent their constitutional obligations by blocking critics or removing any trace of disagreement from any social media accounts that are used to support and perform their official duties.

Social media has given rise to active democratic engagement, while government officials at every level have leveraged this to reach their communities, discuss policy issues, and make important government announcements. Excessively restricting any member of the public’s viewpoints threatens public discourse in spaces government officials have themselves opened as public political forums.

Sophia Cope

Beyond Pride Month: Protections for LGBTQ+ People All Year Round

The end of June concluded LGBTQ+ Pride Month, yet the risks LGBTQ+ people face persist every month of the year. This year, LGBTQ+ Pride took place amid anti-LGBTQ+ violence, harassment, and vandalism; back in May, U.S. officials warned that LGBTQ+ events around the world might be targeted during Pride Month. Unfortunately, that risk is likely to continue for some time. So too will activist actions, community organizing events, and other happenings related to LGBTQ+ liberation. 

We know it feels overwhelming to think about how to keep yourself safe, so here are some quick and easy steps you can take to protect yourself at in-person events, as well as to protect your data—everything from your private messages with friends to your pictures and browsing history.

There is no one-size-fits-all security solution to protect against everything, so it’s important to ask yourself questions about the specific risks you face, balancing their likelihood of occurrence against their impact if they do come about. In some cases, the privacy risks a technology brings may be worth accepting for the convenience it offers. For example, which is the greater risk to you: that phone towers can identify your cell phone’s device ID, or that you don’t have your phone turned on and handy to contact others in the event of danger? Carefully thinking through these types of questions is the first step in keeping yourself safe. Here’s an easy guide on how to do just that.

Tips For In-Person Events And Protests


For your devices:

  • Enable full disk encryption on your device to ensure the files across your entire device cannot be accessed if it is taken by law enforcement or others.
  • Install an encrypted messenger app such as Signal (for iOS or Android) to guarantee that only you and your chosen recipient can see and access your communications. Turn on disappearing messages, and consider shortening the amount of time messages are kept in the app when you are actually attending an event. If instead you have a burner device with you, be sure to save the numbers for emergency contacts.
  • Remove biometric device unlock like fingerprint or FaceID to prevent police officers from physically forcing you to unlock your device with your fingerprint or face. You can password-protect your phone instead.
  • Log out of accounts and uninstall apps or disable app notifications to avoid app activity in precarious legal contexts from being used against you, such as using gay dating apps in places where homosexuality is illegal. 
  • Turn off location services on your devices to prevent your location history from being used to identify your device’s comings and goings. For further protection, you can disable GPS, Bluetooth, Wi-Fi, and phone signals when planning to attend a protest.

For you:

  • Wear a mask during the protest, particularly because gathering in large crowds increases the risk of law enforcement deploying violent tactics like tear gas, as well as the possibility of being targeted through face recognition technology.
  • Tell friends or family when you plan to attend and leave an event so that they can follow up to make sure you are safe if there are arrests, harassment, or violence. 
  • Cover your tattoos to reduce the chance that image recognition technologies, such as face recognition, iris recognition, and tattoo recognition, will identify you.
  • Wear the same clothing as everyone in your group to help hide your identity during the protest and avoid being identified and tracked afterwards. Dressing in dark, monochrome colors will help you blend into a crowd.
  • Say nothing except to assert your rights if you are arrested. Without a warrant, law enforcement cannot compel you to unlock your devices or answer questions, beyond basic identification in some jurisdictions. Refuse consent to a search of your devices, bags, vehicles, or home, and wait until you have a lawyer before speaking.

Given the increase in targeted harassment and vandalism towards LGBTQ+ people, it’s especially important to consider counterprotesters showing up at various events. Since the boundaries between parade and protest might be blurred, you must take precautions. Our general guide for attending a protest covers the basics for protecting your smartphone and laptop, as well as providing guidance on how to communicate and share information responsibly. We also have a handy printable version available here.

LGBTQ+ Pride is about recognition of our differences and claiming honor in our presence in public spaces. Because of this, it’s an odd thing to have to take careful privacy precautions to keep yourself safe during Pride events. Consider it like any other aspect of bodily autonomy and self-determination: only you get to decide what aspects of yourself you share with others. You get to decide how you present to the world and what things you keep private. With a bit of care, you can maintain privacy, safety, and pride in doing so.

Paige Collings

UN Draft Cybercrime Treaty Dangerously Expands State Surveillance Powers Without Robust Privacy, Data Protection Safeguards

This is the third post in a series highlighting flaws in the proposed UN Cybercrime Convention. Check out Part I, our detailed analysis on the criminalization of security research activities, and Part II, an analysis of the human rights safeguards. 

As we near the final negotiating session for the proposed UN Cybercrime Treaty, countries are running out of time to make critical improvements to the draft text. Delegates meeting in New York from July 29 to August 9 must finalize the convention’s text that, if adopted, will expand surveillance laws dramatically and weaken human rights safeguards significantly. This proposed UN treaty is not a cybercrime treaty; it is an expansive global surveillance pact.

Countries that believe in the rule of law must stand up and either defeat the convention or dramatically limit its scope, adhering to the non-negotiable red lines outlined by over 100 NGOs. In an uncommon alliance, civil society and industry agreed earlier this year in a joint letter that the treaty as drafted must be rejected unless amended to protect privacy and data protection rights; none of those amendments have been made in the latest version of the proposed Convention.

The UN Ad Hoc Committee overseeing the talks and preparation of a final text is expected to consider a revised but still-flawed text in its entirety, along with the interpretative notes, during the first week of the session, with a focus on all provisions not yet agreed ad referendum. However, in keeping with the principle in multilateral negotiations that nothing is agreed until everything is agreed, any provisions of the draft that have already been agreed could potentially be reopened. 

An updated draft, dated May 23, 2024, but released on June 14, is far from settled, though. Tremendous disagreements remain among countries on crucial issues, including the scope of cross-border surveillance powers and the protection of human rights. Nevertheless, some countries expect the latest draft to be adopted. 

Earlier drafts included criminalization of a wide range of speech, and a number of non-cyber crimes. Just when we thought Member States had succeeded in removing many of the most concerning crimes from the convention’s text, they could be making a reappearance. The Ad-Hoc Committee Chair’s proposed General Assembly resolution includes a promise of two additional sessions to negotiate an amendment with more crimes: “a draft protocol supplementary to the Convention, addressing, inter alia, additional criminal offenses.”

Let us be clear: Without robust mandatory data protection and privacy safeguards, the updated draft is bad news for people around the world. It will exacerbate existing disparities in human rights protections, potentially allowing increased government overreach, unchecked surveillance, and access to sensitive data that will leave individuals vulnerable to privacy and data protection violations, human rights abuses, or transnational repression. Critical privacy safeguards continue to be woefully inadequate, and there are no explicit data protection principles in the text itself.

In this third post, we explore  problems caused by the expansive definition of “electronic data,” combined with the lack of mandatory privacy and data protection safeguards in the proposed convention. This term has a very broad and vague reach. It appears to include sensitive personal data, like biometric identifiers, which could be accessed by police without adequate protections and under weak privacy safeguards. Worse, it could then be shared with other governments. This poses significant risks for refugees, human rights defenders, and anyone who travels across borders. Instead of this race to the bottom, we call for ironclad privacy and data protection principles in the text to thwart abuses.

Key Surveillance Powers Involving Electronic Data

Chapter IV of the draft, which deals with criminal procedural measures, creates a wide range of government powers to monitor and access people’s digital systems and data, focusing mainly on “subscriber data,” “traffic data,” and “content data.” These powers can be broadly described as forms of communications surveillance or surveillance of communications data. Traditionally, the invasiveness of communications surveillance has been evaluated on the basis of such artificial and formalistic categories.

The revised draft introduces a catch-all category called "Electronic Data" in Article 2(b), defined as "any representation of facts, information, or concepts in a form suitable for processing in an information and communications technology system, including a program suitable to cause an information and communications technology system to perform a function." 

  • Electronic data includes non-communication data, information that hasn’t been communicated to someone else or stored with a service provider. This extremely broad definition includes all forms of digital data, which in other contexts would enjoy specific protections based on the nature or origin of that data.
  • For example, sensitive data such as biometric identifiers should require more stringent processes before being accessed due to the significant risks if collected without proper protections.
  • Additionally, data related to interactions with one’s attorney or doctor is subject to legal privileges. These privileges are designed to ensure that individuals can communicate openly and honestly with their legal and medical professionals without fear that their private information will be exposed or used against them in legal proceedings. However, the current draft does not distinguish between the sensitivity of different types of 'electronic data' or mandate robust privacy or data protection principles accordingly.
  • This definition includes an array of electronic data types that can be processed, stored, or transmitted by ICT systems, ranging from text, images, documents, and biometric identifiers to software programs and databases. Examples of electronic data in emerging technologies include training data sets used for machine learning models, including images, text, and structured data; transaction records and smart contracts stored in blockchain networks; sensor data collected from smart devices, such as temperature readings, motion detection, and environmental monitoring data; and 3D models, spatial data, and user interaction logs used to create immersive experiences. It also includes sensitive information about people that might not always be interpreted as communication data, such as biometric identifiers and neural data.

    Three investigative powers—preservation orders (Article 25), production orders (Article 27), and search and seizure (Article 28)—relate to this broader category of “electronic data.” When data is stored, regardless of how it would have been classified for communications surveillance purposes, the Articles 25, 27, and 28 powers can be used to target it. That includes stored information that would have been regarded as subscriber, traffic, or content data in a communications surveillance context, as well as information (like on-device metadata or recordings, or a diary created locally but never shared) that could not be a target of communications surveillance at all. In other words, the categories “traffic data,” “subscriber information,” and “content data” apply to communications surveillance, but these categories are not used—and no such distinctions are drawn—in the context of the preservation, production, and search and seizure powers of stored electronic data.

    While there’s consensus that communications content deserves significant legal protection because of its capability to reveal sensitive information, it is now clear that other non-communication data, including the many varieties of “electronic data,” may reveal even more sensitive information about an individual than content itself, and thus deserves at least an equivalent level of protection. The processing of this very sensitive electronic data, coupled with the absence of mandatory, robust data protection principles and human rights safeguards in the convention itself, raises significant concerns about overreach, privacy invasion, and the unchecked power it grants to police.

    Today, these types of information might, taken alone or analyzed collectively, reveal a person’s identity, behavior, associations, physical or medical conditions, race, color, sexual orientation, national origins, or viewpoints. Emerging technologies illustrate these risks clearly. For instance, data from wearable health devices can disclose detailed medical conditions and physical activity patterns; smart home devices can track daily routines and behaviors; and social media analytics can infer political views, social connections, and personal preferences based on patterns of interactions, posts, and likes. Other body-worn sensors like those in augmented reality devices may reveal physiological information related to conscious and unconscious emotional reactions to things we see, hear, or do.

    Additionally, geolocation data from smartphones and Internet of Things (IoT) devices can map an individual’s movements over time, potentially identifying their location history, frequented places, and daily commutes, as well as patterns of whom they spent time with. Photo, video surveillance and face recognition data used in public and private spaces can identify individuals and track their interactions, while biometric data from various other sources can confirm identities and provide access to sensitive personal information.

    As a result, all data, including electronic data, should be given the highest protection in the proposed convention to safeguard individual privacy and prevent misuse amid the rise of emerging technologies. But the existing convention text gives individual countries huge discretion in what kind of protection to afford to people’s data when implementing these powers. As elsewhere, we should have mandatory privacy safeguards (not just what domestic law might conclude is “appropriate” under Article 24) providing strong limits and oversight for access to all sorts of sensitive data.

    Finally, the proposed convention’s vaunted “technological neutrality” also means that there is no built-in mechanism for imposing any new safeguards or restrictions on government access to new kinds of sensitive data in the future. If new technologies are more intimately connected with our bodies, brains, and activities than old technologies, or if they mediate more and more of our social or political lives, the proposed convention does not provide any road map to making the data they produce any harder for police to access.

Like Communication Surveillance Powers, Powers Related to "Electronic Data" All Lack Clear and Robust Privacy and Data Protection Safeguards

All three powers referring to “electronic data” share a problem we’ve previously seen in other powers related to communications surveillance: none includes clear, mandatory privacy and data protection safeguards limiting how the powers may be used. All of the investigative powers in Chapter IV of the draft convention rely on national laws to determine whether the restrictions that govern them are “appropriate,” leaving out numerous international law standards that ought to be made explicit.

For the “electronic data” powers discussed below, this is equally alarming because these powers can potentially authorize law enforcement to obtain literally anything stored in any computer or digital storage medium. There are no kinds of data that are inherently off-limits in the text of the convention itself (such as a rule that requests may not compel self-incrimination, or that they must respect privileges such as attorney-client or doctor-patient privilege), nor even any that necessarily require prior judicial authorization to obtain, leaving such decisions to the discretion of national law.

Domestic Expedited Preservation Orders of Electronic Data
  • Article 25 on preservation orders, already agreed ad referendum, is especially problematic. It’s very broad, will result in individuals’ data being preserved and available for use in prosecutions far more than needed, and fails to include necessary safeguards to avoid abuse of power. By allowing law enforcement to demand preservation with no factual justification, it also risks spreading familiar deficiencies in U.S. law worldwide. Article 25 requires each country to create laws or other measures that let authorities quickly preserve specific electronic data, particularly when there are grounds to believe that such data is at risk of being lost or altered. 
  • Article 25(2) ensures that when preservation orders are issued, the person or entity in possession of the data must keep it for up to 90 days, giving authorities enough time to obtain the data through legal channels, while allowing this period to be renewed. There is no specified limit on the number of times an order can be renewed, so it can potentially be reimposed indefinitely. Preservation orders should be issued only when absolutely necessary, but Article 24 does not mention the principle of necessity, and the draft lacks individual notice requirements, explicit grounds requirements, and statistical transparency obligations. The article also fails to limit the number of times preservation orders may be renewed, permitting indefinite data preservation requirements. Each renewal should require a fresh demonstration of necessity and of the factual grounds justifying continued preservation.

  • Article 25(3) also compels states to adopt laws that enable gag orders to accompany preservation orders, prohibiting service providers or individuals from informing users that their data was subject to such an order. The duration of such a gag order is left to domestic legislation. As with all other gag orders, the confidentiality obligation should be subject to time limits and available only to the extent that disclosure would demonstrably threaten an investigation or other vital interest. Further, individuals whose data was preserved should be notified when it is safe to do so without jeopardizing an investigation. Independent oversight bodies must oversee the application of preservation orders.

Indeed, academics such as prominent law professor and former U.S. Department of Justice lawyer Orin S. Kerr have criticized similar U.S. data preservation practices under 18 U.S.C. § 2703(f) for allowing law enforcement agencies to compel internet service providers to retain all contents of an individual's online account without their knowledge, any preliminary suspicion, or judicial oversight. This approach, intended as a temporary measure to secure data until further legal authorization is obtained, lacks the foundational legal scrutiny typically required for searches and seizures under the Fourth Amendment, such as probable cause or reasonable suspicion.

The lack of explicit mandatory safeguards raises similar concerns about Article 25 of the proposed UN convention. Kerr argues that these U.S. practices constitute a "seizure" under the Fourth Amendment, indicating that such actions should be justified by probable cause or, at the very least, reasonable suspicion: criteria conspicuously absent from the current draft of the UN convention.

By drawing on Kerr's analysis, we see a clear warning: without robust safeguards, including an explicit grounds requirement, prior judicial authorization, explicit notification to users, and transparency, preservation orders of electronic data proposed under the draft UN Cybercrime Convention risk replicating the problematic practices of the U.S. on a global scale.

Production Orders of Electronic Data

Article 27(a)’s treatment of “electronic data” in production orders, in light of the draft convention’s broad definition of the term, is especially problematic. This article, which has already been agreed ad referendum, allows production orders to be issued to custodians of electronic data, requiring them to turn over copies of that data. While demanding customer records from a company is a traditional governmental power, this power is dramatically increased in the updated draft.

As we explain above, the extremely broad definition of electronic data, which is often sensitive in nature, raises new and significant privacy and data protection concerns, as it permits authorities to access potentially sensitive information without immediate oversight or prior judicial authorization. The convention should instead require prior judicial authorization before such information can be demanded from the companies that hold it. This ensures that an impartial authority assesses the necessity and proportionality of the data request before it is executed. Without mandatory data protection safeguards for the processing of personal data, law enforcement agencies might collect and use personal data without adequate restrictions, risking the exposure and misuse of personal information.

The draft convention fails to include these essential data protection safeguards. To protect human rights, data should be processed lawfully, fairly, and in a transparent manner in relation to the data subject. Data should be collected for specified, explicit, and legitimate purposes and not further processed in a manner that is incompatible with those purposes. 

Data collected should be adequate, relevant, and limited to what is necessary to the purposes for which they are processed. Authorities should request only the data that is essential for the investigation. Production orders should clearly state the purpose for which the data is being requested. Data should be kept in a format that permits identification of data subjects for no longer than is necessary for the purposes for which the data is processed. None of these principles are present in Article 27(a) and they must be. 

Search and Seizure of Stored Electronic Data 

The draft's Article 28, also agreed ad referendum, gives governments sweeping powers to search and seize electronic data and, without clear, mandatory privacy and data protection safeguards, poses a serious threat to privacy and data protection. Article 24 provides some limitations, but they are vague and insufficient, leaving much to the discretion of national laws and subject to what each country deems “appropriate.” This could lead to significant privacy violations and misuse of sensitive personal information. 

  • Search or Access: Article 28(1) is a search-and-seizure power that applies to any “electronic data” in an information and communications technology (ICT) system (28(1)(a)) or data storage medium (28(1)(b)). Just as with the prior articles, it doesn’t include specific restrictions on these searches and doesn’t limit what may be targeted, for what purposes, or under what conditions. For example, this could allow authorities to access all files and data on a suspect’s personal computer, mobile device, or cloud storage account.

  • Expanding the Search: Article 28(2) allows authorities to search additional systems if they have grounds to believe the data sought is accessible from the initially searched system. Prior judicial authorization should be required so that a judge can assess the necessity and proportionality of the search, but Article 24 only mandates “appropriate” conditions and safeguards, without explicit judicial authorization. In the U.S., for example, this power triggers Fourth Amendment protections, which require particularity in search warrants, specifying the place to be searched and the items to be seized, to prevent unreasonable searches and seizures. Article 28(3) empowers authorities to seize or secure electronic data accessed under the previous provisions, including making and retaining copies of the data, maintaining its integrity, and rendering it inaccessible or removing it from the system.

  • Seizure or Securing Data: Article 28(3)(d) specifically allows authorities to “[r]ender inaccessible or remove those electronic data in the accessed information and communications technology system.” For instance, authorities could copy and store all emails and documents from a suspect’s cloud storage service and then delete them from the original source.

Article 28(3)(d) also raises significant free expression and security concerns. 

  • First, it seems to allow a court order to permanently destroy the only copy of some data, as there is no requirement to make a backup or to be prepared to restore the data later if there is no court process or the person is not convicted of a crime.
  • Second, with regard to publicly accessible data, this is a form of takedown process that implicates free expression concerns. Articles 5 and 24 help mitigate these concerns: by applying their safeguards, they aim to ensure that the implementation of Article 28(3)(d) does not infringe on free expression or result in disproportionate actions. However, given the deficiencies in those articles, it remains to be seen how they will apply in practice. 

As we have written before, Article 24, on conditions and safeguards, fails to protect human rights by deferring safeguards to national law rather than laying out strong protections to match the increased powers that the proposed convention provides. It fails to explicitly include crucial principles like legality, necessity, and non-discrimination. Effective human rights protections require prior judicial approval before surveillance is conducted, transparency about actions taken, notification to users when their data is accessed if it does not jeopardize the investigation, and ways for individuals to challenge abuses. By deferring these safeguards to national law, Article 24 weakens them, as national laws vary greatly and may not always provide the necessary protections. 

A safeguard in a treaty that defers to national laws risks inconsistency and abuse. Strong protections in some nations may be undermined by weaker laws in others, ultimately failing to provide the promised protection.

This creates a race to the bottom in human rights standards, where the weakest domestic laws set the global norm, jeopardizing privacy, data protection, and fundamental freedoms that the United Nations treaty aims to uphold. 

International Cooperation and Electronic Data

The draft UN Cybercrime Convention includes significant provisions for international cooperation, extending the reach of domestic surveillance powers across borders, by one state on behalf of another state. Such powers, if not properly safeguarded, pose substantial risks to privacy and data protection. (While this post focuses on the safeguards for electronic data, equally concerning is the treatment of communication data, particularly subscriber data and traffic data, which also lacks robust protections and brings up concerning risks.)

  • Article 42 (1) (“International cooperation for the purpose of expedited preservation of stored electronic data”) allows one state to ask another to obtain preservation of “electronic data” under the domestic power outlined in Article 25. For example, if Country A is investigating a crime and suspects that relevant data is stored on servers in Country B, Country A can request Country B to preserve this data to prevent it from being deleted or altered before Country A can formally request access to it. Country A may use the 24/7 network as outlined in Article 41(3)(c) to seek information about the data’s location and the service provider. 

    The 24/7 network's role extends significantly beyond merely preserving stored electronic data under Articles 41(3)(c) and (d). The 24/7 network is also empowered to collect evidence when provided with legal information and to locate suspects, as well as to provide electronic data to avert emergencies, if "permitted by the domestic law and practice of the requested Country." Alarmingly, Article 24, which sets out conditions and safeguards, does not apply to the powers exercised by the 24/7 network. This absence of oversight means the network can operate without the necessary checks and balances, potentially leading to abuses of power.

    It is important to note that Article 23(4) regarding the scope of application of the domestic criminal procedural measures (Chapter IV) only authorizes the application of Article 24 safeguards to specific powers within the international cooperation chapter (Chapter V). While one could argue that powers in Chapter V closely matching those in Chapter IV should be subject to the same safeguards, significant powers in Chapter V, such as those related to law enforcement cooperation (Article 47) and the 24/7 network (Article 41), do not specifically reference the corresponding Chapter IV powers. Consequently, they may not be covered by Article 24 safeguards. This leaves critical aspects, such as handling electronic data in an emergency or turning over subscriber information and location, without adequate human rights protections. Furthermore, Article 47 on law enforcement cooperation highlights the extensive sharing and exchange of sensitive data, emphasizing the risks of misuse.

  • Article 44 (1) (“Mutual legal assistance in accessing stored electronic data”) allows one state to ask another “to search or similarly access, seize or similarly secure, and disclose electronic data,” presumably using powers similar to those under Article 28, although that article is not referenced in Article 44. This specific provision, which has not yet been agreed ad referendum, enables comprehensive international cooperation in accessing stored electronic data. For instance, if Country A needs to access emails stored in Country B for an ongoing investigation, it can request Country B to search and provide the necessary data.

Ironclad Data Protection Principles Are Essential for the Proposed Convention

The basic powers for domestic surveillance are not new and are relatively straightforward, but the introduction of an international convention granting authorities new access to sensitive data—especially across borders—demands stringent data protection measures. 

  • Data processing must be lawful and fair.
  • Data should be collected only for specified, explicit, and legitimate purposes and not processed further in a way incompatible with those purposes.
  • Data collection must be minimized so that it’s adequate, relevant, and not excessive in relation to the government’s specific stated purposes.
  • Data should be accurate and kept up to date.
  • Data must not be kept longer than absolutely necessary.
  • Data must be protected against unauthorized access and breaches.
  • Individuals should be able to access information about the processing of their own personal data.
  • Individuals should be informed about how their data is being used, the purpose of processing, and their rights.
  • Data controllers must demonstrate compliance with data protection principles, with accountability mechanisms in place to hold them responsible for violations.

Respecting human rights is not only a legal obligation but also a practical necessity for law enforcement. As the Office of the High Commissioner for Human Rights (OHCHR) said in “Human Rights and Law Enforcement: A Trainer’s Guide on Human Rights for the Police,” law enforcement agencies’ effectiveness is improved when they respect human rights. Moreover, as the Vienna Declaration and Programme of Action note, “The administration of justice, including law enforcement (...) agencies, (...) in full conformity with applicable standards contained in international human rights instruments, [is] essential to the full and non-discriminatory realization of human rights and indispensable to the process of democracy and sustainable development.”

Conclusion

The current draft of the UN Cybercrime Convention is fundamentally flawed. It dangerously expands surveillance powers without robust checks and balances, undermines human rights, and poses significant risks to marginalized communities. The broad and vague definitions of "electronic data," coupled with weak privacy and data protection safeguards, exacerbate these concerns.

Traditional domestic surveillance powers are particularly concerning because they underpin international surveillance cooperation. One country can readily comply with the requests of another, which, if not adequately safeguarded, can lead to widespread government overreach and human rights abuses.

Without stringent data protection principles and robust privacy safeguards, these powers can be misused, threatening human rights defenders, immigrants, refugees, and journalists. We urgently call on all countries committed to the rule of law, social justice, and human rights to unite against this dangerous draft. Whether large or small, developed or developing, every nation has a stake in ensuring that privacy and data protection are not sacrificed. 

Significant amendments must be made to ensure these surveillance powers are exercised responsibly and protect privacy and data protection rights. If these essential changes are not made, countries must reject the proposed convention to prevent it from becoming a tool for human rights violations or transnational repression.

Katitza Rodriguez

Hundreds of Tech Companies Want to Cash In on Homeland Security Funding. Here's Who They Are and What They're Selling.

2 weeks ago

This post was co-written by EFF research intern Andrew Zuker.

Whenever government officials generate fear about the U.S.-Mexico border and immigration, they also generate dollars, hundreds of millions of dollars, for tech conglomerates and start-ups.

The Electronic Frontier Foundation (EFF) today has released the U.S. Border-Homeland Security Technology Dataset, a multilayered dataset of the vendors who supply or market the technology for the U.S. government’s increasingly AI-powered homeland security efforts, including the so-called “virtual wall” of surveillance along the southern border with Mexico.

The four-part dataset includes a hand-curated directory that profiles more than 230 companies that manufacture, market or sell technology products and services, including DNA-testing, ground sensors, and counter-drone systems, to U.S. Department of Homeland Security (DHS) components engaged in border security and immigration enforcement. Vendors on this list are either verified federal contract holders, or have sought to do business with immigration/border authorities or local law enforcement along the border, through activities such as advertising homeland security products on their websites and exhibiting at border security conferences.

It features companies often in the spotlight, including Elbit Systems and Anduril Industries, but also lesser-known contractors, such as surveillance vendors Will-Burt Company and Benchmark. Many companies also supply the U.S. Department of Defense as part of the pipeline from battlefields to the borderlands.

The spreadsheet includes a separate list of 463 companies that have registered for Customs and Border Protection (CBP) and Immigration and Customs Enforcement "Industry Day" events and a roster of 134 members of the DHS-founded Homeland Security Technology Consortium. Researchers will also find a compilation of the annual Top 100 contractors to DHS and its components dating back to 2006.

Download the dataset as an XLSX file through this link or access it as a Google Sheet (Google's Privacy Policy applies).

Border security and surveillance is a rapidly growing industry, fueled by the potential of massive congressional appropriations and accelerated by the promise of artificial intelligence. Of the 233 companies included in our initial survey, two-thirds promoted artificial intelligence, machine learning, or autonomous technology in their public-facing materials.

An HDT Global vehicle at the 2024 Border Security Expo. Source: Dugan Meyer (CC0 1.0 Universal)

Federal spending on homeland security has increased year over year, creating a lucrative market which has attracted investment from big tech and venture capital. Just last month, U.S. Rep. Mark Amodei, Chair of the House Appropriations Homeland Security Subcommittee, defended a funding package that included a "record-level" $300 million in funding for border security technology, including "autonomous surveillance towers; mobile surveillance platforms; counter-tunnel equipment, and a significant investment in counter-drone capability." 

This research project was made possible with internship support from the Heinrich Böll Foundation, in collaboration with EFF and the Reynolds School of Journalism at the University of Nevada, Reno.

Drew Mitnick of the Böll Foundation, who was also involved in building a similar data set of European vendors, says mapping the homeland security technology industry is essential to public debate. "We see the value of the project will be to better inform policymakers about the types of technology deployed, the privacy impact, the companies operating the technology, and the nature of their relationships with the agencies that operate the technology," he said.

Information for this project was aggregated from a number of sources including press releases, business profile databases, vendor websites, social media, flyers and marketing materials, agency websites, defense industry publications, and the work of journalists, advocates, and watchdogs, including the Electronic Frontier Foundation and the student researchers who contribute to EFF’s Atlas of Surveillance. For our vendor profiles, we verified agency spending with each vendor using financial records available online through both the Federal Procurement Data System (FPDS.gov), and USAspending.gov websites.

While many of the companies included have multiple divisions and offer a range of goods and services, this project is focused specifically on vendors who provide and market technology, communications, and IT capabilities for DHS sub-agencies, including CBP, ICE, and Citizenship and Immigration Services (CIS). We have also included companies that sell to other agencies operating at the border, such as the Drug Enforcement Administration and state and local law enforcement agencies engaged in border enforcement.

The data is organized by vendor and includes information on the type of technology or services they offer, the vendor’s participation in specific federal border security initiatives, procurement records, the company's website, parent companies and related subsidiaries, specific surveillance products offered, and which federal agencies they serve. Additional links and supporting documents have been included throughout. We have also provided links to scans of promotional materials distributed at border security conferences.
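For researchers working with the downloaded spreadsheet, a vendor-level table like the one described above can be filtered programmatically. The sketch below uses pandas; note that the column names (`Vendor`, `Agencies`, `Technology`) are illustrative assumptions, not the dataset's actual headers, and the sample rows stand in for the real XLSX.

```python
# Sketch: filtering vendor rows by agency. The column names
# ("Vendor", "Agencies", "Technology") are hypothetical stand-ins
# for whatever headers the released spreadsheet actually uses.
import pandas as pd

def vendors_for_agency(df: pd.DataFrame, agency: str) -> pd.DataFrame:
    """Return rows whose Agencies field mentions the given agency."""
    return df[df["Agencies"].str.contains(agency, case=False, na=False)]

# In practice the data would come from the released XLSX, e.g.:
#   df = pd.read_excel("us-border-homeland-security-technology.xlsx")
# A tiny stand-in sample for illustration:
df = pd.DataFrame({
    "Vendor": ["Anduril Industries", "Elbit Systems", "Will-Burt Company"],
    "Agencies": ["CBP", "CBP; ICE", "DoD"],
    "Technology": ["Autonomous towers", "Surveillance towers", "Masts"],
})

cbp = vendors_for_agency(df, "CBP")
print(sorted(cbp["Vendor"]))
```

Because a vendor can serve several agencies, matching on a substring of the agencies field (rather than exact equality) is the simpler way to answer "who sells to CBP?" style questions.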

This dataset serves as a snapshot of the homeland security industry. While we set out to be exhaustive, we discovered the corporate landscape is murky with acquisitions, mergers, holding companies, and sub-sub-contractors that often intentionally obscure the connections between the various enterprises attempting to rake in lucrative government contracts. We hope that by providing a multilayered view, this data will serve as a definitive resource for journalists, academics, advocates of privacy and human rights, and policymakers. 

This work should be the starting point for further investigation—such as Freedom of Information Act requests and political influence analysis—into the companies and agencies rapidly expanding and automating surveillance and immigration enforcement, whether the aim is to challenge a political narrative or to hold authorities and the industry accountable.

If you use this data in your own research or have information that would further enrich the dataset, we'd love to hear from you at aos@eff.org.

Dave Maass

Craig Newmark Philanthropies Matches EFF's Monthly Donors

2 weeks 4 days ago

Craig Newmark Philanthropies will match up to $30,000 for your entire first year as a new monthly or annual EFF Sustaining Donor! Many thanks to Craig Newmark—founder of craigslist and a persistent supporter of digital freedom—for making this possible. This generous matching challenge bolsters celebrations for EFF's 34th anniversary on July 10 as well as EFF's ongoing summer membership drive: be a member for as little as $20 and get rare gifts featuring The Encryptids (including a Bigfoot enamel pin!).

Since its founding in 1990, the Electronic Frontier Foundation has relied on member support to power its public interest legal work, advocacy, and technology development. More than half of EFF's funding comes from small-dollar donors around the world, and EFF's community of monthly and annual Sustaining Donors plays a crucial role in keeping the organization running strong. Sustaining Donors giving $10 or less each month raised over $400,000 for EFF last year. Every member and every cent counts. This free donation matching offer from Craig Newmark Philanthropies takes EFF supporters' donations even further at a time when many households are especially conscious of their finances.
Over the past several years, grants from Craig Newmark Philanthropies have focused on supporting trustworthy journalism to defend our democracy and hold the powerful accountable, as well as cybersecurity to protect consumers and journalists alike from malware and other dangers online. With donor support from Newmark, EFF built networks to help defend against disinformation warfare, fought online harassment, strengthened ethical journalism, and researched state-sponsored malware, cyber-mercenaries, and consumer spyware. EFF's Threat Lab conducts research on surveillance technologies used to target journalists, communities, activists, and individuals. For example, EFF helped co-found, and continues to provide leadership to, the Coalition Against Stalkerware. EFF also created and updated tools to educate and train working and student journalists alike to keep themselves safe from adversarial attacks. In addition to maintaining our popular Surveillance Self-Defense guide, EFF scaled up the Report Back tool for student journalists, cybersecurity students, and grassroots volunteers to collaboratively study technology in society.

'Fix Copyright' member t-shirts. Creativity is fun for the whole family.

With this generous matching challenge from Craig Newmark Philanthropies, we are pleased to double the impact of our recurring Sustaining Donors and let all digital rights supporters know that we're in this together. EFF is deeply grateful to its passionate members and everyone who values a brighter future for privacy, security, and free expression.

Become a Sustaining Donor

Double Your Impact on Digital Rights today

Aaron Jue
EFF's Deeplinks Blog: Noteworthy news from around the internet